mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


2734 Commits

Author SHA1 Message Date
copilot-swe-agent[bot]
46bc876085 Fix zombie SSH processes by storing the first session in sshClientExternal
The real issue was that sshClientExternal.session was never assigned,
so Wait() always returned nil without waiting for the SSH process to exit.
This caused zombie processes because the process was never reaped.

The fix:
- Store the first session created in NewSession() to s.session
- This allows Wait() to actually wait for the SSH process
- The sync.Once pattern is still useful for thread-safety
- Updated comments to reflect the correct behavior

Fixes the zombie process issue reported in rclone/rclone#8929

Co-authored-by: ncw <536803+ncw@users.noreply.github.com>
2025-10-31 11:54:04 +00:00
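
For reference, a minimal sketch of the sync.Once pattern this series of commits describes, with illustrative names following the commit message rather than rclone's actual code:

```
package sshsketch

import (
	"os/exec"
	"sync"
)

// sshSessionExternal wraps the external ssh process.
type sshSessionExternal struct {
	cmd *exec.Cmd
}

// sshClientExternal stores the first session so Wait can reap the process.
type sshClientExternal struct {
	session  *sshSessionExternal // assigned by the first NewSession call
	waitOnce sync.Once
	waitErr  error
}

// Wait reaps the ssh process exactly once and caches the result for
// subsequent callers; without the stored session it would return nil
// immediately and leave a zombie behind.
func (s *sshClientExternal) Wait() error {
	if s.session == nil {
		return nil // no session was ever created
	}
	s.waitOnce.Do(func() {
		s.waitErr = s.session.cmd.Wait()
	})
	return s.waitErr
}
```
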
copilot-swe-agent[bot]
a58efc1544 Simplify Wait() implementation based on code review feedback
Remove unnecessary exited() check - we should always call cmd.Wait()
in the sync.Once block to properly reap the process and capture its
exit status. The sync.Once ensures it's only called once.

Co-authored-by: ncw <536803+ncw@users.noreply.github.com>
2025-10-30 18:03:48 +00:00
copilot-swe-agent[bot]
4f5efe2871 Fix zombie SSH processes with --sftp-ssh by ensuring Wait() is called only once
The issue was that cmd.Wait() was being called multiple times on the same
process - once in the background goroutine and once in Close(). This could
lead to zombie processes because only the first call to Wait() properly
reaps the process.

The fix uses sync.Once to ensure Wait() is only called once per SSH process,
storing and returning the result on subsequent calls.

Added tests to verify the fix works correctly.

Co-authored-by: ncw <536803+ncw@users.noreply.github.com>
2025-10-30 17:56:13 +00:00
copilot-swe-agent[bot]
6d9f4a3c20 Initial plan 2025-10-30 17:48:34 +00:00
Fawzib Rojas
cc09978b79 Added rclone archive command to create and read archive files
Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2025-10-30 16:20:48 +00:00
Fawzib Rojas
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This is a pass-through implementation which will fail if the
underlying reader does not have the interface.
2025-10-30 16:20:48 +00:00
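
A rough sketch of such a pass-through, assuming the Account wraps an io.Reader in a field called in (illustrative names, byte accounting omitted):

```
package acctsketch

import (
	"errors"
	"io"
)

// Account is a stand-in for accounting.Account; the field name here
// is illustrative only.
type Account struct {
	in io.Reader // the wrapped reader
}

// ReadAt passes through to the underlying reader, failing if it does
// not implement io.ReaderAt.
func (acc *Account) ReadAt(p []byte, off int64) (int, error) {
	ra, ok := acc.in.(io.ReaderAt)
	if !ok {
		return 0, errors.New("underlying reader does not implement io.ReaderAt")
	}
	return ra.ReadAt(p, off)
}
```
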
Nick Craig-Wood
fb30c5f8dd operations: add ReadAt method to ReOpen 2025-10-30 16:20:48 +00:00
Nick Craig-Wood
203df6cc58 fstest: add ResetRun to allow the remote to be reset in tests 2025-10-30 16:20:48 +00:00
Riaz Arbi
459e10d599 gcs: fix --gcs-storage-class to work with server side copy for objects 2025-10-30 15:20:16 +00:00
Lukas Krejci
1ba4fd1d83 ulozto: implement the about functionality 2025-10-30 15:06:37 +00:00
Adam Dinwoodie
77553b8dd5 local: add --skip-specials to ignore special files
Give users a way to explicitly acknowledge that pipes, sockets and block
devices are to be ignored without warnings.

This follows the precedent set in commit 6152bab28 (local: add
--skip-links to suppress symlink warnings, 2017-07-21) for ignoring
warnings about symlinks.
2025-10-29 17:00:25 +00:00
Andrew Ruthven
5420dbbe38 swift: Report disk usage in segment containers
Large objects are split and stored in a _segments container in Swift.
These should be included when reporting on the space used.

Fixes #8857
2025-10-29 16:55:53 +00:00
dulanting
87b71dd6b9 refactor: use strings.Builder to improve performance 2025-10-29 16:48:34 +00:00
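
For context, the strings.Builder pattern this refactor refers to, as a self-contained example rather than the actual changed code:

```
package buildersketch

import "strings"

// joinWithBuilder concatenates parts into a single growable buffer
// instead of allocating a new string on every += append.
func joinWithBuilder(parts []string) string {
	var b strings.Builder
	for i, p := range parts {
		if i > 0 {
			b.WriteByte(',')
		}
		b.WriteString(p)
	}
	return b.String()
}
```
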
Nick Craig-Wood
a0bcdc2638 Archive backend to read archives on cloud storage.
Initial support with Zip and Squashfs archives.

Fixes #8633
See #2815
2025-10-28 11:05:41 +00:00
Nick Craig-Wood
e42fa9f92d vfs: remove unnecessary import in tests to fix import cycles 2025-10-28 11:05:41 +00:00
Nick Craig-Wood
4586104dc7 Add Lakshmi-Surekha to contributors 2025-10-28 11:05:35 +00:00
Nick Craig-Wood
c4c360a285 Add Andrew Gunnerson to contributors 2025-10-28 11:05:35 +00:00
Nick Craig-Wood
ce4860b9b6 Add divinity76 to contributors 2025-10-28 11:05:35 +00:00
Lakshmi-Surekha
ed87f82d21 build: enable support for aix/ppc64
* Adds "aix/ppc64" to the cross-compile target list.
* Includes AIX in the build tag of "metadata_other.go".
* Excludes AIX from the main ncdu build tags.
* Marks AIX as an unsupported platform for ncdu.
* Excludes AIX from the fallback redirect implementation.
* Excludes AIX from unix build tags to avoid undefined unix.WNOHANG.
2025-10-27 13:34:58 +00:00
Andrew Gunnerson
0a82929b94 rc: fix name of "queue" JSON key in docs for vfs/cache
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
2025-10-27 13:28:24 +00:00
divinity76
1e8ee3b813 cmount: windows: improve error message on missing winfsp 2025-10-27 13:22:04 +00:00
Nick Craig-Wood
eaab3f5271 docs: add the Provider to the options examples in the backend docs 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
25b05f1210 Add Aneesh Agrawal to contributors 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
2dc1b07863 Add viocha to contributors 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
49acacec2e Add reddaisyy to contributors 2025-10-26 10:25:12 +00:00
Aneesh Agrawal
70d2fe6568 fs: remove unnecessary Seek call on log file
We were seeing a (non-fatal) error in our logs:
```
Failed to seek log file to end: seek /proc/1/fd/1: illegal seek
```

Because we open the log file with O_APPEND,
we don't need to manually seek to the end.
As https://pkg.go.dev/os#File.Seek confirms,
the behavior of `Seek` is not specified
if the file has been opened with O_APPEND,
so remove the `Seek` call.
2025-10-25 19:38:57 +01:00
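
A minimal illustration of why the Seek is unnecessary (hypothetical helper, not rclone's code):

```
package logsketch

import "os"

// openLogFile opens the log file in append mode; every write then
// lands at the end of the file without any explicit Seek. Seek
// behaviour is unspecified on O_APPEND files, so none is attempted.
func openLogFile(path string) (*os.File, error) {
	return os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0o640)
}
```
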
dougal
f28c83c6de s3: make it easier to add new S3 providers
Before this change, you had to modify a fragile data structure
containing all providers. This often led to things being out of order,
duplicates and conflicts whilst merging, as well as the changes for
one provider being scattered across the file.

After this change, new providers are defined in an easy to edit YAML file,
one per provider.

The config output has been tested before and after for all providers
and any changes are cosmetic only.
2025-10-25 19:37:29 +01:00
dependabot[bot]
2cf44e584c build(deps): bump actions/upload-artifact from 4 to 5
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-25 12:09:16 +02:00
dependabot[bot]
bba9027817 build(deps): bump actions/download-artifact from 5 to 6
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-25 12:09:06 +02:00
dougal
51859af8d9 ftp: fix SOCKS proxy support - fixes #8892 (#8918) 2025-10-24 14:50:13 +01:00
viocha
4f60f8915d webdav: Add Access-Control-Max-Age header for CORS preflight caching - fixes #5078 2025-10-24 10:19:22 +01:00
hunshcn
6663eb346f webdav: use SpaceSepList to parse bearer token command 2025-10-23 19:56:37 +01:00
reddaisyy
1d0e1ea0b5 refactor: use strings.Builder to improve performance 2025-10-23 16:40:30 +01:00
Nick Craig-Wood
71631621c4 docs: re-arrange sponsors page 2025-10-23 14:50:51 +01:00
Nick Craig-Wood
31e904d84c docs: add Spectra Logic as a sponsor 2025-10-23 14:50:51 +01:00
Nick Craig-Wood
30c9843e3d Add Oleksandr Redko to contributors 2025-10-23 14:50:51 +01:00
Oleksandr Redko
c8a834f0e8 build: enable all govet checks (except fieldalignment and shadow) and fix issues. 2025-10-22 18:37:58 +01:00
Nick Craig-Wood
b272c50c4c march: fix --no-traverse being very slow - fixes #8860
Before this change --no-traverse was calling NewObject on directories
(where it would always fail) as well as files. This was very
noticeable when doing syncs with --max-age which were only
transferring a small number of objects. This should have been very
quick, but the NewObject calls for each directory slowed the sync down
a lot.

This change replaces the check to see if the source entry is an
Object, which got missed out from this commit:

88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
2025-10-22 14:14:52 +01:00
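
A hedged sketch of the restored check, using rclone's public fs types but illustrative function names:

```
package marchsketch

import (
	"context"

	"github.com/rclone/rclone/fs"
)

// lookupDst mirrors the check described above: NewObject is only
// attempted for entries that are file objects, never for directories,
// where it would always fail and just waste a round trip.
func lookupDst(ctx context.Context, dst fs.Fs, srcEntry fs.DirEntry) (fs.Object, error) {
	if _, ok := srcEntry.(fs.Object); !ok {
		return nil, nil // a directory: nothing to look up
	}
	return dst.NewObject(ctx, srcEntry.Remote())
}
```
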
Nick Craig-Wood
b8700e8042 Add vastonus to contributors 2025-10-22 14:14:52 +01:00
kingston125
73193b0565 s3: add new FileLu S5 endpoints
Add US, EU, AP, and ME endpoints
2025-10-22 12:25:05 +01:00
vastonus
c4eef3065f build: remove obsolete build tag 2025-10-21 18:56:06 +01:00
Nick Craig-Wood
ba2a642961 azurefiles: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
979c6a573d dropbox: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
bbb866018e webdav: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
7706f02294 pcloud: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
6df7913181 box: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
c079495d1f onedrive: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
3bf1ac5b07 drive: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
091caa34c6 Add hunshcn to contributors 2025-10-21 18:40:23 +01:00
hunshcn
d507e9be39 webdav: optimize bearer token fetching with singleflight 2025-10-21 11:14:37 +01:00
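
A sketch of the singleflight pattern named in the commit subject, with fetchToken standing in for running the configured token command:

```
package webdavsketch

import (
	"context"

	"golang.org/x/sync/singleflight"
)

var tokenGroup singleflight.Group

// fetchToken is a hypothetical stand-in for running the bearer token
// command and capturing its output.
func fetchToken(ctx context.Context) (string, error) { return "token", nil }

// bearerToken collapses concurrent refreshes: only one caller runs the
// token command, the rest wait for and share its result.
func bearerToken(ctx context.Context) (string, error) {
	v, err, _ := tokenGroup.Do("bearer", func() (interface{}, error) {
		return fetchToken(ctx)
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}
```
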
Nick Craig-Wood
40b3251e41 Changelog updates from Version v1.71.2 2025-10-20 16:56:47 +01:00
albertony
484d955ea8 lib/http: cleanup indentation and other whitespace in http serve template 2025-10-20 11:53:55 +01:00
albertony
8fa9f255a0 docs: improve formatting of http serve template parameters 2025-10-20 11:53:55 +01:00
Nick Craig-Wood
e7f11af1ca build: stop markdown linter leaving behind docker containers 2025-10-20 11:51:23 +01:00
Nick Craig-Wood
0b5c4cc442 Add Marco Ferretti to contributors 2025-10-20 11:51:23 +01:00
Marco Ferretti
178ddafdc7 s3: add cubbit as provider 2025-10-20 11:01:34 +01:00
dougal
ad316ec6e3 s3: add servercore as a provider 2025-10-17 16:35:06 +01:00
Nick Craig-Wood
61b022dfc3 docs: update sponsors 2025-10-17 12:04:51 +01:00
Nick Craig-Wood
1903b4c1a2 docs: update sponsor images 2025-10-15 16:33:10 +01:00
Nick Craig-Wood
f7cbcf556f docs: update privacy policy with a section on user data 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
3581e628c0 Add Dulani Woods to contributors 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
62c41bf449 Add spiffytech to contributors 2025-10-14 16:24:07 +01:00
Dulani Woods
c5864e113b gcs: add region us-east5 - fixes #8863 2025-10-14 14:13:56 +01:00
albertony
39259a5bd1 jottacloud: refactor service list from map to slice to get predefined order 2025-10-11 20:57:19 +02:00
albertony
2e376eb3b9 jottacloud: added support for traditional oauth authentication also for the main service
This renames whitelabel authentication to traditional authentication and adds support for
the main Jottacloud service here too, as it can be used as an alternative to the
authentication based on a personal login token for those who prefer it. Documentation
is also adjusted correspondingly, and the authentication section restructured a bit
more, since some of the sections that were under standard authentication in reality
also apply to the traditional authentication.
2025-10-11 20:57:19 +02:00
albertony
de8e9d4693 oauthutil: improved debug logs from token refresh 2025-10-10 20:10:21 +02:00
spiffytech
710cf49bc6 backend: add S3 provider for Hetzner object storage #8183 2025-10-10 18:20:43 +01:00
albertony
8dacac60ea jottacloud: improved token refresh handling
The oauthutil.Renew was initialized early in NewFs, before the first request to the
service where a token is needed. When token is already expired at the time NewFs is
called, the Renew operation would be triggered immediately, only to abort before actually
performing a token refresh, for reason described in debug message:

    Token expired but no uploads in progress - doing nothing

Then later in NewFs, a request to the customer endpoint was made, and since it requires
a valid token it would perform a token refresh after all.

This was not a big problem, but a bit unnecessary, and the debug log messages made it
confusing to understand what rclone was actually doing regarding token refreshing.

If, from a debugger, we forced the Renew operation to perform an actual token refresh,
even with no uploads in progress, it would fail, because it needs the username
which is retrieved from the customer endpoint:

    jottacloud root '': Token refresh failed: read metadata failed: error 400: org.springframework.security.core.userdetails.UsernameNotFoundException: Username not found in url! (Bad Request)

This probably cannot happen in any real situation, but better to make sure it never can.
2025-10-10 18:59:19 +02:00
dougal
3a80d4d4b4 s3: provider reordering
+ fixing some typos
2025-10-10 16:30:03 +01:00
dougal
a531f987a8 index: add missing providers 2025-10-10 16:30:03 +01:00
dougal
e906b8d0c4 docs: add missing ` 2025-10-10 16:30:03 +01:00
dougal
a5932ef91a s3: add rabata as a provider 2025-10-10 16:30:03 +01:00
Nick Craig-Wood
3afa563eaf mega: fix 402 payment required errors - fixes #8758
The underlying library now supports hashcash which should fix this
problem.
2025-10-09 11:58:49 +01:00
Nick Craig-Wood
9d9654b31f Add Andrew Ruthven to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
cfe257f13d Add Microscotch to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
0375efbd35 Add iTrooz to contributors 2025-10-09 11:58:49 +01:00
Andrew Ruthven
cad1954213 build: Bump SwiftAIO container to a newer one
The bouncestorage image hasn't been updated for 4 years and has this
message at the top of the docs:

  This repository is outdated; please use dockerswiftaio/docker-swift instead.

However, dockerswiftaio/docker-swift hasn't been updated for 2 years.
Switch to openstackswift/saio instead, which is getting regular updates.

This requires some minor changes to one test, and how we start the
container.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
604e37caa5 build: Retry stopping the test server
On my system there needs to be a slight pause between stopping and
checking to see if SwiftAIO has stopped. Without the pause the tests fail for
a non-obvious reason.

Instead of using a magic sleep, re-use the retry logic that is used for
starting the test server.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
b249d384b9 build: Increase attempts to connect to test server
On the system I'm testing Swift on it can take ~90 retries for SwiftAIO to
be ready. Extend the retry attempts.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
04e91838db swift: If storage_policy isn't set, use the root container's policy
Ensure that if we need to create a segments container it uses the same
storage policy as the root container.

Fixes #8858
2025-10-06 16:55:48 +01:00
Microscotch
94829aaec5 proton: automated 2FA login with OTP secret key
add OTP secret key to config to generate 2FA code
2025-10-06 16:18:38 +01:00
iTrooz
f574e3395c serve s3: fix log output to remove the EXTRA messages
As shown in

81e56a30c8/log.go (L74)

it seems like the wanted behaviour for merging arguments is that of Println,
which is "put a space between each arg"
2025-10-06 15:17:21 +01:00
albertony
2bc155a96a docs/jottacloud: update description of invalid_grant error according to changes 2025-10-05 11:22:27 +02:00
albertony
adc8ea3427 jottacloud: add support for MediaMarkt Cloud as a whitelabel service
This was requested in issue #8852, after authentication was already fixed for existing
whitelabels.
2025-10-05 00:48:01 +02:00
kingston125
068eea025c s3: add FileLu S5 provider 2025-10-04 15:48:01 +01:00
iTrooz
4510aa679a docs: fix variants of --user-from-header 2025-10-04 08:10:49 +02:00
dougal
79281354c7 vfs: fix chunker integration test 2025-10-03 17:10:24 +01:00
Nick Craig-Wood
f57a178719 test_all: give TestZoho: extra time as it has been timing out 2025-10-03 16:03:29 +01:00
Nick Craig-Wood
44f2e2ed39 test_all: give TestCompressDrive: extra time as it has been timing out 2025-10-03 16:02:07 +01:00
Nick Craig-Wood
13e1752d94 rclone config string: reduce quoting with Human rendering for strings #8859 2025-10-03 15:54:15 +01:00
Nick Craig-Wood
bb82c0e43b Add juejinyuxitu to contributors 2025-10-03 15:54:15 +01:00
albertony
1af7151e73 docs/jottacloud: update documentation with new whitelabel services and changed configuration flow 2025-10-02 19:16:03 +02:00
albertony
fd63478ed6 jottacloud: abort attempts to run unsupported rclone authorize command 2025-10-02 19:16:03 +02:00
albertony
5133b05c74 jottacloud: minor adjustment of texts in config ui 2025-10-02 19:16:03 +02:00
albertony
6ba96ede4b jottacloud: add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service 2025-10-02 19:16:03 +02:00
albertony
2896973964 jottacloud: fix authentication for whitelabel services from Elkjøp subsidiaries
This adds support for them in the whitelabel authentication type, relying on OpenID
Connect, the same as Telia, Tele2 etc already use.

Until recently the Elkjøp subsidiaries still supported only the legacy authentication
type, but that seems to have changed. They no longer support legacy authentication,
which made existing rclone versions incompatible with them.

With this, the legacy authentication has no known uses; however, its implementation
is still kept for now.

Fixes #8852
2025-10-02 19:16:03 +02:00
albertony
be123d85ff jottacloud: refactor config handling of whitelabel services to use openid provider configuration 2025-10-02 19:16:03 +02:00
albertony
b1b9562ab7 jottacloud: remove nil error object from error message 2025-10-02 19:16:03 +02:00
albertony
5146b66569 jottacloud: fix legacy authentication
This fixes the issue where configuration would fail after supplying the password:

    Reveal failed: input too short when revealing password - is it obscured?
2025-10-02 19:16:03 +02:00
albertony
8898372d5a docs: add remote setup page to main docs dropdown 2025-10-02 18:46:16 +02:00
albertony
091fe9e453 docs: update remote setup page 2025-10-02 18:46:16 +02:00
albertony
8fdb68e41a docs: add link from authorize command docs to remote setup docs 2025-10-02 18:46:16 +02:00
albertony
c124aa2ed3 docs: lowercase internet and web browser instead of Internet browser 2025-10-02 18:46:16 +02:00
albertony
54e8bb89f7 docs: use the term backend name instead of fs name for authorize command 2025-10-02 18:46:16 +02:00
Nick Craig-Wood
50c1b594ab add rclone config string for making connection strings #8859 2025-10-02 17:30:08 +01:00
Nick Craig-Wood
72437a9ca2 config: add more human readable configmap.Simple output
Before this, String() quoted every part of the config map even if it
wasn't necessary.

The new Human() method removes the quoting and adds the special case
for "true" values.
2025-10-02 17:30:08 +01:00
dougal
8ed55c61e1 serve http: download folders as zip
Folders can now be downloaded as a zip. You can use --disable-zip
to turn this off.
2025-09-26 15:18:02 +01:00
dougal
bd598c1ceb s3: reorder providers to be in alphabetical order 2025-09-26 15:14:45 +01:00
juejinyuxitu
7e30665102 refactor: use strings.FieldsFuncSeq to reduce memory allocations
Signed-off-by: juejinyuxitu <juejinyuxitu@outlook.com>
2025-09-26 15:12:53 +01:00
Nick Craig-Wood
d44957a09c accounting: add SetMaxCompletedTransfers method to fix bisync race #8815
Before this change bisync adjusted the global MaxCompletedTransfers
variable which caused races.

This adds a SetMaxCompletedTransfers method and uses it in bisync.

The MaxCompletedTransfers global becomes the default. This can be
changed externally if rclone is in use as a library, and the commit
history indicates that MaxCompletedTransfers was added for exactly
this purpose so we try not to break it here.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
37524e2dea accounting: add RemoveDoneTransfers method to fix bisync race #8815
Before this change bisync was adjusting MaxCompletedTransfers in order
to clear the done transfers from the stats.

This wasn't working (because it was only clearing one transfer) and
was part of a race adjusting MaxCompletedTransfers.

This fixes the problem by introducing a new method RemoveDoneTransfers
to clear the done transfers explicitly and calling it in bisync.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
2f6a6c8233 bisync: fix race when CaptureOutput is used concurrently #8815
Before this change CaptureOutput could trip the race detector when
used concurrently, in particular if goroutines using the logging
outlast the return from `fun()`.

This fixes the problem with a mutex.
2025-09-26 14:54:47 +01:00
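
A minimal example of the mutex-protected capture this fix describes (illustrative type, not the actual CaptureOutput code):

```
package capturesketch

import (
	"bytes"
	"sync"
)

// capture is a race-free sink for log output: goroutines that outlive
// the captured function can still write safely.
type capture struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

func (c *capture) Write(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.buf.Write(p)
}

// Bytes returns a copy so callers never share the buffer's backing array.
func (c *capture) Bytes() []byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	return append([]byte(nil), c.buf.Bytes()...)
}
```
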
Nick Craig-Wood
4ad40b6554 build: update all dependencies 2025-09-26 14:53:36 +01:00
Nick Craig-Wood
4f33d64f25 Makefile: remove deprecated go mod usage 2025-09-26 14:53:36 +01:00
Vikas Bhansali
519623d9f1 azurefiles: Fix server side copy not waiting for completion - fixes #8848 2025-09-26 12:41:42 +01:00
Nick Craig-Wood
913278327b Changelog updates from Version v1.71.1 2025-09-24 17:34:26 +01:00
Nick Craig-Wood
a9b05e4c7a test_all: fix branch name in test report 2025-09-24 15:35:09 +01:00
Nick Craig-Wood
5d6d79e7d4 pacer: fix deadlock with --max-connections
If the pacer was used recursively and --max-connections was in use
then it could deadlock if all the connections were in use at the time
of the recursive call (which was likely).

This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.

This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.

The recursion detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. The
benchmark reveals stack inspection takes about 55 ns per stack level so
it is relatively cheap.
2025-09-22 17:39:27 +01:00
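
A sketch of recursion detection by stack inspection as described above; the name matching and fixed frame depth are illustrative assumptions, not rclone's exact implementation:

```
package pacersketch

import (
	"runtime"
	"strings"
)

// calledRecursively reports whether fnName already appears more than
// once on the current goroutine's stack, i.e. we were re-entered.
func calledRecursively(fnName string) bool {
	pcs := make([]uintptr, 64)
	n := runtime.Callers(2, pcs) // skip runtime.Callers and ourselves
	frames := runtime.CallersFrames(pcs[:n])
	seen := 0
	for {
		frame, more := frames.Next()
		if strings.Contains(frame.Function, fnName) {
			seen++
			if seen > 1 {
				return true
			}
		}
		if !more {
			return false
		}
	}
}
```
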
Nick Craig-Wood
11de074cbf Revert "azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors"
This reverts commit 0c1902cc6037d81eaf95e931172879517a25d529.

This turns out not to be sufficient so we need a better approach
2025-09-22 17:39:27 +01:00
Nick Craig-Wood
e9ab177a32 Add Youfu Zhang to contributors 2025-09-22 17:39:27 +01:00
Nick Craig-Wood
f3f4fba98d Add Matt LaPaglia to contributors 2025-09-22 17:39:27 +01:00
Sudipto Baral
03fccdd67b smb: optimize smb mount performance by avoiding stat checks during initialization
add IsPathDir function and tests for trailing slash optimization
2025-09-22 15:33:44 +01:00
Youfu Zhang
231083647e pikpak: fix unnecessary retries by using URL expire parameter - fixes #8601
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.

The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.

Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
2025-09-19 12:46:26 +09:00
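
A hedged sketch of the validity check described above, assuming the download URL carries an expire query parameter holding a Unix timestamp:

```
package pikpaksketch

import (
	"net/url"
	"strconv"
	"time"
)

// linkValid checks the URL's expire query parameter first, falling
// back to the separate expiry field only when the URL cannot be
// parsed. The parameter name "expire" follows the commit text.
func linkValid(rawURL string, fallback time.Time) bool {
	if u, err := url.Parse(rawURL); err == nil {
		if s := u.Query().Get("expire"); s != "" {
			if sec, err := strconv.ParseInt(s, 10, 64); err == nil {
				return time.Now().Before(time.Unix(sec, 0))
			}
		}
	}
	return time.Now().Before(fallback)
}
```
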
dougal
0e203a7546 serve http: fix logging of URL on start 2025-09-18 14:49:58 +01:00
Matt LaPaglia
a7dd787569 docs: fix typo 2025-09-16 14:27:10 +02:00
dougal
689555033e b2: fix 1TB+ uploads
Before this change the minimum chunk size would default to 96M, which
allowed a maximum file size of just below 1TB to be uploaded, due to
b2's 10,000 part rule.

Now the calculated chunk size is used, so the chunk size can be up to
5GB, giving a maximum file size of 50TB.

Fixes #8460
2025-09-15 13:05:20 +01:00
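
A sketch of the chunk size calculation implied above; the constants mirror the numbers in the commit message, the function itself is illustrative:

```
package b2sketch

const (
	maxParts = 10000              // b2's per-file part limit
	minChunk = int64(96) << 20    // 96 MiB, the old default minimum
	maxChunk = int64(5) << 30     // 5 GiB, b2's maximum part size
)

// chunkSize picks the smallest chunk that fits size into maxParts,
// clamped to b2's part size limits. With 5 GiB chunks the ceiling is
// 10,000 * 5 GiB, roughly 50 TB per file.
func chunkSize(size int64) int64 {
	cs := (size + maxParts - 1) / maxParts // ceil(size / maxParts)
	if cs < minChunk {
		cs = minChunk
	}
	if cs > maxChunk {
		cs = maxChunk
	}
	return cs
}
```
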
Nick Craig-Wood
4fc4898287 march: fix deadlock when using --fast-list on syncs - fixes #8811
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.

This fixes the problem by shortening the locking window.
2025-09-15 12:55:29 +01:00
Nick Craig-Wood
b003169088 build: use slices.Contains, added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
babd112665 build: use strings.CutPrefix introduced in go1.20 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
71b9b4ad7a build: use sequence Split introduced in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
4368863fcb build: use "for i := range n", added in go1.22 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
04d49bf0ea build: modernize benchmark usage 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
d7aa37d263 build: in tests use t.Context, added in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
379dffa61c build: replace interface{} by the 'any' type added in go1.18 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
5fd4ece31f build: use the built-in min or max functions added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
fc3f95190b Add russcoss to contributors 2025-09-15 12:45:57 +01:00
russcoss
d6f5652b65 build: remove x := x made unnecessary by the new semantics of loops in go1.22
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-14 15:58:20 +01:00
Nick Craig-Wood
b5cbb7520d lib/pool: fix unreliable TestPoolMaxBufferMemory test
This turned out to be a problem in the tests. The tests used to do

1. allocate
2. increment
3. free
4. decrement

But if one goroutine had just completed 2 and another had just
completed 3 then this can cause the test to register too many
allocations.

This was fixed by doing the test in this order instead:

1. allocate
2. increment
3. decrement
4. free

The 4 operations are atomic.

Fixes #8813
2025-09-12 10:39:32 +01:00
Nick Craig-Wood
a170dfa55b Update S-Pegg1 email 2025-09-12 10:39:32 +01:00
Nick Craig-Wood
1449c5b5ba Add Jean-Christophe Cura to contributors 2025-09-12 10:39:32 +01:00
dougal
35fe609722 pool: fix flaky unreliability test 2025-09-11 18:09:50 +01:00
dougal
cce399515f copyurl: reworked code, added concurrency and tests
- Added Tests
- Fixed file name handling
- Added concurrent downloads
- Limited downloads to --transfers
- Fixes #8127
2025-09-11 13:56:14 +01:00
S-Pegg1
8c5af2f51c copyurl: Added --url to read urls from csv file - #8127 2025-09-11 13:56:14 +01:00
dougal
c639d3656e docs: HDFS: erasure coding limitation #8808 2025-09-10 19:26:55 +01:00
nielash
d9fbbba5c3 fstest: fix slice bounds out of range error when using -remotes local
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.

It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.

This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
2025-09-09 12:09:42 -04:00
nielash
fd87560388 local: fix time zones on tests
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
2025-09-09 12:09:42 -04:00
dougal
d87720a787 s3: added SpectraLogic as a provider 2025-09-09 16:40:10 +01:00
nielash
d541caa52b local: fix rmdir "Access is denied" on windows - fixes #8363
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).

However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.

An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295

This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."

A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash

It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
2025-09-09 11:25:09 -04:00
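
A minimal sketch of the chmod-and-retry workaround (illustrative helper, not the exact rclone code):

```
package rmdirsketch

import "os"

// removeDir retries a failed Remove after clearing the (unhonored but
// obstructive) read-only attribute, as described above. A non-empty
// directory still fails on the retry, as it should.
func removeDir(dir string) error {
	err := os.Remove(dir)
	if err == nil {
		return nil
	}
	if chmodErr := os.Chmod(dir, 0o777); chmodErr != nil {
		return err // report the original failure
	}
	return os.Remove(dir)
}
```
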
nielash
fd1665ae93 bisync: fix error handling for renamed conflicts
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.

On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
2025-09-06 12:43:23 -04:00
Jean-Christophe Cura
457d80e8a9 docs: pcloud: update root_folder_id instructions 2025-09-05 20:50:00 +01:00
Nick Craig-Wood
c5a3e86df8 operations: fix partial name collisions for non --inplace copies
In this commit:

c63f1865f3 operations: copy: generate stable partial suffix

We made the partial suffix for non inplace copies stable. This was a
hash based off the file fingerprint.

However, given a directory of files which have the same fingerprint,
the partial suffix collides. On some backends (eg the local backend)
the fingerprint is just the size and modification time so files with
different contents can collide.

The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully which
probably explains why this bug hasn't been reported.

This fixes the problem by adding the file name to the hash.

It also makes sure the hash is always represented as 8 hex bytes for
consistency.
2025-09-05 16:09:46 +01:00
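
A hedged sketch of the suffix derivation: the hash function here is illustrative, but mixing in the file name and the fixed 8-hex-digit rendering follow the commit message:

```
package partialsketch

import (
	"fmt"
	"hash/crc32"
)

// partialSuffix derives a stable suffix from the fingerprint plus the
// file name, so files sharing a fingerprint no longer collide, and
// always renders it as exactly 8 hex digits.
func partialSuffix(fingerprint, name string) string {
	sum := crc32.ChecksumIEEE([]byte(fingerprint + "\x00" + name))
	return fmt.Sprintf("%08x", sum)
}
```
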
Ed Craig-Wood
4026e8db20 drive: docs: update making your own client ID instructions
update instructions with the most recent changes to google cloud console
2025-09-05 15:30:52 +01:00
dougal
c9ce686231 swift: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
b085598cbc memory: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
bb47dccdeb oracleobjectstorage: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
7a279d2789 B2: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
9bd5df658a azureblob: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
d512e4d566 googlecloudstorage: add ListP interface - Fixes #8763 2025-09-05 15:29:37 +01:00
dependabot[bot]
3dd68c824a build: bump actions/github-script from 7 to 8
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:14:32 +02:00
dependabot[bot]
fbe73c993b build: bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:12:38 +02:00
nielash
d915f75edf bisync: fix chunker integration tests
Before this change, TestChunkerS3: tests were failing because our use of
obj.Remove (for "modtime_write_test") created an unexpected extra transfer.

This is because chunker calls operations.Move for removes, which (per its
function comment) is supposed to be only accounted as a check. But because S3
can Copy but not Move, the move falls back to copy and ends up getting counted
as a transfer anyway.
99e8a63df2/fs/operations/operations.go (L506)
99e8a63df2/fs/operations/copy.go (L381)

This is probably a bug that should get a more proper fix in operations. But in
the meantime, we can get around it by doing our "modtime_write_test" with its
own unique stats group.
2025-09-04 14:38:10 -04:00
nielash
26b629f42f bisync: fix koofr integration tests
Before this change, koofr failed certain bisync tests because it can't set mod
time without deleting and re-uploading. This caused the "nothing to transfer" log
to not get printed where expected (as it is only printed when there are 0
transfers, but koofr requires extra transfers to set modtime.)

This change fixes the issue by ignoring the absence of the "nothing to transfer"
log line on backends that return `fs.ErrorCantSetModTimeWithoutDelete` for
`obj.SetModTime`.
2025-09-04 14:38:10 -04:00
Nick Craig-Wood
ceaac2194c internetarchive: fix server side copy files with spaces
In this commit we broke server side copy for files with spaces

4c5764204d internetarchive: fix server side copy files with &

This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.

Fixes #8754
2025-09-04 10:37:27 +01:00
Nick Craig-Wood
1f14b6aa35 lib/rest: add URLPathEscapeAll to URL escape as many chars as possible 2025-09-04 10:37:27 +01:00
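
A sketch in the spirit of URLPathEscapeAll; the exact set of bytes the real helper leaves unescaped may differ:

```
package restsketch

import (
	"fmt"
	"strings"
)

// urlPathEscapeAll percent-encodes every byte except RFC 3986
// unreserved characters, so characters like & and space can never
// corrupt a copy-source header.
func urlPathEscapeAll(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z', '0' <= c && c <= '9',
			c == '-', c == '.', c == '_', c == '~':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, "%%%02X", c)
		}
	}
	return b.String()
}
```
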
Nick Craig-Wood
dd75af6a18 Add alternate email for dougal to contributors 2025-09-04 10:37:27 +01:00
dougal
99e8a63df2 test speed: add command to test a specified remote's speed
Run a speed test that tries to work within a given time budget, uploading
randomly created files to the remote and then downloading them again.

Fixes #3198
2025-09-03 12:37:52 +01:00
Nick Craig-Wood
0019e18ac3 docs: add link to MEGA S4 from MEGA page 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
218c3bf6e9 Add Robin Rolf to contributors 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
8f9702583d Add anon-pradip to contributors 2025-09-02 17:22:32 +01:00
Robin Rolf
e6578fb5a1 s3: Add Intercolo provider 2025-09-02 16:34:43 +01:00
albertony
fa1d7da272 gendocs: refactor and add logging of skipped command docs 2025-09-02 14:06:31 +02:00
albertony
813708c24d gendocs: ignore missing rclone_mount.md, rclone_nfsmount.md, rclone_serve_nfs.md on windows 2025-09-02 14:06:31 +02:00
nielash
fee4716343 bin: add bisync.md generator
This change adds make_bisync_docs.go step to dynamically update the list of
ignored and failed tests in bisync.md
2025-09-01 14:43:40 -04:00
nielash
6e9a675b3f fstest: refactor to decouple package from implementation 2025-09-01 14:43:40 -04:00
nielash
7f5a444350 gendocs: ignore missing rclone_mount.md on macOS 2025-09-01 14:43:40 -04:00
nielash
d2916ac5c7 bisync: ignore expected "nothing to transfer" differences on tests
The "There was nothing to transfer" log is only printed when the number of
transfers is exactly 0. However, there are a variety of reasons why the transfer
count would be expected to differ between backends. For example, if either side
lacks hashes, the sync may in fact need to transfer, where it would otherwise
skip based on hash or just update modtime. Transfer stats will also differ in
the "src and dst identical but can't set mod time without deleting and re-
uploading" scenario (because the re-upload is a transfer), and where --download-hash
is needed (because calculating the hash requires downloading the file, which is
a transfer).

Before this change, these expected differences would result in erroneous test
failures. This change fixes the issue by ignoring the absence of the "nothing to
transfer" log where it is expected.

Note that this issue did not occur before
9e200531b1
because the number of transfers was not getting reset between test steps,
sometimes resulting in an artificially inflated transfers count.
2025-09-01 14:05:00 -04:00
nielash
3369a15285 bisync: fix TestBisyncConcurrent ignoring -case
Before this change, TestBisyncConcurrent would still run the "basic" test case
if a non-blank -case arg was used to specify a case other than "basic". This
change fixes it by skipping in this scenario.
2025-09-01 14:05:00 -04:00
nielash
58aee30de7 bisync: make number of parallel tests configurable
Example usage:
go test ./cmd/bisync -remote local -race -pcount 10
2025-09-01 14:05:00 -04:00
anon-pradip
ef919241a6 docs: clarify subcommand description in rclone usage 2025-09-01 17:09:51 +01:00
albertony
d5386bb9a7 docs: fix description of regex syntax of name transform 2025-09-01 16:40:14 +01:00
albertony
bf46ea5611 docs: add some more details about supported regex syntax 2025-09-01 16:40:14 +01:00
nielash
b8a379c9c9 makefile: fix lib/transform docs not getting updated
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.

Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the release
making process.

This change fixes that by running it in `make commanddocs`.
2025-09-01 16:39:20 +01:00
Nick Craig-Wood
8c37a9c2ef lib/pool: fix flaky test which was causing timeouts
This puts a limit on the number of allocation failures in a row which
stops the test timing out as the exponential backoffs get very large.
2025-09-01 16:25:31 +01:00
Nick Craig-Wood
963a72ce01 Add dougal to contributors 2025-09-01 16:25:31 +01:00
dougal
a4962e21d1 vfs: fix SIGHUP killing serve instead of flushing directory caches
Before, rclone serve would crash when sent a SIGHUP, which contradicts
the documentation saying it should flush the directory caches.

Moved signal handling from the mount into the vfs layer, which now
handles SIGHUP on all uses of the VFS including mount and serve.

Fixes #8607
2025-09-01 13:15:11 +01:00
nielash
9e200531b1 bisync: use unique stats groups on tests 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
04683f2032 fstest: stop errors in test cleanup changing the global stats
This was causing the concurrent bisync tests to fail every now and again.
2025-08-30 17:46:33 +01:00
Nick Craig-Wood
b41f7994da Add Motte to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
13a5ffe391 Add Claudius Ellsel to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
85deea82e4 build: add local markdown linting to make check 2025-08-28 16:56:40 +01:00
Motte
89a8ea7a91 lsf: add support for unix and unixnano time formats 2025-08-28 16:28:49 +01:00
albertony
c8912eb6a0 docs: remove broken links from rc to commands 2025-08-28 11:52:18 +02:00
albertony
01674949a1 hashsum: changed output format when listing algorithms 2025-08-27 23:36:28 +02:00
Claudius Ellsel
98e1d3ee73 docs: add example of how to add date as suffix 2025-08-27 22:01:28 +02:00
Nick Craig-Wood
50d7a80331 box: fix about after change in API return - fixes #8776 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
bc3e8e1abd Add skbeh to contributors 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
30e80d0716 Add Tilman Vogel to contributors 2025-08-26 18:03:09 +01:00
albertony
f288920696 docs: fix incorrectly escaped windows path separators 2025-08-26 14:29:33 +02:00
albertony
fa2bbd705c build: restore error handling in gendocs 2025-08-26 14:28:05 +02:00
skbeh
43a794860f combine: propagate SlowHash feature 2025-08-26 12:39:32 +01:00
albertony
adfe6b3bad docs/oracleobjectstorage: add introduction before external links and remove broken link 2025-08-26 12:04:00 +02:00
albertony
091ccb649c docs: fix markdown lint issues in backend docs 2025-08-26 12:04:00 +02:00
albertony
2e02d49578 docs: fix markdown lint issues in command docs 2025-08-26 12:04:00 +02:00
albertony
514535ad46 docs: update markdown code block json indent size 2 2025-08-26 12:04:00 +02:00
Tilman Vogel
b010591c96 mount: do not log successful unmount as an error - fixes #8766 2025-08-23 16:30:33 +01:00
Nick Craig-Wood
1aaee9edce Start v1.72.0-DEV development 2025-08-22 17:42:25 +01:00
Nick Craig-Wood
3f0e9f5fca Version v1.71.0 2025-08-22 16:03:16 +01:00
Nick Craig-Wood
cfd0d28742 fs: tls: add --client-pass support for encrypted --client-key files
This also widens the supported types

- Unencrypted PKCS#1 ("BEGIN RSA PRIVATE KEY")
- Unencrypted PKCS#8 ("BEGIN PRIVATE KEY")
- Encrypted PKCS#8 ("BEGIN ENCRYPTED PRIVATE KEY")
- Legacy PEM encryption (e.g., DEK-Info headers), which are automatically detected.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
e7a2b322ec ftp: make TLS config default to global TLS config - Fixes #6671
This allows --ca-cert, --client-cert, --no-check-certificate etc to be
used.

This also allows `override.ca_cert = XXX` to be used in the config
file.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
d3a0805a2b fshttp: return *Transport rather than http.RoundTripper from NewTransport
This allows further customization and reading of the existing config, and
follows the Go recommendation to "accept interfaces, return structs".
2025-08-22 12:19:29 +01:00
nielash
d4edf8ac18 bisync: release from beta
As of v1.71, bisync is officially out of beta.

Some history:

- bisync was born in 2018 as https://github.com/cjnaz/rclonesync-V2
by @cjnaz, written in python.
- In 2021, @ivandeex ported it to go with @cjnaz's support.
https://github.com/rclone/rclone/pull/5164
- It was introduced as an "experimental" feature in v1.58.
6210e22ab5
- In 2023, bisync needed a new maintainer, and @nielash volunteered.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636
- Later in 2023, bisync received a major overhaul and was relabeled "beta"
(from "experimental"). https://github.com/rclone/rclone/pull/7410
- In 2024, integration tests were introduced for bisync (which previously had
only unit tests). https://github.com/rclone/rclone/pull/7693
- As of August 2025, bisync is stable and integration tests are passing on all
of the "flagship" backends.

Development doesn't stop here, of course. But bisync has come a long way since
its "experimental" days, and the "beta" tag is no longer needed.
2025-08-22 12:13:59 +01:00
nielash
87d14b000a bisync: fix markdown formatting issues flagged by linter in docs 2025-08-22 12:13:59 +01:00
nielash
12bded980b bisync: fix --no-slow-hash settings on path2
Before this change, if path2 had slow hashes, and --no-slow-hash or --slow-hash-sync-only
was in use, bisync was erroneously setting path1's hashtype to 'none' instead of
path2's. This change fixes the issue.

See https://forum.rclone.org/t/hashtype-mismatch-with-slow-hash-sync-only-in-onedrive-local-bisync/52138/2?u=nielash
2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6e0e76af9d Add cui to contributors 2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6f9b2f7b9b docs: add code of conduct 2025-08-22 11:42:51 +01:00
cui
f61d79396d lib/mmap: convert to using unsafe.Slice to avoid deprecated reflect.SliceHeader 2025-08-22 00:35:50 +01:00
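
A minimal example of the unsafe.Slice conversion this commit refers to:

```
package mmapsketch

import "unsafe"

// bytesFromPointer converts a raw pointer and length (e.g. from an
// mmap syscall) into a []byte with unsafe.Slice, replacing the
// deprecated reflect.SliceHeader construction.
func bytesFromPointer(p unsafe.Pointer, n int) []byte {
	return unsafe.Slice((*byte)(p), n)
}
```
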
dependabot[bot]
9b22e38450 build: bump golangci/golangci-lint-action from 6 to 8
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-22 00:14:01 +01:00
albertony
9e4fe18830 build: update golangci-lint configuration 2025-08-22 00:14:01 +01:00
albertony
ae5cc1ab37 build: ignore revive lint issue var-naming: avoid meaningless package names 2025-08-22 00:14:01 +01:00
albertony
d4be38ec02 build: fix lint issue: should omit type error from declaration 2025-08-22 00:14:01 +01:00
albertony
115cff3007 Revert "build: downgrade linter to use go1.24 until it is fixed for go1.25"
This reverts commit 8f84f91666.
2025-08-22 00:14:01 +01:00
albertony
70b862f026 build: migrate golangci-lint configuration to v2 format 2025-08-22 00:14:01 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches to using url.QueryEscape which escapes everything
from url.PathEscape which doesn't escape &.

Fixes #8754
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates separate connections for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.

Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
2025-08-16 17:11:54 +01:00
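
A hedged sketch of stripping aws-chunked from a possibly multi-valued Content-Type (illustrative helper, not the actual backend code):

```
package s3sketch

import "strings"

// stripAWSChunked removes "aws-chunked" from a Content-Type value that
// may carry several comma-separated tokens, leaving the rest intact.
func stripAWSChunked(contentType string) string {
	parts := strings.Split(contentType, ",")
	kept := parts[:0]
	for _, p := range parts {
		if strings.TrimSpace(strings.ToLower(p)) != "aws-chunked" {
			kept = append(kept, p)
		}
	}
	return strings.Join(kept, ",")
}
```
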
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
2025-08-16 16:44:35 +01:00
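
A minimal example of reading a line with github.com/peterh/liner, the library this commit switches to (prompt text illustrative):

```
package configsketch

import (
	"fmt"

	"github.com/peterh/liner"
)

// readToken reads one line with liner, which is not subject to the
// terminal's 4095-byte canonical line limit and gives sensible cursor
// editing.
func readToken() (string, error) {
	state := liner.NewLiner()
	defer state.Close()
	token, err := state.Prompt("token> ")
	if err != nil {
		return "", fmt.Errorf("failed to read token: %w", err)
	}
	return token, nil
}
```
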
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc the average loop
would be cancelled at the end of the rc request, leading to the speed
not being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
2025-08-16 16:33:38 +01:00
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't delete symlinks in dest without --links - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
2025-08-14 16:14:59 +01:00
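
A sketch of the reservation idea using golang.org/x/sync/semaphore, with an arbitrary memory limit standing in for --max-buffer-memory:

```
package poolsketch

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// sem counts the memory actually in use; 1 GiB here is an arbitrary
// stand-in for the configured --max-buffer-memory.
var sem = semaphore.NewWeighted(1 << 30)

// getBuffer reserves chunkSize bytes before allocating, so the
// semaphore tracks real memory use and callers cannot deadlock waiting
// for memory that is only sitting in a cache.
func getBuffer(ctx context.Context, chunkSize int64) ([]byte, func(), error) {
	if err := sem.Acquire(ctx, chunkSize); err != nil {
		return nil, nil, err
	}
	buf := make([]byte, chunkSize)
	release := func() { sem.Release(chunkSize) }
	return buf, release, nil
}
```
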
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involved recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
2025-08-13 11:58:35 -04:00
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
Additional to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
2025-08-09 18:12:17 -04:00
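A sketch of the CSV parsing idea using the standard library (not the actual fstest code):

```
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// splitRemotes shows the quoted-CSV behaviour: commas inside a quoted
// value stay part of that value instead of acting as separators.
func splitRemotes(flag string) ([]string, error) {
	return csv.NewReader(strings.NewReader(flag)).Read()
}

func main() {
	remotes, err := splitRemotes(`local,"TestGoogleCloudStorage,directory_markers:"`)
	fmt.Println(remotes, err) // [local TestGoogleCloudStorage,directory_markers:] <nil>
}
```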
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This meant a long wait sometimes for memory and too many connections
open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than chunk size, then deadlock
because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
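The reservation idea can be sketched with a weighted semaphore (rclone's pool has its own implementation; this is only an illustration):

```
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/semaphore"
)

// The total buffer budget, standing in for --max-memory.
var memory = semaphore.NewWeighted(256 << 20) // 256 MiB

// uploadChunk reserves a whole chunk's worth of memory before starting,
// so concurrent multipart transfers can no longer each grab a fraction
// of a chunk and then deadlock waiting for the remainder.
func uploadChunk(ctx context.Context, chunkSize int64) error {
	if err := memory.Acquire(ctx, chunkSize); err != nil {
		return err
	}
	defer memory.Release(chunkSize)
	// ... fill and upload the chunk from the reserved memory ...
	return nil
}

func main() {
	fmt.Println(uploadChunk(context.Background(), 8<<20))
}
```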
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools
which is inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
Nick Craig-Wood
d4e68bf66b docs: add MEGA S4 as a gold sponsor
This also tidies the menu cards.
2025-08-01 12:40:29 +01:00
Nick Craig-Wood
743d160fdd about: fix potential overflow of about in various backends
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.

This patch causes it to clip to the max int64 value instead.
2025-07-31 11:38:51 +01:00
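The clipping behaviour amounts to saturating arithmetic, roughly like this (a hypothetical helper, not the patch itself):

```
package main

import (
	"fmt"
	"math"
)

// addSaturating adds two non-negative sizes, clipping to the maximum
// int64 instead of wrapping on overflow.
func addSaturating(a, b int64) int64 {
	if a > math.MaxInt64-b {
		return math.MaxInt64
	}
	return a + b
}

func main() {
	fmt.Println(addSaturating(math.MaxInt64-1, 10)) // 9223372036854775807
}
```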
Nick Craig-Wood
dc95f36bc1 box: fix about: cannot unmarshal number 1.0e+18 into Go struct field
Before this change rclone about was failing with

    cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64

This is because Box increased Enterprise accounts' user.space_amount from 30PB
to 1e+18 (888.178PB) and returns it as a floating point number, not an integer.

This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
2025-07-31 11:38:51 +01:00
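A sketch of the read-as-float-then-clip idea (struct and field names are illustrative):

```
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

type user struct {
	SpaceAmount float64 `json:"space_amount"`
}

func main() {
	var u user
	_ = json.Unmarshal([]byte(`{"space_amount": 1.0e+18}`), &u)
	total := int64(math.MaxInt64)
	if u.SpaceAmount < math.MaxInt64 { // beyond this, int64 would overflow
		total = int64(u.SpaceAmount)
	}
	fmt.Println(total) // 1000000000000000000
}
```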
Nick Craig-Wood
d3e3af377a oauthutil: fix nil pointer crash when started with expired token 2025-07-31 11:38:51 +01:00
n4n5
db4812fbfa rc: listremotes should send an empty array instead of nil 2025-07-25 15:37:25 +01:00
n4n5
ff9cbab5fa config: add error if RCLONE_CONFIG_PASS was supplied but didn't decrypt config 2025-07-25 11:24:18 +01:00
n4n5
30d8ab5f2f rc: add config/unlock to unlock the config file 2025-07-25 11:19:07 +01:00
Anagh Kumar Baranwal
d71a4195d6 ftp: allow insecure TLS ciphers - fixes #8701
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-07-25 10:30:18 +01:00
zjx20
64ed9b175f s3: set useAlreadyExists to false for Alibaba OSS 2025-07-24 23:22:16 +01:00
Nick Craig-Wood
2b10340e4e docs: update sponsors page 2025-07-24 15:19:15 +01:00
Nick Craig-Wood
3c596f8d11 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:09:51 +01:00
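For illustration, a backend definition could then look something like this (values are hypothetical; `override.http_proxy` is the example used in the following commit):

```
[myremote]
type = s3
provider = AWS
override.http_proxy = http://proxy.example:3128
```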
Nick Craig-Wood
6a9c221841 fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
c49b24ff90 tests: cloudinary: remove test ignore after merging fix from #8707 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
edbbfd1e86 Add Antonin Goude to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
0e0af7499c Add Yu Xin to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
eb4fe3ef4c Add houance to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
70eb0f21d9 Add Florent Vennetier to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
12378bae27 Add n4n5 to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
3c08c4df3a Add Albin Parou to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
897509ae10 Add liubingrun to contributors 2025-07-23 13:12:55 +01:00
nielash
0eb7ee2e16 sync: fix testLoggerVsLsf when backend only reads modtime
There are some backends (like PikPak) that advertise a precision of
fs.ModTimeNotSupported but do actually return a modtime when asked. In the case
of PikPak, it is because the modtime can be read but not written, and is not
considered reliable enough to use for syncing.

Before this change, testLoggerVsLsf got confused in this scenario (expected a
blank modtime but got non-blank). Adding to the confusion, it only reaches this
code if the backend happens to support md5 hashes, and the fsrc and fdst have
the same precision.

This change fixes the issue by setting the modtime string on both sides to
"none" in this scenario. Note that we can't use "" (blank) because
(operations.ListFormat).AddModTime would replace that with "2006-01-02 15:04:05".
2025-07-23 12:49:52 +01:00
nielash
c1ebfb7e04 sync: fix testLoggerVsLsf checking wrong fs
Before this change, two tests (TestServerSideCopyOverSelf and
TestServerSideMoveOverSelf) were checking the wrong Fs in the call to
testLoggerVsLsf. This fixes it by making sure we are testing the same two Fs's
we synced.
2025-07-23 12:49:52 +01:00
Nick Craig-Wood
3d62058693 docs: fix make opengraph tags absolute as not all sites understand relative 2025-07-22 18:00:33 +01:00
albertony
122890799f docs: update contributing guide regarding markdown documentation 2025-07-21 20:23:16 +02:00
albertony
65078d5846 build: add markdown linting to workflow 2025-07-21 20:23:16 +02:00
albertony
92f304902d build: add markdownlint configuration 2025-07-21 20:23:16 +02:00
albertony
45477a6c7d docs: minor format cleanup install.md 2025-07-21 20:23:16 +02:00
albertony
79b549b5a4 docs: fix markdownlint issue md049/emphasis-style 2025-07-21 20:23:16 +02:00
albertony
318880b4ad docs: fix markdownlint issue md036/no-emphasis-as-heading 2025-07-21 20:23:16 +02:00
albertony
75521dcf6e docs: fix markdownlint issue md033/no-inline-html 2025-07-21 20:23:16 +02:00
albertony
8bf20dd545 docs: fix markdownlint issue md025/single-title 2025-07-21 20:23:16 +02:00
albertony
744bce1246 docs: fix markdownlint issue md041/first-line-heading 2025-07-21 20:23:16 +02:00
albertony
c817fc5c57 docs: fix markdownlint issue md001/heading-increment 2025-07-21 20:23:16 +02:00
albertony
0bb4d0a985 docs: fix markdownlint issue md003/heading-style 2025-07-21 20:23:16 +02:00
albertony
a8605abd34 docs: fix markdownlint issue md034/no-bare-urls 2025-07-21 20:23:16 +02:00
albertony
953fb4490b docs: fix markdownlint issue md010/no-hard-tabs 2025-07-21 20:23:16 +02:00
albertony
b17c3d18af docs: fix markdownlint issue md013/line-length 2025-07-21 20:23:16 +02:00
albertony
b45580fa19 docs: fix markdownlint issue md038/no-space-in-code 2025-07-21 20:23:16 +02:00
albertony
1c26f40078 docs: fix markdownlint issue md040/fenced-code-language 2025-07-21 20:23:16 +02:00
albertony
667ad093eb docs: fix markdownlint issue md046/code-block-style 2025-07-21 20:23:16 +02:00
albertony
2c369aedf5 docs: fix markdownlint issue md037/no-space-in-emphasis 2025-07-21 20:23:16 +02:00
albertony
7a0d5ab0b4 docs: fix markdownlint issue md059/descriptive-link-text 2025-07-21 20:23:16 +02:00
albertony
75582b804b docs: fix markdownlint issues md007/ul-indent md004/ul-style 2025-07-21 20:23:16 +02:00
albertony
73452551c6 docs: fix markdownlint issue md012/no-multiple-blanks 2025-07-21 20:23:16 +02:00
albertony
cb3cf5068b docs: fix markdownlint issue md058/blanks-around-tables 2025-07-21 20:23:16 +02:00
albertony
428f518771 docs: fix markdownlint issue md022/blanks-around-headings 2025-07-21 20:23:16 +02:00
albertony
0411a41e11 docs: fix markdownlint issue md031/blanks-around-fences 2025-07-21 20:23:16 +02:00
albertony
07b37bcd12 docs: fix markdownlint issue md032/blanks-around-lists 2025-07-21 20:23:16 +02:00
albertony
0506826ff5 docs: fix markdownlint issue md009/no-trailing-spaces 2025-07-21 20:23:16 +02:00
albertony
4fcd36a5ab docs: fix markdownlint issue md014/commands-show-output 2025-07-21 20:23:16 +02:00
albertony
b2f43f39ba docs: fix markdownlint issues md007/ul-indent md004/ul-style (bin/update-authors.py) 2025-07-21 20:23:16 +02:00
albertony
074d73d12b docs: fix markdownlint issues md007/ul-indent md004/ul-style (authors.md) 2025-07-21 20:23:16 +02:00
Nick Craig-Wood
6457bcf51e docs: add opengraph tags for website social media previews 2025-07-21 17:48:23 +01:00
Nick Craig-Wood
8d12519f3d mount: note that bucket based remotes can use directory markers 2025-07-21 17:48:23 +01:00
wiserain
8a7c401366 pikpak: add docs for methods to clarify name collision handling and restrictions 2025-07-21 17:43:15 +01:00
wiserain
0aae8f346f pikpak: enhance Copy method to handle name collisions and improve error management 2025-07-21 17:43:15 +01:00
wiserain
e991328967 pikpak: enhance Move for better handling of error and name collision 2025-07-21 17:43:15 +01:00
Yu Xin
614d02a673 accounting: fix incorrect stats with --transfers=1 - fixes #8670 2025-07-21 16:54:19 +01:00
houance
018ebdded5 rc: fix operations/check ignoring oneWay parameter
Change the parameter parsed from "oneway" to "oneWay" as a bool value, as the
docs say "oneWay - check one way only, source files must exist on remote"
2025-07-21 16:41:08 +01:00
Florent Vennetier
fc08983d71 s3: add OVHcloud Object Storage provider
Co-Authored-By: Antonin Goude <antonin.goude@ovhcloud.com>
2025-07-21 16:34:53 +01:00
n4n5
7b61084891 docs: rc: fix description of how to read local config 2025-07-21 15:42:37 +01:00
albertony
d1ac6c2fe1 build: limit check for edits of autogenerated files to only commits in a pull request 2025-07-17 16:20:38 +02:00
albertony
da9c99272c build: extend check for edits of autogenerated files to all commits in a pull request 2025-07-17 16:20:38 +02:00
Sudipto Baral
9c7594d78f smb: refresh Kerberos credentials when ccache file changes
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.

Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
2025-07-17 14:34:44 +01:00
Albin Parou
70226cc653 s3: fix multipart upload and server side copy when using bucket policy SSE-C
When uploading or moving data within an s3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadCopyPart` and
`CompleteMultipartUpload`. But currently rclone doesn't forward those
headers to `CompleteMultipartUpload`.

This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
2025-07-17 14:29:31 +01:00
liubingrun
c20e4bd99c backend/s3: Fix memory leak by cloning strings #8683
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.

Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
2025-07-17 12:31:52 +01:00
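The pattern looks roughly like this (a simplified sketch, not the backend's actual parsing; error handling elided):

```
package main

import (
	"fmt"
	"strings"
)

// extractKey: without strings.Clone, the returned key would share
// backing memory with the huge response, keeping all of it alive.
func extractKey(hugeXMLResponse string) string {
	i := strings.Index(hugeXMLResponse, "<Key>") + len("<Key>")
	j := strings.Index(hugeXMLResponse, "</Key>")
	return strings.Clone(hugeXMLResponse[i:j])
}

func main() {
	fmt.Println(extractKey("<Key>file.txt</Key>")) // file.txt
}
```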
Nick Craig-Wood
ccfe153e9b purge: exit with a fatal error if filters are set on rclone purge
Fixes #8491
2025-07-17 11:17:08 +01:00
Nick Craig-Wood
c9730bcaaf docs: Add Backblaze as a Platinum sponsor 2025-07-17 11:17:08 +01:00
Nick Craig-Wood
03dd7486c1 Add Sam Pegg to contributors 2025-07-17 11:17:08 +01:00
raider13209
6249009fdf googlephotos: added warning for Google Photos compatibility - fixes #8672 2025-07-17 10:48:12 +01:00
Nick Craig-Wood
8e2d76459f test: remove flakey TestChunkerChunk50bYandex: test 2025-07-16 16:39:57 +01:00
albertony
5e539c6a72 docs: Consolidate entries for Josh Soref in contributors 2025-07-13 14:05:45 +02:00
albertony
8866112400 docs: remove dead link to example of writing a plugin 2025-07-13 13:51:38 +02:00
Nick Craig-Wood
bfdd5e2c22 filescom: document that hashes need to be enabled - fixes #8674 2025-07-11 14:15:59 +01:00
Nick Craig-Wood
f3f16cd2b9 Add Sudipto Baral to contributors 2025-07-11 14:15:59 +01:00
albertony
d84ea2ec52 docs: fix incorrect json syntax in sample output 2025-07-11 13:49:27 +02:00
albertony
b259241c07 docs: ignore author email piyushgarg80
This should merge the two duplicates:
- piyushgarg <piyushgarg80@gmail.com>
- Piyush <piyushgarg80>
2025-07-11 13:49:06 +02:00
albertony
a8ab0730a7 docs: fix header level for --dump option section 2025-07-10 12:36:10 +02:00
albertony
cef207cf94 docs: use stringArray as parameter type 2025-07-10 12:36:10 +02:00
albertony
e728ea32d1 docs: use consistent markdown heading syntax 2025-07-10 12:36:10 +02:00
Nick Craig-Wood
ccdee0420f imagekit: remove server side Copy method as it was downloading and uploading
The Copy method was downloading the file and uploading it again rather
than server side copying it.

It looks from the docs that the upload process can read a URL so this
might be possible, but the removed code is incorrect.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
8a51e11d23 imagekit: don't low level retry uploads
Low level retrying uploads can lead to partial or empty files being
uploaded as the io.Reader has been read in the first attempt.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
9083f1ff15 imagekit: return correct error when attempting to upload zero length files
Imagekit doesn't support empty files so return correct error for
integration tests to process properly.
2025-07-10 11:29:27 +01:00
Sudipto Baral
2964b1a169 smb: add --smb-kerberos-ccache option to set kerberos ccache per smb backend 2025-07-10 10:17:42 +01:00
Nick Craig-Wood
b6767820de test: fix smb kerberos integration tests
Thanks @sudiptob2 for the tip!
2025-07-09 18:05:29 +01:00
Nick Craig-Wood
821e7fce45 Changelog updates from Version v1.70.3 2025-07-09 16:26:56 +01:00
albertony
b7c6268d3e config: make parsing of duration options consistent
All user visible Durations should be fs.Duration rather than time.Duration.
The suffix is then optional and defaults to s. Additional suffixes d, w, M and
y are supported, in addition to ms, s, m and h - which are the only ones
supported by time.Duration. Absolute times can also be specified, and will be
interpreted as duration relative to now.
2025-07-08 12:08:14 +02:00
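For example, with a flag that takes an fs.Duration such as --min-age, all of these should now parse (assuming the behaviour described above):

```
--min-age 30     30 seconds (a bare number defaults to suffix s)
--min-age 1h30m  90 minutes (standard time.Duration suffixes)
--min-age 2w     2 weeks    (extended suffix)
--min-age 1M     1 month    (extended suffix)
```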
albertony
521d6b88d4 docs: cleanup usage 2025-07-08 11:28:28 +02:00
albertony
cf767b0856 docs: break long lines 2025-07-08 11:28:28 +02:00
albertony
25f7809822 docs: add option value type to header where missing 2025-07-08 11:28:28 +02:00
albertony
74c0b1ea3b docs: mention that identifiers in option values are case insensitive 2025-07-08 11:28:28 +02:00
albertony
f4dcb1e9cf docs: rewrite dump option examples 2025-07-08 11:28:28 +02:00
albertony
90f1d023ff docs: use markdown inline code format for dump option headers that are real examples 2025-07-08 11:28:28 +02:00
albertony
e9c5f2d4e8 docs: change spelling from server side to server-side 2025-07-08 11:28:28 +02:00
albertony
1249e9b5ac docs: cleanup header casing 2025-07-08 11:28:28 +02:00
albertony
d47bc5f6c4 docs: rename OSX to macOS 2025-07-08 11:28:28 +02:00
albertony
efb1794135 docs: fix list and code block issue 2025-07-08 11:28:28 +02:00
albertony
71b98a03a9 docs: consistent markdown list format 2025-07-08 11:28:28 +02:00
albertony
8e625c6593 docs: split section with general description of options with that documenting actual main options 2025-07-08 11:28:28 +02:00
albertony
6b2cd7c631 docs: improve description of option types 2025-07-08 11:28:28 +02:00
albertony
aa4aead63c docs: use space instead of equal sign to separate option and value in headers 2025-07-08 11:28:28 +02:00
albertony
c491d12cd0 docs: use comma to separate short and long option format in headers 2025-07-08 11:28:28 +02:00
albertony
9e4d703a56 docs: remove use of uncommon parameter types 2025-07-08 11:28:28 +02:00
albertony
fc0c0a7771 docs: remove use of parameter type FILE 2025-07-08 11:28:28 +02:00
albertony
d5cc0d83b0 docs: remove use of parameter type DIR 2025-07-08 11:28:28 +02:00
albertony
52762dc866 docs: remove use of parameter type CONFIG_FILE 2025-07-08 11:28:28 +02:00
albertony
3c092cfc17 docs: change use of parameter type N and NUMBER to int consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
7f3f1af541 docs: change use of parameter type TIME to Duration consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
f885c481f0 docs: change use of parameter type BANDWIDTH_SPEC to BwTimetable consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
865d4b2bda docs: change use of parameter type SIZE to SizeSuffix consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
3cb1e65eb6 docs: cleanup markdown header format 2025-07-08 11:28:28 +02:00
albertony
f667346718 docs: explain separated list parameters 2025-07-08 11:28:28 +02:00
Nick Craig-Wood
c6e1f59415 azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-08 07:50:51 +01:00
Nick Craig-Wood
f353c92852 test: remove and ignore failing integration tests
- remove non docker based Swift tests as they are too slow
- ignore TestChunkerChunk50b test which always fails
2025-07-08 07:48:54 +01:00
albertony
1e88c6a18b docs: explain the json log format in more detail 2025-07-07 10:21:13 +02:00
albertony
7242aed1c3 check: fix difference report (was reporting error counts) 2025-07-07 08:16:55 +01:00
albertony
81e63785fe serve sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
c7937f53d4 serve sftp: extract function refactoring for handling hashsum commands 2025-07-07 09:11:29 +02:00
albertony
58fa1c975f sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
da49fc1b6d local: configurable supported hashes 2025-07-07 09:11:29 +02:00
albertony
df9c921dd5 hash: add support for BLAKE3, XXH3, XXH128 2025-07-07 09:11:29 +02:00
Nick Craig-Wood
d9c227eff6 vfs: make integration TestDirEntryModTimeInvalidation test more reliable
Before this change it was not taking the Precision of the remote into account.
2025-07-06 14:35:16 +01:00
Nick Craig-Wood
524c285d88 smb: skip non integration tests when doing integration tests 2025-07-06 13:39:54 +01:00
Nick Craig-Wood
4107246335 seafile: fix integration test errors by adding dot to encoding
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.

This adds EncodeDot to the encoding which encodes "." and ".." names.
2025-07-05 21:27:10 +01:00
Nick Craig-Wood
87a65ec6a5 linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-05 09:26:43 +01:00
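Generically, probing for the redirect first looks like this (a sketch; the URL is a placeholder):

```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Don't follow redirects automatically: we want to read the
	// Location header ourselves instead of uploading the body twice.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Head("https://example.com/upload") // placeholder URL
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusFound { // 302
		fmt.Println("PUT the body to:", resp.Header.Get("Location"))
	}
}
```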
Nick Craig-Wood
c6d0b61982 build: remove integration tests which are too slow
This removes

- TestCompressSwift: - never finishes - too slow - we have TestCompressS3 instead
- TestCryptSwift: - never finishes - too slow - we have TestCryptS3 instead
- TestChunkerChunk50bBox: - often times out - covered by other tests
2025-07-05 09:24:00 +01:00
Nick Craig-Wood
88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-04 14:52:28 +01:00
wiserain
f904378c4d pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-04 15:27:29 +09:00
wiserain
24eb8dcde0 pikpak: rewrite upload to bypass AWS S3 manager - fixes #8629
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.

* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
  including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
  validation).
* Adjusted default `upload_concurrency` from 5 to 4.
2025-07-04 11:25:12 +09:00
Nick Craig-Wood
a97425d9cb test: fix TestSMBKerberos password expiring errors
ERROR(runtime): uncaught exception - kinit for rclone@RCLONE.LOCAL failed (Password has expired)
2025-07-03 19:31:45 +01:00
Nick Craig-Wood
c51878f9a9 Add Vikas Bhansali to contributors 2025-07-03 19:31:45 +01:00
Nick Craig-Wood
92f0a73ac6 Add Ross Smith II to contributors 2025-07-03 19:31:45 +01:00
Vikas Bhansali
163c149f3f azureblob,azurefiles: add support for client assertion based authentication 2025-07-03 09:57:07 +01:00
WeidiDeng
224ca0ae8e webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime got changed to the wrong object by accident

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-03 09:42:15 +01:00
Ross Smith II
5bf6cd1f4f build: set default shell to bash in build.yml
Per https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrunshell
2025-07-02 20:06:57 +01:00
Nick Craig-Wood
555739eec5 docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:37:28 +01:00
Nick Craig-Wood
c036ce90fe Add Davide Bizzarri to contributors 2025-07-02 15:37:28 +01:00
Davide Bizzarri
6163ae7cc7 fix: b2 versionAt read metadata 2025-07-01 17:36:07 +01:00
Nick Craig-Wood
cd950e30cb test: make TestWebdavInfiniteScale startup more reliable
This adds a _connect_delay=5s which allows the server to startup
properly. It also makes sure it stores its config in /tmp rather than
the current working directory.
2025-07-01 17:13:41 +01:00
Nick Craig-Wood
89dfae96ad test_all: add _connect_delay for slow starting servers 2025-07-01 17:13:11 +01:00
Nick Craig-Wood
c0a2d730a6 docs: update link for filescom 2025-06-30 11:09:37 +01:00
Nick Craig-Wood
592407230b test_all: make TestWebdav InfiniteScale integration tests run 2025-06-28 10:57:27 +01:00
Nick Craig-Wood
a7c3ddb482 test_all: make SMB with Kerberos integration tests run properly 2025-06-28 10:56:41 +01:00
Nick Craig-Wood
7a1813c531 test_all: allow an env parameter to set environment variables 2025-06-28 10:55:48 +01:00
Nick Craig-Wood
16e3d1becd Changelog updates from Version v1.70.2 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
c0f6b910ae Add Ali Zein Yousuf to contributors 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
e3bf8dc122 Add $@M@RTH_ to contributors 2025-06-27 14:35:34 +01:00
Ali Zein Yousuf
086a835131 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:22:10 +01:00
$@M@RTH_
d0668de192 s3: add Zata provider 2025-06-26 17:13:19 +01:00
Nick Craig-Wood
4df974ccc4 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
2025-06-25 21:19:17 +01:00
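The shape of the fix, as a sketch (names are illustrative, not the pacer's actual code):

```
package main

import (
	"errors"
	"fmt"
	"time"
)

var errDefault = errors.New("too many requests")

// retryAfterError substitutes a sensible error when the caller passes
// nil, so later fmt.Errorf wrapping can never invoke an Error method
// on a nil value.
func retryAfterError(err error, retryAfter time.Duration) error {
	if err == nil {
		err = errDefault
	}
	return fmt.Errorf("%w: retry after %v", err, retryAfter)
}

func main() {
	fmt.Println(retryAfterError(nil, 5*time.Second))
}
```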
Nick Craig-Wood
a50c903a82 docs: Remove Warp as a sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
97a8092c14 docs: add files.com as a Gold sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
526565b810 docs: add links to SecureBuild docker image 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
64804b81bd Add curlwget to contributors 2025-06-25 16:37:09 +01:00
nielash
e10f516a5e convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
2025-06-25 11:19:50 +01:00
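For context, "unicode-equivalent" means names that differ in bytes but normalize to the same string, e.g. (illustrative):

```
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfc := "caf\u00e9"  // é as one precomposed rune (NFC)
	nfd := "cafe\u0301" // e plus combining acute accent (NFD)
	fmt.Println(nfc == nfd)                                   // false: different bytes
	fmt.Println(norm.NFC.String(nfc) == norm.NFC.String(nfd)) // true: equivalent names
}
```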
nielash
fe62a2bb4e transform: add truncate_keep_extension and truncate_bytes
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).

This change also adds support for _keep_extension when using truncate and
truncate_bytes.
2025-06-25 11:19:50 +01:00
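The bytes-versus-characters distinction, sketched (not the transform library's code):

```
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateBytes cuts s to at most max bytes without splitting a rune.
func truncateBytes(s string, max int) string {
	if len(s) <= max {
		return s
	}
	for max > 0 && !utf8.RuneStart(s[max]) {
		max--
	}
	return s[:max]
}

func main() {
	s := "héllo"                     // 6 bytes but 5 characters
	fmt.Println(truncateBytes(s, 2)) // "h": cutting at byte 2 would split é
}
```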
nielash
d6ecb949ca convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run.)
2025-06-25 11:19:50 +01:00
nielash
a845a96538 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-25 11:19:50 +01:00
curlwget
92f30fda8d docs: fix some function names in comments
Signed-off-by: curlwget <curlwget@icloud.com>
2025-06-24 15:04:45 +01:00
Nick Craig-Wood
559ef2eba8 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-23 17:43:52 +01:00
Nick Craig-Wood
17b25d7ce2 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go which was enabled by the `go 1.22` in `go.mod`
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
2025-06-23 16:39:14 +01:00
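The check amounts to something like this (a sketch; the real fix gates the ModeIrregular part on Windows):

```
package main

import (
	"fmt"
	"os"
)

// skipEntry: symlinks are skipped everywhere, and irregular files are
// skipped too, which is how Go >= 1.22 reports Windows junction points.
func skipEntry(fi os.FileInfo) bool {
	return fi.Mode()&(os.ModeSymlink|os.ModeIrregular) != 0
}

func main() {
	fi, err := os.Lstat(".")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(skipEntry(fi))
}
```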
Nick Craig-Wood
fe3253eefd Add Marvin Rösch to contributors 2025-06-23 16:39:14 +01:00
dependabot[bot]
c38ca6b2d1 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-20 18:27:36 +01:00
Marvin Rösch
5aa9811084 copy,copyto,move,moveto: implement logger flags to store result of sync
This enables the logger flags (`--combined`, `--missing-on-src`
etc.) for the `rclone copy` and `move` commands (as well as their
`copyto` and `moveto` variants) akin to `rclone sync`. Warnings for
unsupported/wonky flag combinations are also printed, e.g. when the
destination is not traversed but `--dest-after` is specified.

- fs/operations: add reusable methods for operation logging
- cmd/sync: use reusable methods for implementing logging in sync command
- cmd: implement logging for copy/copyto/move/moveto commands
- fs/operations/operationsflags: warn about logs in conjunction with --no-traverse
- cmd: add logger docs to copy and move commands

Fixes #8115
2025-06-20 16:55:00 +01:00
Nick Craig-Wood
3cae373064 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-20 15:26:57 +01:00
Nick Craig-Wood
b6b8526fb4 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-20 12:14:46 +01:00
Nick Craig-Wood
6f86143176 Add necaran to contributors 2025-06-20 12:14:46 +01:00
necaran
beffef2882 mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints: https://github.com/meganz/webclient/issues/103
are no longer supported by default since Go 1.22: https://tip.golang.org/doc/go1.22#minor_library_changes
This therefore assigns the cipher suites explicitly to include the one Mega needs.
2025-06-19 18:05:00 +01:00
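Assigning suites explicitly looks roughly like this (the particular suite named below is an assumption for illustration, not necessarily the one Mega needs):

```
package main

import (
	"crypto/tls"
	"net/http"
)

func main() {
	// Go 1.22 stopped offering non-ECDHE suites by default, so list the
	// required legacy suite explicitly alongside the modern defaults.
	// TLS_RSA_WITH_AES_128_GCM_SHA256 is an example of the removed family.
	suites := []uint16{tls.TLS_RSA_WITH_AES_128_GCM_SHA256}
	for _, s := range tls.CipherSuites() {
		suites = append(suites, s.ID) // keep the recommended suites too
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{CipherSuites: suites},
	}}
	_ = client // use for requests to the storage endpoints
}
```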
Nick Craig-Wood
96f5bbcdd7 Changelog updates from Version v1.70.1 2025-06-19 14:25:38 +01:00
Nick Craig-Wood
27ce78bee4 Add jinjingroad to contributors 2025-06-19 14:25:29 +01:00
Ed Craig-Wood
898a59062b docs: DOI grammar error 2025-06-19 08:05:38 +02:00
albertony
c5f55243e1 docs: lib/transform: cleanup formatting 2025-06-19 08:04:46 +02:00
albertony
62a9727ab5 lib/transform: avoid empty charmap entry 2025-06-19 08:04:46 +02:00
jinjingroad
16f1e08b73 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 08:02:51 +02:00
Nick Craig-Wood
4280ec75cc convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-18 14:28:14 +01:00
Ed Craig-Wood
b064cc2116 docs: client-credentials is not supported by all backends 2025-06-18 14:06:57 +01:00
Nick Craig-Wood
f8b50f8d8f Start v1.71.0-DEV development 2025-06-18 11:31:52 +01:00
Nick Craig-Wood
9d464e8e9a Version v1.70.0 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
92fea7eb1b ftp: add --ftp-http-proxy to connect via HTTP CONNECT proxy 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
f226d12a2f pcloud: fix "Access denied. You do not have permissions to perform this operation" on large uploads
The API we use for OpenWriterAt seems to have been disabled at pcloud

    PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1

gives

    {
            "result": 2003,
            "error": "Access denied. You do not have permissions to perform this operation."
    }

So disable OpenWriterAt and hence multipart uploads for the moment.
2025-06-17 12:46:35 +01:00
nielash
359260c49d operations: fix TransformFile when can't server-side copy/move 2025-06-16 17:40:19 +01:00
Nick Craig-Wood
125c8a98bb fstest: fix -verbose flag after logging revamp 2025-06-16 17:39:37 +01:00
Nick Craig-Wood
81fccd9c39 googlecloudstorage: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
1dc3421c7f s3: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
073184132e azureblob: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
476ff65fd7 tests: ignore some more habitually failing tests 2025-06-13 16:25:42 +01:00
Nick Craig-Wood
2847412433 googlephotos: fix typo in error message - Fixes #8600 2025-06-13 14:59:08 +01:00
Nick Craig-Wood
5c81132da0 s3: MEGA S4 support 2025-06-13 11:47:21 +01:00
Nick Craig-Wood
6e1c7b9239 Add Ser-Bul to contributors 2025-06-13 11:47:21 +01:00
nielash
e469c8974c chunker: fix double-transform
Before this change, chunker could double-transform a file under certain
conditions, when --name-transform was in use. This change fixes the issue by
ensuring that --name-transform is disabled during internal file moves.
2025-06-12 18:31:01 +01:00
Ser-Bul
629b427443 docs: mailru: added note about permissions level choice for the apps password 2025-06-12 17:35:42 +01:00
Nick Craig-Wood
108504963c tests: ignore habitually failing tests and backends
This ignores:

- cmd/bisync where it always fails
- cmd/gitannex where it always fails
- sharefile - citrix have refused to give us a testing account
- duplicated sia backend
- iclouddrive - token expiring every 30 days makes it too difficult

It would be nice to fix up these things at some point, but for the
integration test results to be useful they need less noise in them.
2025-06-12 16:24:14 +01:00
Nick Craig-Wood
6aa09fb1d6 docs: link to asciinema rather than including the js 2025-06-12 15:10:56 +01:00
Nick Craig-Wood
bfa6852334 docs: target="_blank" must have rel="noopener" 2025-06-12 15:10:56 +01:00
nielash
63d55d4a39 sync: fix testLoggerVsLsf when dst is local
Before this change, the testLoggerVsLsf function would get confused if given
r.Flocal when expecting r.Fremote. This change makes it agnostic.
2025-06-12 11:11:51 +01:00
kingston125
578ee49550 docs: fix FileLu docs
* Reorder providers alphabetically: moved FileLu above Files.com
* Added FileLu storage to docs.md
2025-06-11 16:25:30 +01:00
Nick Craig-Wood
dda6a863e9 build: update all dependencies
This updates all direct and indirect dependencies

It stops the linter complaining about deprecated azidentiy APIs also.
2025-06-09 14:19:53 +01:00
Nick Craig-Wood
99358cee88 onedrive: fix crash if no metadata was updated
Before this change, rclone would crash if no metadata was updated.
This could happen if `--onedrive-metadata-permissions read` was supplied but
metadata to write was also supplied.

Fixes #8586
2025-06-06 17:40:25 +01:00
Nick Craig-Wood
768a4236e6 Add kingston125 to contributors 2025-06-06 17:40:25 +01:00
Nick Craig-Wood
ffbf002ba8 Add Flora Thiebaut to contributors 2025-06-06 17:40:25 +01:00
kingston125
4a1b5b864c Add FileLu cloud storage backend 2025-06-06 15:15:07 +01:00
Flora Thiebaut
3b3096c940 doi: add new doi backend
Add a new backend to support mounting datasets published with a digital
object identifier (DOI).
2025-06-05 16:40:54 +01:00
Nick Craig-Wood
51fd697c7a build: fix check_autogenerated_edits.py flagging up files that didn't exist
Before this change new backend docs would have their changes flagged
which is undesirable for the first revision.
2025-06-05 16:37:01 +01:00
Nick Craig-Wood
210acb42cd docs: rc: add more info on how to discover _config and _filter parameters #8584 2025-06-05 10:44:33 +01:00
Nick Craig-Wood
6c36615efe s3: add Exaba provider 2025-06-04 17:42:48 +01:00
nielash
d4e2717081 convmv: add convmv command
convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.

For example:

rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT

See help doc for complete details.
2025-06-04 17:24:07 +01:00
nielash
013c563293 lib/transform: add transform library and --name-transform flag
lib/transform adds the transform library, supporting advanced path name
transformations for converting and renaming files and directories by applying
prefixes, suffixes, and other alterations.

It also adds the --name-transform flag for use with sync, copy, and move.

Multiple transformations can be used in sequence, applied in the order they are
specified on the command line.

By default --name-transform will only apply to file names. This means only the leaf
file name will be transformed. However some of the transforms would be better
applied to the whole path or just directories. To choose which part of the
file path is affected, some tags can be added to the --name-transform:

file	Only transform the leaf name of files (DEFAULT)
dir	Only transform name of directories - these may appear anywhere in the path
all	Transform the entire path for files and directories

Example syntax:
--name-transform file,prefix=ABC
--name-transform dir,prefix=DEF
2025-06-04 17:24:07 +01:00
nielash
41a407dcc9 march: split src and dst
splits m.key into separate functions for src and dst to prepare for
lib/transform which will want to do transforms on the src side only.

Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2025-06-04 17:24:07 +01:00
Nick Craig-Wood
cf1f5a7af6 Add ahxxm to contributors 2025-06-04 17:24:07 +01:00
Nick Craig-Wood
597872e5d7 Add Nathanael Demacon to contributors 2025-06-04 17:24:07 +01:00
ahxxm
e2d6872745 b2: use file id from listing when not presented in headers - fixes #8113 2025-06-04 16:23:58 +01:00
Nathanael Demacon
ddebca8d42 fs: fix goroutine leak and improve stats accounting process
This fixes the go routine leak in the stats accounting

- don't start stats average loop when initializing `StatsInfo`
- stop the loop instead of pausing
- use a context instead of a channel
- move `period` variable in `averageValues` struct

Fixes #8570
2025-06-04 14:43:19 +01:00
Nick Craig-Wood
5173ca0454 march: fix syncing with a duplicate file and directory
As part of the out of memory syncing code, in this commit

0148bd4668 march: Implement callback based syncing

we changed the syncing method to use a sorted stream of directory
entries.

Unfortunately as part of this change the sort order of files and
directories became undefined.

This meant that if there existed both a file `foo` and a directory
`foo` in the same directory (as is common on object storage systems)
then these could be matched up incorrectly.

They could be matched up correctly like this

- `foo` (directory) - `foo` (directory)
- `foo` (file)      - `foo` (file)

Or incorrectly like this (one of many possibilities)

- no match          - `foo` (file)
- `foo` (directory) - `foo` (directory)
- `foo` (file)      - no match

Just depending on how the input listings were ordered.

This in turn made container based syncing with a duplicated file and
directory name erratic, deleting files when it shouldn't.

This patch ensures that directories always sync before files by adding
a suffix to the sort key depending on whether the entry was a file or
directory.
2025-06-04 10:54:31 +01:00
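The suffix trick can be sketched like this (illustrative only, not the actual march code):

```
package main

import (
	"fmt"
	"sort"
)

// sortKey appends a marker byte so that, for identical names, the
// directory always sorts - and therefore syncs - before the file.
func sortKey(remote string, isDir bool) string {
	if isDir {
		return remote + "\x00"
	}
	return remote + "\x01"
}

func main() {
	keys := []string{sortKey("foo", false), sortKey("foo", true)}
	sort.Strings(keys)
	fmt.Printf("%q\n", keys) // ["foo\x00" "foo\x01"]: directory first
}
```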
Nick Craig-Wood
ccac9813f3 Add PrathameshLakawade to contributors 2025-06-04 10:54:31 +01:00
Nick Craig-Wood
9133fd03df Add Oleksiy Stashok to contributors 2025-06-04 10:54:31 +01:00
PrathameshLakawade
2e891f4ff8 docs: fix page_facing_up typo next to Lyve Cloud in README.md 2025-06-04 08:25:17 +02:00
PrathameshLakawade
3c66d9ccb1 backend/s3: require custom endpoint for Lyve Cloud v2 support
Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.

This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage

Before: Users selected a fixed endpoint from a list (v1 only)
After:  Users must input their own endpoint (v2-compatible)
2025-06-03 16:19:41 +01:00
Oleksiy Stashok
badf16cc34 backend: skip hash calculation when the hashType is None - fixes #8518
When hashType is None, the `local` backend still ran expensive logic that read the entire file content only to produce an empty string.
2025-06-03 15:40:50 +01:00
Nick Craig-Wood
0ee7cd80f2 azureblob: fix multipart server side copies of 0 sized files
Before this fix multipart server side copies would fail.

This problem was due to an incorrect calculation of the number of
parts to transfer - it calculated 1 part to transfer rather than 0.
2025-06-02 17:22:37 +01:00
Nick Craig-Wood
aeb43c6a4c Add Jeremy Daer to contributors 2025-06-02 17:22:37 +01:00
Nick Craig-Wood
12322a2141 Add wbulot to contributors 2025-06-02 17:22:37 +01:00
Jeremy Daer
4fd5a3d0a2 s3: add Pure Storage FlashBlade provider support (#8575)
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.

Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.

After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:

- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-host style with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features

FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.

Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.

Documentation and test configurations are included.

Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok  	github.com/rclone/rclone/backend/s3	232.444s
```
2025-05-30 12:35:13 +01:00
wbulot
3594330177 backend/gofile: update to use new direct upload endpoint
Update the Gofile backend to use the new direct upload endpoint based on the latest API changes.
The previous implementation used dynamic server selection, but Gofile has simplified their API
to use a single upload endpoint at https://upload.gofile.io/uploadfile.

This change:
- Removes server selection logic and related code
- Simplifies the Fs struct by removing server-related fields
- Updates the upload process to use the direct upload URL
2025-05-27 14:28:25 +01:00
Nick Craig-Wood
15510c66d4 log: add --windows-event-log-level to support Windows Event Log
This provides JSON logs in the Windows Event Log.
2025-05-23 11:27:49 +01:00
Nick Craig-Wood
dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog
This removes logrus which is not developed any more and replaces it
with the new log/slog from the Go standard library.

It implements its own slog Handler which is backwards compatible with
all of rclone's previous logging modes.
2025-05-23 11:27:49 +01:00
Nick Craig-Wood
36b89960e3 Add fhuber to contributors 2025-05-23 11:27:49 +01:00
fhuber
a3f3fc61ee cmd serve s3: fix ListObjectsV2 response
Add a trailing slash to the s3 ListObjectsV2 response because some clients expect a trailing forward slash to distinguish whether the returned object is a directory

Fixes #8464
2025-05-22 22:27:38 +01:00
Nick Craig-Wood
b8fde4fc46 Changelog updates from Version v1.69.3 2025-05-22 09:55:00 +01:00
Nick Craig-Wood
c37fe733df onedrive: re-add --onedrive-upload-cutoff flag
This was removed as part of #1716 to fix rclone uploads taking double
the space.

7f744033d8 onedrive: Removed upload cutoff and always do session uploads

As far as I can see, two revisions are still being created for single
part uploads so the default for this flag is set to -1, off.

However it may be useful for experimentation.

See: #8545
2025-05-15 15:25:10 +01:00
Nick Craig-Wood
b31659904f onedrive: fix "The upload session was not found" errors
Before this change, sometimes, perhaps on heavily loaded sharepoint
servers, uploads would sometimes fail with the error:

{"error":{"code":"itemNotFound","message":"The upload session was not found"}}

This retries the upload after a 5 second delay up to --low-level-retries times.

Fixes #8545
2025-05-15 15:25:10 +01:00
Nick Craig-Wood
ebcf51336e Add Germán Casares to contributors 2025-05-15 15:25:10 +01:00
Nick Craig-Wood
a334bba643 Add Jeff Geerling to contributors 2025-05-15 15:25:10 +01:00
Germán Casares
d4fd93e7f3 googlephotos: update read only and read write scopes to meet Google's requirements.
As part of changes to the Google Photos APIs the scopes rclone used
for accessing Google photos have been removed.

This commit replaces the scopes with updated ones.

These aren't as powerful as the old scopes - this means rclone will
only be able to download photos it uploaded from March 31, 2025.

To use these new scopes do `rclone reconnect yourgooglephotosremote:`

Fixes #8434

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2025-05-12 16:43:23 +01:00
albertony
6644bdba0f build: update github.com/ebitengine/purego to v0.8.3 to fix mac_amd64 build
Fixes #8552
2025-05-12 09:08:15 +02:00
albertony
68a65e878f docs: add hint about config touch and config file not found 2025-05-09 08:30:34 +01:00
Jeff Geerling
7606ad8294 docs: add FAQ for dismissing 'rclone.conf not found'
See: https://forum.rclone.org/t/notice-about-missing-rclone-conf-is-annoying/51116
2025-05-09 08:23:31 +02:00
Nick Craig-Wood
32847e88b4 docs: document how to keep an out of tree backend 2025-05-08 17:16:28 +01:00
Nick Craig-Wood
2e879586bd Add Clément Wehrung to contributors 2025-05-08 17:16:28 +01:00
Clément Wehrung
9d55b2411f iclouddrive: fix panic and files potentially downloaded twice
- Fixing SIGSEGV Fixes #8211
- Removed files potentially downloaded twice
2025-05-07 18:00:33 +01:00
Nick Craig-Wood
fe880c0fac docs: move --max-connections documentation to the correct place 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
b160089be7 Add Ben Boeckel to contributors 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
c2254164f8 Add Tho Neyugn to contributors 2025-05-06 15:23:55 +01:00
Ben Boeckel
e57b94c4ac docs: fix typo in s3/storj docs 2025-05-04 18:57:47 +02:00
Tho Neyugn
3273bf3716 serve s3: remove redundant handler initialization 2025-05-01 16:49:11 +01:00
Nick Craig-Wood
f5501edfcf Changelog updates from Version 1.69.2 2025-05-01 16:43:16 +01:00
Nick Craig-Wood
2404831725 sftp: add --sftp-http-proxy to connect via HTTP CONNECT proxy 2025-04-29 14:16:17 +01:00
Nick Craig-Wood
9f0e237931 Add Jugal Kishore to contributors 2025-04-29 14:16:09 +01:00
Jugal Kishore
f752eaa298 docs: correct SSL docs anchor link from #ssl-tls to #tls-ssl
Fixed the anchor link in the documentation that points to the SSL/TLS section.
This change ensures the link directs correctly to the intended section (#tls-ssl) instead of the incorrect #ssl-tls.

No functional code changes, documentation only.
2025-04-28 10:19:35 +02:00
Nick Craig-Wood
1f8373fae8 drive: metadata: fix error when setting copy-requires-writer-permission on a folder
This appears not to be allowed, so this fixes the problem by ignoring
that metadata for a folder.

Fixes #8517
2025-04-25 12:15:37 +01:00
Nick Craig-Wood
b94f80b9d7 docs: Update contributors
- Add Andrew Kreimer to contributors
- Add Christian Richter to contributors
- Add Ed Craig-Wood to contributors
- Add Klaas Freitag to contributors
- Add Ralf Haferkamp to contributors
2025-04-25 12:14:37 +01:00
dependabot[bot]
5f4e983ccb build: bump golang.org/x/net from 0.36.0 to 0.38.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-22 13:49:35 +02:00
Ed Craig-Wood
28b6f38135 Update README.md
removed warp as project sponsor
2025-04-16 10:42:00 +01:00
Andrew Kreimer
6adb4056bb docs: fix typos via codespell
There are some typos in the changelog.

Fix them via codespell.
2025-04-16 09:24:01 +02:00
Klaas Freitag
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This change adds a new vendor called "infinitescale" to the webdav
backend. It enables the ownCloud Infinite Scale
https://github.com/owncloud/ocis project and implements its specific
chunked uploader following the tus protocol https://tus.io

Signed-off-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Klaas Freitag <klaas.freitag@kiteworks.com>
Co-authored-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Christian Richter <1058116+dragonchaser@users.noreply.github.com>
Co-authored-by: Ralf Haferkamp <r.haferkamp@opencloud.eu>
2025-04-11 12:23:55 +01:00
Nick Craig-Wood
e0c99d6203 onedrive: fix metadata ordering in permissions
Before this change, due to a quirk in Graph, User permissions could be
lost when applying permissions.

Fixes #8465
2025-04-11 10:38:51 +01:00
Nick Craig-Wood
7af1a930b7 Add Ben Alex to contributors 2025-04-11 10:38:51 +01:00
Nick Craig-Wood
6e46ee4ffa Add simwai to contributors 2025-04-11 10:38:51 +01:00
Ben Alex
4f1fc1a84e iclouddrive: fix so created files are writable
At present any created file (eg through the touch command, copy, mount
etc) is read-only in iCloud.

This has been reported by users at
https://forum.rclone.org/t/icloud-and-file-editing-permissions/50659.
2025-04-10 11:38:38 +01:00
simwai
c10b6c5e8e cmd/authorize: show required arguments in help text 2025-04-09 16:30:38 +01:00
yuval-cloudinary
52ff407116 cloudinary: var naming convention - #8416 2025-04-09 15:03:59 +01:00
yuval-cloudinary
078d202f39 cloudinary: automatically add/remove known media files extensions #8416 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
3e105f7e58 Add Markus Gerstel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
02ca72e30c Add Enduriel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
e567c52457 Add huanghaojun to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
10501d0398 Add simonmcnair to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
972ed42661 Add Samantha Bowen to contributors 2025-04-09 15:03:59 +01:00
Markus Gerstel
48802b0a3b s3: documentation regression - fixes #8438
We lost a previous documentation fix (#7077) detailing how to restore
single objects from AWS S3 Glacier.

Also make clearer that rclone provides restore functionality natively.

Co-authored-by: danielkrajnik <dan94kra@gmail.com>
2025-04-09 14:18:18 +01:00
Enduriel
a9c7c493cf hash: add SHA512 support for file hashes 2025-04-09 14:16:22 +01:00
huanghaojun
49f6ed5f5e vfs: fix inefficient directory caching when directory reads are slow
Before this change, when querying directories with large datasets, if
the query duration exceeded the directory cache expiration time, the
cache became invalid by the time results were retrieved. This means
every execution of `_readDir` triggers `_readDirFromEntries`,
resulting in prolonged processing times.

After this change we update the directory time with the time at the
end of the query.
2025-04-09 11:58:09 +01:00
simonmcnair
a5d03e0ada docs: update fuse version in docker docs 2025-04-09 11:54:06 +01:00
Samantha Bowen
199f61cefa fs/config: Read configuration passwords from stdin even when terminated with EOF - fixes #8480 2025-04-09 11:41:10 +01:00
Dan McArdle
fa78c6443e cmd/gitannex: Reject unknown layout modes in INITREMOTE
This is a "fail fast" improvement. Now, we will reject invalid layout
modes at setup time, rather than deferring failure until the user
attempts a transfer.
2025-04-09 11:27:44 +01:00
Dan McArdle
52e2e4b84c cmd/gitannex: Add configparse.go and refactor
This is a behavior-preserving refactor. I'm mostly just moving the code
that defines and parses configs (e.g. "rcloneremotename") into a new
source file. This lets us focus more on implementing the text protocol
in gitannex.go.
2025-04-09 11:27:44 +01:00
Dan McArdle
1c933372fe cmd/gitannex: Permit remotes with options
It looks like commit 2a1e28f5f5 did not
fix the errors in the integration tests that I hoped it would. Upon
further inspection, I noticed that I forgot that remotes can have
options just like backends.

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Specifically, I believe that TestGitAnnexFstestBackendCases/HandlesInit
should no longer fail on the Azure backend with "INITREMOTE-FAILURE
remote does not exist: TestAzureBlob,directory_markers:".

Issue #7984
2025-04-09 11:27:44 +01:00
Nick Craig-Wood
f5dfe3f5a6 serve ftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
5702b7578c serve sftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
703788b40e serve restic: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
aef9c2117e serve s3: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
2a42d95385 serve dlna: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
e37775bb41 serve webdav: add serve rc interface - fixes #4505 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
780f4040ea serve http: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
0b7be6ffb9 serve nfs: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
4d9a165e56 serve: Add rc control for serve commands #4505
This adds the framework for serving. The individual servers will be
added in separate commits.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
21e5fa192a configstruct: add SetAny to parse config from the rc
Now that we have unified the config, we can make a much more
convenient rc interface which mirrors the command line exactly, rather
than using the structure of the internal Go structs.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cf571ad661 rc: In options/info make FieldName contain a "." if it should be nested
Before this change it would output "FieldName": "ListenAddr" where the
value actually needs to be set in a sub object "HTTP".

After this fix it outputs "FieldName": "HTTP.ListenAddr" to indicate
"ListenAddr" needs to be set in the object "HTTP".
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
b1456835d8 serve restic: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
b930c4b437 serve s3: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cebd588092 serve http: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
3c981e6c2c serve webdav: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
6054c4e49d auth proxy: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
028316ba5d auth proxy: add VFS options parameter for use for default VFS
This is for use from the RC API.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
df457f5802 serve: make the servers self registering
This is so that they can import cmd/serve without causing an import
loop.

The active servers can now be configured by commenting lines out in
cmd/all/all.go like all the other commands.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
084e35c49d lib/http: fix race between Serve() and Shutdown()
This was discovered by the race detector.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
90ea4a73ad lib/http: add Addr() method to return the first configured server address 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
efe8ac8f35 Add Danny Garside to contributors 2025-04-09 11:12:06 +01:00
Danny Garside
894ef3b375 docs: fix minor typo in box docs 2025-04-08 20:51:22 +01:00
Nick Craig-Wood
385465bfa9 sync: implement --list-cutoff to allow on disk sorting for reduced memory use
Before this change, rclone had to load an entire directory into RAM in
order to sort it so it could be synced.

With directories with millions of entries, this used too much memory.

This fixes the problem by using an on disk sort when there are more
than --list-cutoff entries in a directory.

Fixes #7974
2025-04-08 18:02:24 +01:00
Nick Craig-Wood
0148bd4668 march: Implement callback based syncing
This changes the syncing method to take callbacks for directory
listings rather than being passed the entire directory listing at
once.

This will enable out of memory syncing.
2025-04-08 18:02:24 +01:00
Nick Craig-Wood
0f7ecf6f06 list: add ListDirSortedFn for callback oriented directory listing
This will be used for the out of memory sync
2025-04-08 15:14:09 +01:00
Nick Craig-Wood
08e81f8420 list: Implement Sorter to sort directory entries
Later this will be extended to do out of memory sorts
2025-04-08 15:14:09 +01:00
Nick Craig-Wood
0ac2d2f50f cache: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
42fcb0a6fc hasher: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
490dd14bc5 compress: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
943ea0acae chunker: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
d64a97f973 union: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
5d8f1d4b88 crypt: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
b1d774c2e3 combine: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
fad579c4a2 s3: Implement paged listing interface ListP 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
37120ef7bd list: add WithListP helper to implement List for ListP backends 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
cba653d502 walk: move NewListRHelper into list.Helper to avoid circular dependency
It turns out that the list helpers were at the wrong level and needed
to be pushed down into the fs/list for future work.
2025-04-08 15:14:00 +01:00
Nick Craig-Wood
2a90de9502 fs: define ListP interface for paged listing #4788 2025-04-08 15:12:53 +01:00
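As an illustration of the shape of the ListP change above, here is a
minimal sketch in Go; the exact type and method names in rclone may
differ, so treat this as an assumption-laden outline rather than the
real interface.

    // ListRCallback receives one page of directory entries at a time.
    type ListRCallback func(entries fs.DirEntries) error

    // ListPer is implemented by backends that can list a directory in
    // pages, so callers never hold the whole listing in memory.
    type ListPer interface {
        ListP(ctx context.Context, dir string, callback ListRCallback) error
    }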
Nick Craig-Wood
bff229713a accounting: Add listed stat for number of directory entries listed 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
117f583ebe walk: factor Listing helpers into their own file and add tests 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
205667143c serve nfs: make metadata files have special file handles
Metadata files have the file handle of their source file with
0x00000001 suffixed in big endian so we can look them up directly from
their file handles.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
fe84cbdc9d serve nfs: change the format of --nfs-cache-type symlink file handles
This is a backwards incompatible change which will invalidate the
current handles.

This change adds a 4 byte big endian length prefix to the handles so
we can in future suffix extra info on the handles. This needed to be 4
bytes as Linux does not like File handles which aren't multiples of 4
bytes long.
2025-04-07 13:41:29 +01:00
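A sketch of the handle framing described in the two serve nfs commits
above, assuming the layout given in the commit text (a 4-byte
big-endian length prefix, with metadata files getting a 0x00000001
suffix); this uses encoding/binary from the standard library.

    // frameHandle prefixes the underlying handle with its length so
    // extra info can be suffixed later; the total stays a multiple of
    // 4 bytes, which Linux requires.
    func frameHandle(h []byte) []byte {
        b := binary.BigEndian.AppendUint32(nil, uint32(len(h)))
        return append(b, h...)
    }

    // metadataHandle marks a handle as belonging to a metadata sidecar
    // file by suffixing 0x00000001 in big endian.
    func metadataHandle(h []byte) []byte {
        return append(append([]byte{}, h...), 0x00, 0x00, 0x00, 0x01)
    }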
Nick Craig-Wood
533c6438f3 vfs: add --vfs-metadata-extension to expose metadata sidecar files
This adds --vfs-metadata-extension which can be used to expose sidecar
files with file metadata in. These files don't exist in the listings
until they are accessed.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b587b094c9 docs: Add rcloneui.com as Silver Sponsor 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
525798e1a5 Add Klaas Freitag to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
ea63052d36 Add eccoisle to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b5a99c5011 Add Fernando Fernández to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
56b7015675 Add alingse to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
4ff970ebab Add Jörn Friedrich Dreyer to contributors 2025-04-07 13:41:29 +01:00
eccoisle
dccb5144c3 docs: replace option --auto-filename-header with --header-filename 2025-04-06 14:28:34 +02:00
dependabot[bot]
33b087171a build: update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

See: https://github.com/golang-jwt/jwt/security/advisories/GHSA-mh63-6h87-95cp
See: https://www.cve.org/CVERecord?id=CVE-2025-30204

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-06 11:46:13 +01:00
Fernando Fernández
58d9ae1c60 docs/googlephotos: fix typos 2025-04-06 10:49:02 +02:00
dependabot[bot]
20302ab6b9 build: bump github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.1 to 4.5.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 17:26:44 +02:00
alingse
6fb0de62a4 operations: fix call fmt.Errorf with wrong err 2025-04-04 16:21:45 +02:00
Jörn Friedrich Dreyer
839eef0db2 webdav: retry propfind on 425 status
This retries propfind on a 425 status.

In ownCloud Infinite Scale, files might be in that state while
postprocessing is still ongoing; all metadata is available anyway.

It also allows item status 425 "too early" for items when changing
metadata.

This fixes the upload behavior with ownCloud Infinite Scale.

Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
Co-authored-by: Klaas Freitag <kraft@freisturz.de>
2025-03-26 12:51:04 +00:00
Nick Craig-Wood
267eebe5c9 Add --max-connections to control maximum backend concurrency 2025-03-25 15:49:27 +00:00
Nick Craig-Wood
755d72a591 rc: fix debug/* commands not being available over unix sockets
This was caused by an incorrect handler URL which was passing the
debug/* commands to the debug/pprof handler by accident. This only
happened when using unix sockets.
2025-03-25 15:30:49 +00:00
Dan McArdle
4d38424e6c cmd/gitannex: Prevent tests from hanging when assertion fails
This fixes another way that the gitannex tests can hang.

The issue is that our test harness explicitly called `wg.Done()` at the
end of each test case, but when assertions checked with [require] fail,
they halt test execution and prevent `wg.Done()` from happening.

A second issue is that we were incorrectly calling [require] functions
in the goroutine that runs the gitannex server. I found that [require]
calls [testing.T.FailNow] under the hood, which says "FailNow must be
called from the goroutine running the test or benchmark function, not
from other goroutines created during the test." [1]

This commit fixes both issues by replacing the explicit synchronization
with a `chan error`. This enables us to run the gitannex server in a
goroutine, interact with the server in the test's goroutine, and then at
then end use [require] on the test-associated goroutine to ensure the
server's error/nil value matches expectations.

[1]: https://pkg.go.dev/testing#T.FailNow
2025-03-18 12:38:04 +00:00
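A minimal sketch of the chan-error pattern described above; the names
here (runServer, errc) are hypothetical, not the actual test harness.

    errc := make(chan error, 1)
    go func() {
        // The server goroutine never calls require directly; it just
        // reports its final error/nil value.
        errc <- runServer()
    }()
    // ... interact with the server from the test goroutine ...
    // Assert back on the test-associated goroutine, as FailNow demands.
    require.NoError(t, <-errc)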
Dan McArdle
53624222c9 cmd/gitannex: Add explicit timeout for mock stdout reads in tests
It seems like (*testState).readLine() hangs indefinitely when it's
waiting for a line that will never be written [1].

This commit adds an explicit 30-second timeout when reading from the
internal mock stdout. Given that we integrate with fstest, this timeout
needs to be sufficiently long that it accommodates slow-but-successful
operations on real remotes.

[1]: https://github.com/rclone/rclone/pull/8423#issuecomment-2701601290
2025-03-18 12:38:04 +00:00
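The standard Go idiom for this kind of bounded read is a select on the
data channel and a timer; a sketch with hypothetical names.

    select {
    case line := <-lineChan:
        return line, nil
    case <-time.After(30 * time.Second):
        return "", errors.New("timed out waiting for mock stdout line")
    }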
nielash
44e83d77d7 http: correct root if definitely pointing to a file - fixes #8428
This was formalized in commit
c69eb84573,
but it appears that we forgot to update `http`, and the `FsRoot` test didn't
catch it because we don't currently have an http integration test.
2025-03-17 18:05:23 +00:00
Nick Craig-Wood
19aa366d88 pool: add --max-buffer-memory to limit total buffer memory usage 2025-03-17 18:01:15 +00:00
Nick Craig-Wood
3fb4164d87 filter: Add --hash-filter to deterministically select a subset of files
Fixes #8400
2025-03-17 17:25:59 +00:00
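The idea behind --hash-filter K/N is to hash each file name and keep
only the files that fall into partition K of N; a sketch using an
arbitrary hash (rclone's actual hash function and inputs may differ).

    // selected reports whether a path falls in partition k of n.
    func selected(relPath string, k, n uint32) bool {
        return crc32.ChecksumIEEE([]byte(relPath))%n == k
    }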
dependabot[bot]
4e2b78f65d build: update golang.org/x/net to 0.36.0. to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This updates golang.org/x/net to fix the problem.

See: https://pkg.go.dev/vuln/GO-2025-3487
See: https://www.cve.org/CVERecord?id=CVE-2025-22869
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-17 17:25:12 +00:00
Nick Craig-Wood
e47f59e1f9 rc: add short parameter to core/stats to not return transferring and checking 2025-03-17 13:44:37 +00:00
Nick Craig-Wood
63c4fef27a fs: fix corruption of SizeSuffix with "B" suffix in config (eg --min-size)
Before this change, the config system round tripped fs.SizeSuffix
values through strings like this, corrupting them in the process.

    "2B" -> 2 -> "2" -> 2048

This caused `--min-size 2B` to be interpreted as `--min-size 2k`.

This fix makes sure SizeSuffix values have a "B" suffix when turned
into a string where necessary, so it becomes

    "2B" -> 2 -> "2B" -> 2

In rclone v2 we should probably declare unsuffixed SizeSuffix values
are in bytes not kBytes (done for rsync compatibility) but this would
be a backwards incompatible change which we don't want for v1.

Fixes #8437
Fixes #8212
Fixes #5169
2025-03-13 09:56:20 +00:00
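An illustration of the round trip described above, assuming the
fs.SizeSuffix flag type; before the fix the suffix was lost at the
String step.

    var s fs.SizeSuffix
    _ = s.Set("2B")    // parses to 2 bytes
    text := s.String() // was "2"; with the fix, "2B"
    _ = s.Set(text)    // "2" re-parses as 2k (2048); "2B" as 2 bytes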
Nick Craig-Wood
a7a7c1d592 filters: show --min-size and --max-size in --dump filters 2025-03-12 12:32:21 +00:00
Nick Craig-Wood
6a7e68aaf2 build: check docs for edits of autogenerated sections
This adds a lint step which checks the top commit for edits to
autogenerated doc sections.
2025-03-10 22:07:19 +00:00
Nick Craig-Wood
6e7a3795f1 Add jack to contributors 2025-03-10 22:07:19 +00:00
jack
177337686a docs: fix incorrect mentions of vfs-cache-min-free-size 2025-03-09 01:23:42 +01:00
Nick Craig-Wood
ccef29bbff fs/object: fix memory object out of bounds Seek 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
64b3d1d539 serve nfs: fix unlikely crash 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
aab6643cea docs: update minimum OS requirements for go1.24 2025-03-05 17:20:10 +00:00
Dan McArdle
2a1e28f5f5 cmd/gitannex: Tweak parsing of "rcloneremotename" config
The "rcloneremotename" (aka "target") config parameter is now permitted
to contain (1) remote names that are defined by environment variables,
but not in an rclone config file, and (2) backend strings such as
":memory:".

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Issue #7984
2025-03-04 16:40:32 +00:00
Dan McArdle
db9205b298 cmd/gitannex: Drop var rebindings now that we have go1.23 2025-03-04 16:40:32 +00:00
Zachary Vorhies
964c6204dd docs: add note for using rclone cat for slicing out a byte range from a file 2025-03-04 16:31:56 +00:00
Jonathan Giannuzzi
65f7eb0fba rcserver: improve content-type check
Some libraries use `application/json; charset=utf-8` as their `Content-Type`, which is valid.
However we were not decoding the JSON body in that case, resulting in issues communicating with the rcserver.
2025-03-04 16:28:34 +00:00
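Go's standard library handles content types with parameters; a sketch
of the relevant check using mime.ParseMediaType, which strips the
charset parameter before comparison.

    mediaType, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
    if err == nil && mediaType == "application/json" {
        // safe to decode the JSON body, whatever the charset parameter
    }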
Nick Craig-Wood
401cf81034 build: modernize Go usage
This commit modernizes Go usage. This was done with:

go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...

Then files needed to be `go fmt`ed and a few comments needed to be
restored.

The modernizations include replacing

- if/else conditional assignment by a call to the built-in min or max functions added in go1.21
- sort.Slice(s, func(i, j int) bool { return s[i] < s[j] }) by a call to slices.Sort(s), added in go1.21
- interface{} by the 'any' type added in go1.18
- append([]T(nil), s...) by slices.Clone(s) or slices.Concat(s), added in go1.21
- loop around an m[k]=v map update by a call to one of the Collect, Copy, Clone, or Insert functions from the maps package, added in go1.21
- []byte(fmt.Sprintf...) by fmt.Appendf(nil, ...), added in go1.19
- append(s[:i], s[i+1:]...) by slices.Delete(s, i, i+1), added in go1.21
- a 3-clause for i := 0; i < n; i++ {} loop by for i := range n {}, added in go1.22
2025-02-28 11:31:14 +00:00
Nick Craig-Wood
431386085f build: update all dependencies and fix deprecations 2025-02-26 18:00:58 +00:00
Nick Craig-Wood
bf150a5b7d build: update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This affects users of `rclone serve sftp`.

See: https://pkg.go.dev/vuln/GO-2025-3487
2025-02-26 18:00:58 +00:00
Nick Craig-Wood
ddecfe6e77 build: make go1.23 the minimum go version
This is necessary now that golang.org/x/crypto is only allowing the
last two versions of Go.

See: https://go.googlesource.com/crypto/+/89ff08d67c4d79f9ac619aaf1f7388888798651f
2025-02-26 18:00:58 +00:00
Dan McArdle
68e40dc141 cmd/gitannex: Add to integration tests
This commit registers gitannex's unit tests with the integration tester
by updating the config.yaml file.

Since we have not yet updated the e2e tests to use the fstest framework,
this commit also adds a case to the e2e tests' skipE2eTestIfNecessary()
function.

Issue #7984
2025-02-26 17:18:02 +00:00
Dan McArdle
325f400a88 cmd/gitannex: Simplify verbose failures in tests 2025-02-26 17:18:02 +00:00
Dan McArdle
be33e281b3 cmd/gitannex: Port unit tests to fstest
This enables the unit tests to run on any given backend, via the
`-remote` flag, e.g. `go test -v ./cmd/gitannex/... -remote dropbox:`.

We should also port the gitannex e2e tests at some point.

Issue: #7984
2025-02-26 17:18:02 +00:00
Nick Craig-Wood
0010090d05 vfs: fix integration test failures
In this commit

ceef78ce44 vfs: fix directory cache serving stale data

We added a new test which caused lots of integration test failures.

This fixes the problem by disabling the test unless the feature flag
DirModTimeUpdatesOnWrite is present on the remote.
2025-02-26 12:21:35 +00:00
Nick Craig-Wood
b7f26937f1 azureblob: fix errors not being retried when doing single part copy
Sometimes the Azure blob servers reply with 503 ServerBusy errors and
these should be retried.

Before this change, when testing to see if a single part server side
copy was done, ServerBusy errors were returned to the user rather than
being retried.

Wrapping the call in the pacer fixes the problem and ensures it is
retried properly using the --low-level-retries mechanism.
2025-02-25 10:22:49 +00:00
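A sketch of what wrapping the call in the pacer looks like; the helper
names here are assumptions, but the Call signature follows rclone's
lib/pacer convention of returning (retry, error) from the closure.

    err = f.pacer.Call(func() (bool, error) {
        _, err := checkCopyStatus(ctx)  // hypothetical helper
        return f.shouldRetry(ctx, err)  // true means retry (eg 503 ServerBusy)
    })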
Nick Craig-Wood
5037d7368d azureblob: handle retry error codes more carefully 2025-02-24 10:58:26 +00:00
Nick Craig-Wood
0ccf65017f touch: make touch obey --transfers
Before this change, when executed on a directory, rclone would only
touch files sequentially.

This change makes rclone touch up to --transfers files at once.

Fixes #8402
2025-02-21 15:53:47 +00:00
Nick Craig-Wood
85d467e16a Add luzpaz to contributors 2025-02-21 15:53:44 +00:00
Nick Craig-Wood
cf4b55d965 Add Dave Vasilevsky to contributors 2025-02-21 15:53:10 +00:00
luzpaz
e0d477804b docs: fix various typos
Found via `codespell -q 3 -S "./docs/static,./fs/rc/params_test.go" -L aadd,afile,alledges,bbefore,bu,buda,copys,couldn,crashers,crypted,ddelete,deriver,failre,goup,hashin,hel,inbraces,keep-alives,ket,medias,ment,mis,nd,nin,notin,ois,ot,parth,re-use,re-using,responser,rin,sav,splited,streamin,synching,te,twoo,ue,unknwon,wasn`
2025-02-19 20:30:44 +00:00
Dave Vasilevsky
4fc9583feb dropbox: Retry link without expiry
Dropbox only allows public links with expiry for certain account types.
Rather than erroring for other accounts, retry without expiry.
2025-02-17 20:39:14 +00:00
Dave Vasilevsky
904c9b2e24 Dropbox: Support Dropbox Paper
These files must be "exported" to be useful. The export process
is controlled by the --dropbox-export-formats flag and the ancillary flags
--dropbox-skip-exports and --dropbox-show-all-exports modeled on the
Google drive equivalents
2025-02-17 18:20:37 +00:00
emyarod
cdfd748241 chore: update contributor email 2025-02-16 07:52:20 +00:00
Nick Craig-Wood
661027f2cf docs: correct stable release workflow 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
7ecd1638eb Add Lorenz Brun to contributors 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
06b92ddeb3 Add Michael Kebe to contributors 2025-02-15 20:08:51 +00:00
Lorenz Brun
ceef78ce44 vfs: fix directory cache serving stale data
The VFS directory cache layer didn't update directory entry properties
if they were reused after cache invalidation.

Update them unconditionally as newDir sets them to the same value and
setting a pointer is cheaper in both LoC as well as CPU cycles than a
branch.

Also add a test exercising this behavior.

Fixes #6335
2025-02-15 15:22:16 +00:00
Anagh Kumar Baranwal
6560ea9bdc build: fix docker plugin build - fixes #8394
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-15 14:23:43 +00:00
Michael Kebe
cda82f3d30 docs: improved sftp limitations
Added a link to `--sftp-path-override` for a better solution with working hash calculation.
2025-02-15 11:11:26 +01:00
Nick Craig-Wood
7da2d8b507 Changelog updates from Version v1.69.1 2025-02-14 17:15:50 +00:00
Nick Craig-Wood
fb7919928c docs: add FileLu as sponsors and tidy sponsor logos 2025-02-14 17:15:50 +00:00
Anagh Kumar Baranwal
5d670fc54a accounting: fix percentDiff calculation -- fixes #8345
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-14 14:58:12 +00:00
Nick Craig-Wood
b5e72e2fc3 vfs: fix the cache failing to upload symlinks when --links was specified
Before this change, if --vfs-cache-mode writes or above was set and
--links was in use, when a symlink was saved then the VFS failed to
upload it. This meant when the VFS was restarted the link wasn't there
any more.

This was caused by the local backend, which we use to manage the VFS
cache, picking up the global --links flag.

This patch makes sure that the internal instantations of the local
backend in the VFS cache don't ever use the --links flag or the
--local-links flag even if specified on the command line.

Fixes #8367
2025-02-13 13:30:52 +00:00
Nick Craig-Wood
8997993a30 Add jbagwell-akamai to contributors 2025-02-13 13:30:52 +00:00
Nick Craig-Wood
b721f363e5 Add ll3006 to contributors 2025-02-13 13:30:40 +00:00
Zachary Vorhies
d93dad22fe doc: add note on concurrency of rclone purge 2025-02-13 11:41:37 +00:00
jbagwell-akamai
e27bf8b738 s3: add latest Linode Object Storage endpoints
Added missing Linode Object Storage endpoints AMS, MAA, CGK, LON, LAX, MAD, MEL, MIA, OSA, GRU, SIN
2025-02-13 09:36:22 +00:00
Janne Hellsten
539e96cc1f cmd: fix crash if rclone is invoked without any arguments - Fixes #8378 2025-02-12 21:31:05 +00:00
Anagh Kumar Baranwal
5086aad0b2 build: disable docker builds on PRs & add missing dockerfile changes
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-12 21:29:01 +00:00
ll3006
c1b414e2cf sync: copy dir modtimes even when copyEmptySrcDirs is false - fixes #8317
Before, after a sync, only file modtimes were updated when not using
--copy-empty-src-dirs. This ensures modtimes are updated to match the source
folder, regardless of copyEmptySrcDirs. The flag --no-update-dir-modtime
(which previously did nothing) will disable this.
2025-02-12 21:27:15 +00:00
ll3006
2ff8aa1c20 sync: add tests to check dir modtimes are kept when syncing
This adds tests to check dir modtimes are updated from source
when syncing even if they've changed in the destination.
This should work both with and without --copy-empty-src-dirs.
2025-02-12 21:27:15 +00:00
nielash
6d2a72367a fix golangci-lint errors 2025-02-12 21:24:55 +00:00
nielash
9df751d4ec bisync: fix false positive on integration tests
Commit 5f70918e2c
introduced a new INFO log when making a directory, which differs depending on
whether the backend supports setting directory metadata. This caused false
positives on the bisync createemptysrcdirs test.

This fixes it by ignoring that log line.
2025-02-12 21:24:55 +00:00
Nick Craig-Wood
e175c863aa s3: split the GCS quirks into -s3-use-x-id and -s3-sign-accept-encoding #8373
Before this we applied both these quirks if provider == "GCS".

Splitting them like this makes them applicable for other providers
such as ActiveScale.
2025-02-12 21:08:16 +00:00
Nick Craig-Wood
64cd8ae0f0 Add Joel K Biju to contributors 2025-02-12 21:08:11 +00:00
Anagh Kumar Baranwal
46b498b86a stats: fix the speed not getting updated after a pause in the processing
This shifts the behavior of the average loop to be a persistent loop
that gets resumed/paused when transfers & checks are started/completed.

Previously, the averageLoop was stopped on completion of
transfers & checks but failed to start again due to the protection of
the sync.Once.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 12:52:55 +00:00
Joel K Biju
b76cd74087 opendrive: added --opendrive-access flag to handle permissions 2025-02-11 12:43:10 +00:00
nielash
3b49fd24d4 bisync: fix listings missing concurrent modifications - fixes #8359
Before this change, there was a bug affecting listing files when:

- a given bisync run had changes in the 2to1 direction
AND
- the run had NO changes in the 1to2 direction
AND
- at least one of the changed files changed AGAIN during the run
(specifically, after the initial march and before the transfers.)

In this situation, the listings on one side would still retain the prior version
of the changed file, potentially causing conflicts or errors.

This change fixes the issue by making sure that if we're updating the listings
on one side, we must also update the other. (We previously tried to skip it for
efficiency, but this failed to account for the possibility that a changed file
could change again during the run.)
2025-02-11 11:21:02 +00:00
Anagh Kumar Baranwal
c0515a51a5 Added parallel docker builds and caching for go build in the container
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 10:17:50 +00:00
Jonathan Giannuzzi
dc9c87279b smb: improve connection pooling efficiency
* Lower pacer minSleep to establish new connections faster
* Use Echo requests to check whether connections are working (required an upgrade of go-smb2)
* Only remount shares when needed
* Use context for connection establishment
* When returning a connection to the pool, only check the ones that encountered errors
* Close connections in parallel
2025-02-04 12:35:19 +00:00
Nick Craig-Wood
057fdb3a9d lib/oauthutil: fix redirect URL mismatch errors - fixes #8351
In this commit we introduced support for client credentials flow:

65012beea4 lib/oauthutil: add support for OAuth client credential flow

This involved re-organising the oauth credentials.

Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.

This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.

These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.

- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
2025-02-03 12:15:54 +00:00
Nick Craig-Wood
3daf62cf3d b2: fix "fatal error: concurrent map writes" - fixes #8355
This was caused by the embryonic metadata support. Since this isn't
actually visible externally, this patch removes it for the time being.
2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ef495fa76 Add Alexander Minbaev to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
722c567504 Add Zachary Vorhies to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ebe1c0f81 Add Jess to contributors 2025-02-03 11:33:21 +00:00
Alexander Minbaev
2dc06b2548 s3: add IBM IAM signer - fixes #7617 2025-02-03 11:29:31 +00:00
Zachary Vorhies
b52aabd8fe serve nfs: update docs to note Windows is not supported - fixes #8352 2025-02-01 12:20:09 +00:00
Jess
6494ac037f cmd/config(update remote): introduce --no-output option
Fixes #8190
2025-01-24 17:22:58 +00:00
jkpe
5c3a1bbf30 s3: add DigitalOcean regions SFO2, LON1, TOR1, BLR1 2025-01-24 10:41:48 +00:00
Nick Craig-Wood
c837664653 sync: fix cpu spinning when empty directory finding with leading slashes
Before this change the logic which makes sure we create all
directories could get confused with directories which started with
slashes and get into an infinite loop consuming 100% of the CPU.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
77429b154e s3: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
39b8f17ebb azureblob: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
81ecfb0f64 fstest: add integration tests objects with // on bucket based backends #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
656e789c5b fs/list: tweak directory listing assertions after allowing // names 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
fe19184084 lib/bucket: fix tidying of // in object keys #5858
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This means we can't access objects with // in them
which is valid for object keys (but not for file system paths).

This could have consequences for users who are relying on rclone to
fix improper paths for them.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
b4990cd858 lib/bucket: add IsAllSlashes function 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
8e955c6b13 azureblob: remove uncommitted blocks on InvalidBlobOrBlock error
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different length of ID.

This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
3a5ddfcd3c azureblob: implement multipart server side copy
This implements multipart server side copy to improve copying from one
azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with --azureblob-upload-concurrency 500).

- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using put block list API
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`

Fixes #8249
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
ac3f7a87c3 azureblob: speed up server side copies for small files #8249
This speeds up server side copies for small files, which need to check
the copy status, by using an exponential ramp up of the time between
checks of the copy status endpoint.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
4e9b63e141 azureblob: cleanup uncommitted blocks on upload errors
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:

BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.

This change removes the uncommitted blocks if a multipart upload is
aborted or fails.

If there was an existing destination file, it takes care not to
overwrite it by recommitting already committed blocks.

This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.

Fixes #5583
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
7fd7fe3c82 azureblob: factor readMetaData into readMetaDataAlways returning blob properties 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
9dff45563d Add b-wimmer to contributors 2025-01-22 11:56:05 +00:00
b-wimmer
83cf8fb821 azurefiles: add --azurefiles-use-az and --azurefiles-disable-instance-discovery
Adds additional authentication options from azureblob to azurefiles as well

See rclone#8078
2025-01-22 11:11:18 +00:00
Nick Craig-Wood
32e79a5c5c onedrive: mark German (de) region as deprecated
See: https://learn.microsoft.com/en-us/previous-versions/azure/germany/
2025-01-22 11:00:37 +00:00
Nick Craig-Wood
fc44a8114e Add Trevor Starick to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
657172ef77 Add hiddenmarten to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
71eb4199c3 Add Corentin Barreau to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
ac3c21368d Add Bruno Fernandes to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
db71b2bd5f Add Moises Lima to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
8cfe42d09f Add izouxv to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
e673a28a72 Add Robin Schneider to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
59889ce46b Add Tim White to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
62e8a01e7e Add Christoph Berger to contributors 2025-01-22 11:00:37 +00:00
Trevor Starick
87eaf37629 azureblob: add support for x-ms-tags header 2025-01-17 19:37:56 +00:00
hiddenmarten
7c7606a6cf rc: disable the metrics server when running rclone rc
Fixes #8248
2025-01-17 17:46:22 +00:00
Corentin Barreau
dbb21165d4 internetarchive: add --internetarchive-metadata="key=value" for setting item metadata
Added the ability to include an item's metadata on uploads via the
Internet Archive backend using the `--internetarchive-metadata="key=value"`
argument. This is hidden from the configurator as it should only
really be used on the command line.

Before this change, metadata had to be manually added after uploads.
With this new feature, users can specify metadata directly during the
upload process.
2025-01-17 16:00:34 +00:00
Dan McArdle
375953cba3 lib/batcher: Deprecate unused option: batch_commit_timeout 2025-01-17 15:56:09 +00:00
Bruno Fernandes
af5385b344 s3: Added new storage class to magalu provider 2025-01-17 15:54:34 +00:00
Moises Lima
347be176af http servers: add --user-from-header to use for authentication
Retrieve the username from a specified HTTP header if no
other authentication methods are configured
(ideal for proxied setups)
2025-01-17 15:53:23 +00:00
Pat Patterson
bf5a4774c6 b2: add SkipDestructive handling to backend commands - fixes #8194 2025-01-17 15:47:01 +00:00
izouxv
0275d3edf2 vfs: close the change notify channel on Shutdown 2025-01-17 15:38:09 +00:00
Robin Schneider
be53ae98f8 Docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates 2025-01-17 15:29:36 +00:00
Tim White
0d9fe51632 docs: add OneDrive Impersonate instructions - fixes #5610 2025-01-17 14:30:51 +00:00
Christoph Berger
03bd795221 docs: explain the stringArray flag parameter descriptor 2025-01-17 09:50:22 +01:00
Nick Craig-Wood
5a4026ccb4 iclouddrive: add notes on ADP and Missing PCS cookies - fixes #8310 2025-01-16 10:14:52 +00:00
Dimitri Papadopoulos
b1d4de69c2 docs: fix typos found by codespell in docs and code comments 2025-01-16 10:39:01 +01:00
Nick Craig-Wood
5316acd046 fs: fix confusing "didn't find section in config file" error
This change decorates the error with the section name not found which
will hopefully save user confusion.

Fixes #8170
2025-01-15 16:32:59 +00:00
Nick Craig-Wood
2c72842c10 vfs: fix race detected by race detector
This race would only happen when --dir-cache-time was very small.

This was noticed in the VFS tests when --dir-cache-time was 100 mS so
is unlikely to affect normal users.
2025-01-14 20:46:27 +00:00
Nick Craig-Wood
4a81f12c26 Add Jonathan Giannuzzi to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
aabda1cda2 Add Spencer McCullough to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
572fe20f8e Add Matt Ickstadt to contributors 2025-01-14 20:46:27 +00:00
Jonathan Giannuzzi
2fd4c45b34 smb: add support for kerberos authentication
Fixes #7800
2025-01-14 19:24:31 +00:00
Spencer McCullough
ec5489e23f drive: added backend moveid command 2025-01-14 19:21:13 +00:00
Matt Ickstadt
6898375a2d docs: fix reference to serves3 setting disable_multipart_uploads which was renamed 2025-01-14 18:51:19 +01:00
Matt Ickstadt
d413443a6a docs: fix link to Rclone Serve S3 2025-01-14 18:51:19 +01:00
Nick Craig-Wood
5039747f26 serve s3: fix list objects encoding-type
Before this change rclone would always use encoding-type url even if
the client hadn't asked for it.

This confused some clients.

This fixes the problem by leaving the URL encoding to the gofakes3
library which has also been fixed.

Fixes #7836
2025-01-14 16:08:18 +00:00
Nick Craig-Wood
11ba4ac539 build: update gopkg.in/yaml.v2 to v3 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
b4ed7fb7d7 build: update all dependencies 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
719473565e bisync: fix go vet problems with go1.24 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
bd7278d7e9 build: update to go1.24rc1 and make go1.22 the minimum required version 2025-01-14 12:13:14 +00:00
Nick Craig-Wood
45ba81c726 version: add --deps flag to show dependencies and other build info 2025-01-14 12:08:49 +00:00
Nick Craig-Wood
530658e0cc doc: make man page well formed for whatis - fixes #7430 2025-01-13 18:35:27 +00:00
Nick Craig-Wood
b742705d0c Start v1.70.0-DEV development 2025-01-12 16:31:12 +00:00
Nick Craig-Wood
cd3b08d8cf Version v1.69.0 2025-01-12 15:09:13 +00:00
Nick Craig-Wood
009660a489 test_all: disable docker plugin tests
These are not completing on the integration test server. This needs
investigating, but we need the integration tests to run properly.
2025-01-12 14:02:57 +00:00
albertony
4b6c7c6d84 docs: fix typo 2025-01-12 13:49:47 +01:00
Nick Craig-Wood
a7db375f5d accounting: fix race stopping/starting the stats counter
This was picked up by the race detector in the CI.
2025-01-11 20:25:34 +00:00
Nick Craig-Wood
101dcfe157 docs: add github.com/icholy/gomajor to RELEASE for updating major versions 2025-01-11 20:25:34 +00:00
Francesco Frassinelli
aec87b74d3 ftp: fix ls commands returning empty on "Microsoft FTP Service" servers
The problem was in the upstream library jlaffaye/ftp and this updates it.

Fixes #8224
2025-01-11 20:02:16 +00:00
Nick Craig-Wood
91c8f92ccb s3: add docs on data integrity
See: https://forum.rclone.org/t/help-me-figure-out-how-to-verify-backup-accuracy-and-completeness-on-s3/37632/5
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
965bf19065 webdav: make --webdav-auth-redirect to fix 401 unauthorized on redirect
Before this change, if the server returned a 302 redirect message when
opening a file rclone would do the redirect but drop the
Authorization: header. This is a sensible thing to do for security
reasons but breaks some setups.

This patch adds the --webdav-auth-redirect flag which makes it
preserve the auth just for this kind of request.

See: https://forum.rclone.org/t/webdav-401-unauthorized-when-server-redirects-to-another-domain/39292
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
15ef3b90fa rest: make auth preserving redirects an option 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
f6efaf2a63 box: fix panic when decoding corrupted PEM from JWT file
See: https://forum.rclone.org/t/box-jwt-config-erroring-panic/40685/
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
0e7c495395 size: make output compatible with -P
Before this change the output of `rclone size -P` would get corrupted
by the progress printing.

This is fixed by using operations.SyncPrintf instead of fmt.Printf.

Fixes #7912
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
ff0ded8f11 vfs: add remote name to vfs cache log messages - fixes #7952 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
110bf468a4 dropbox: fix return status when full to be fatal error
This will stop the sync, but won't stop a mount.

Fixes #7334
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
d4e86f4d8b rc: add relative to vfs/queue-set-expiry 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
6091a0362b vfs: fix open files disappearing from directory listings
In this commit

aaadb48d48 vfs: keep virtual directory status accurate and reduce deadlock potential

We reworked the virtual directory detection to use an atomic bool so
that we could run part of the cache forgetting only with a read lock.

Unfortunately this had a bug which meant that directories with virtual
items could be forgotten.

This commit changes the boolean into a count of virtual entries which
should be more accurate.

Fixes #8082
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
33d2747829 docker serve: parse all remaining mount and VFS options
Before this change, this code implemented an ad-hoc parser for a
subset of vfs and mount options.

After the config re-organization it can use the same parsing code as
the rest of rclone which simplifies the code and exposes all the VFS
and mount options.
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
c9e5f45d73 smb: fix panic if stat fails
Before this fix the smb backend could panic if a stat call failed.

This fix makes it return an error instead.

It should have the side effect that we do one less stat call on upload
too.

Fixes #8106
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
2f66537514 googlephotos: fix nil pointer crash on upload - fixes #8233 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
a491312c7d iclouddrive: tweak docs 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
45b7690867 serve dlna: sort the directory entries by directories first then alphabetically by name
Some media boxes don't sort the items returned from the DLNA server,
so sort them here, directories first then alphabetically by name.

See: https://forum.rclone.org/t/serve-dlna-files-order-directories-first/47790
2025-01-11 17:11:40 +00:00
Nick Craig-Wood
30ef1ddb23 serve nfs: fix missing inode numbers which was messing up ls -laR
In 6ba3e24853

    serve nfs: fix incorrect user id and group id exported to NFS #7973

We updated the stat function to output uid and gid. However this set
the inode numbers of everything to -1. This causes a problem with
doing `ls -laR` giving "not listing already-listed directory" as it
uses inode numbers to see if it has listed a directory or not.

This patch reads the inode number from the vfs.Node and sets it in the
Stat output.
2025-01-09 18:55:18 +00:00
Nick Craig-Wood
424d8e3123 serve nfs: implement --nfs-cache-type symlink
`--nfs-cache-type symlink` is similar to `--nfs-cache-type disk` in
that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance.
2025-01-09 18:55:18 +00:00
Nick Craig-Wood
04dfa6d923 azureblob,oracleobjectstorage,s3: quit multipart uploads if the context is cancelled
Before this change the multipart uploads would continue retrying even
if the context was cancelled.
2025-01-09 18:55:18 +00:00
Oleg Kunitsyn
fdff1a54ee http: fix incorrect URLs with initial slash
* http: trim the initial slash when building the URL
* Add a test for http object with leading slash

Fixes #8261
2025-01-09 17:40:00 +00:00
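A sketch of the idea behind the fix, not the exact code: strip any
leading slash from the remote path before joining it onto the
endpoint, so the resulting URL has no doubled slash.

    // buildURL joins the endpoint and remote path with exactly one "/".
    func buildURL(endpoint, remote string) string {
        return strings.TrimRight(endpoint, "/") + "/" + strings.TrimPrefix(remote, "/")
    }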
Eng Zer Jun
42240f4b5d build: update github.com/shirou/gopsutil to v4
v4 is the latest version with bug fixes and enhancements. While there
are 4 breaking changes in v4, they do not affect us because we do not
use the impacted functions.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-01-09 17:32:09 +00:00
albertony
7692ef289f Replace Windows-specific NewLazyDLL with NewLazySystemDLL
This will only search the Windows System directory for the DLL if the
name is a base name (like "advapi32.dll"), which prevents DLL
preloading attacks.

To get access to NewLazySystemDLL, imports of syscall need to be
swapped with golang.org/x/sys/windows.
2025-01-08 17:35:00 +01:00
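The safe pattern looks like this; with a base name,
windows.NewLazySystemDLL only searches the Windows System directory.

    import "golang.org/x/sys/windows"

    var (
        advapi32       = windows.NewLazySystemDLL("advapi32.dll")
        procRegOpenKey = advapi32.NewProc("RegOpenKeyExW")
    )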
Nick Craig-Wood
bfb7b88371 lib/oauthutil: don't require token to exist for client credentials flow
Before this change when setting up client credentials flow manually,
rclone would fail with this error message on first run despite the
fact that no existing token is needed.

    empty token found - please run "rclone config reconnect remote:"

This fixes the problem by ignoring token loading problems for client
credentials flow.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
5f70918e2c fs/operations: make log messages consistent for mkdir/rmdir at INFO level
Before this change, creating a new directory would write a DEBUG log
but removing it would write an INFO log.

This change makes both write an INFO log for consistency.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
abf11271fe Add Francesco Frassinelli to contributors 2025-01-08 12:38:24 +00:00
Francesco Frassinelli
a36e89bb61 smb: Add support for Kerberos authentication.
This updates go-smb2 to a version which supports kerberos.

Fixes #7600
2025-01-08 11:25:23 +00:00
Francesco Frassinelli
35614acf59 docs: smb: link to CloudSoda/go-smb2 fork 2025-01-08 11:18:55 +00:00
yuval-cloudinary
7e4b8e33f5 cloudinary: add cloudinary backend - fixes #7989 2025-01-06 10:54:03 +00:00
yuval-cloudinary
5151a663f0 operations: fix eventual consistency in TestParseSumFile test 2025-01-06 10:54:03 +00:00
Nick Craig-Wood
b85a1b684b Add TAKEI Yuya to contributors 2025-01-06 10:34:03 +00:00
Nick Craig-Wood
8fa8f146fa docs: Remove Backblaze as a Platinum sponsor 2025-01-06 10:33:57 +00:00
Nick Craig-Wood
6cad0a013e docs: add RcloneView as silver sponsor 2025-01-06 10:33:57 +00:00
TAKEI Yuya
aa743cbc60 serve docker: fix incorrect GID assignment 2025-01-05 21:42:58 +01:00
Nick Craig-Wood
a389a2979b serve s3: fix Last-Modified timestamp
This had two problems

1. It was using a single digit for day of month
2. It is supposed to be in UTC

Fixes #8277
2024-12-30 14:16:53 +00:00
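Go's net/http package provides the correct layout as http.TimeFormat
(RFC 1123 with a zero-padded day of month and a GMT zone), so both
problems are fixed by converting to UTC and formatting with it.

    w.Header().Set("Last-Modified", modTime.UTC().Format(http.TimeFormat))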
Nick Craig-Wood
d6f0d1d349 Add ToM to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
4ed6960d95 Add Henry Lee to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
731af0c0ab Add Louis Laureys to contributors 2024-12-30 14:16:53 +00:00
ToM
5499fd3b59 docs: filtering: mention feeding --files-from from standard input 2024-12-28 16:44:44 +01:00
ToM
e0e697ca11 docs: filtering: fix --include-from copypaste error 2024-12-27 11:36:41 +00:00
Henry Lee
05f000b076 s3: rename glacier storage class to flexible retrieval 2024-12-27 11:32:43 +00:00
Louis Laureys
a34c839514 b2: add daysFromStartingToCancelingUnfinishedLargeFiles to backend lifecycle command
See: https://www.backblaze.com/blog/effortlessly-managing-unfinished-large-file-uploads-with-b2-cloud-storage/
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
2024-12-22 10:16:31 +00:00
Nick Craig-Wood
6a217c7dc1 build: update golang.org/x/net to v0.33.0 to fix CVE-2024-45338
An attacker can craft an input to the Parse functions that would be
processed non-linearly with respect to its length, resulting in
extremely slow parsing. This could cause a denial of service.

This only affects users running rclone servers exposed to untrusted
networks.

See: https://pkg.go.dev/vuln/GO-2024-3333
See: https://github.com/advisories/GHSA-w32m-9786-jp63
2024-12-21 18:43:26 +00:00
Nick Craig-Wood
e1748a3183 azurefiles: fix missing x-ms-file-request-intent header
According to the SDK docs

> FileRequestIntent is required when using TokenCredential for
> authentication. Acceptable value is backup.

This sets the correct option in the SDK. It does it for all types of
authentication but the SDK seems clever enough not to supply it when
it isn't needed.

This fixes the error

> MissingRequiredHeader An HTTP header that's mandatory for this
> request is not specified. x-ms-file-request-intent

Fixes #8241
2024-12-19 17:01:34 +00:00
Nick Craig-Wood
bc08e05a00 Add Thomas ten Cate to contributors 2024-12-19 17:01:34 +00:00
Thomas ten Cate
9218b69afe docs: Document --url and --unix-socket on the rc page
This page started talking about what commands you can send, without
explaining how to actually send them.

Fixes #8252.
2024-12-19 15:50:43 +00:00
Nick Craig-Wood
0ce2e12d9f docs: link to the outstanding vfs symlinks issue 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
7224b76801 Add Yxxx to contributors 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
d2398ccb59 Add hayden.pan to contributors 2024-12-16 11:01:03 +00:00
Yxxx
0988fd9e9f docs: update pcloud doc to avoid puzzling token error when use remote rclone authorize 2024-12-16 10:29:24 +00:00
wiserain
51cde23e82 pikpak: add option to use original file links - fixes #8246 2024-12-16 01:17:58 +09:00
hayden.pan
caac95ff54 rc/job: use mutex for adding listeners thread safety
This fixes a bug where, in extreme cases, a listener added by calling
OnFinish() while the job was executing finish() would never be run.

This change should not cause compatibility issues, as consumers should
not make assumptions about whether listeners will be run in a new
goroutine.
2024-12-15 13:05:29 +00:00
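A sketch of the locking described above; the field names are
hypothetical, not rclone's actual job implementation.

    func (j *Job) OnFinish(fn func()) {
        j.mu.Lock()
        defer j.mu.Unlock()
        if j.finished {
            go fn() // finish() already ran: invoke the listener anyway
            return
        }
        j.listeners = append(j.listeners, fn)
    }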
albertony
19f4580aca docs: mention in serve tls options when value is path to file - fixes #8232 2024-12-14 11:48:38 +00:00
Nick Craig-Wood
27f448d14d build: update all dependencies 2024-12-13 16:07:45 +00:00
Nick Craig-Wood
500698c5be accounting: fix debug printing when debug wasn't set 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
91af6da068 Add Filipe Azevedo to contributors 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
b8835fe7b4 fs: make --links flag global and add new --local-links and --vfs-links flag
Before this change the --links flag when using the VFS override the
--links flag for the local backend which meant the local backend
needed explicit config to use links.

This fixes the problem by making the --links flag global and adding a
new --local-links flag and --vfs-links flags to control the features
individually if required.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
48d9e88e8f vfs: add docs for -l/--links flag 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
4e7ee9310e nfsmount,serve nfs: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
d629102fa6 mount2: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
db1ed69693 mount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
06657c49a0 cmount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
f1d2f2b2c8 vfstest: make VFS test suite support symlinks 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
a5abe4b8b3 vfs: add symlink support to VFS
This is somewhat limited in that it only resolves symlinks when files
are opened. This will work fine for the intended use in rclone mount,
but is probably inadequate for the other servers.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
c0339327be vfs: add ELOOP error 2024-12-13 12:43:20 +00:00
Filipe Azevedo
353bc3130e vfs: Add link permissions 2024-12-13 12:43:20 +00:00
Filipe Azevedo
126f00882b vfs: Add VFS --links command line switch
This will be used to enable links support for the various mount engines
in a follow up commit.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
44c3f5e1e8 vfs: add vfs.WriteFile to match os.WriteFile 2024-12-13 12:43:20 +00:00
Filipe Azevedo
c47c94e485 fs: Move link suffix to fs 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
1f328fbcfd cmount: fix problems noticed by linter 2024-12-13 12:43:20 +00:00
Filipe Azevedo
7f1240516e mount2: Fix missing . and .. entries 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
f9946b37f9 sftp: fix nil check when using auth proxy
An incorrect nil check was spotted while reviewing the code for
CVE-2024-45337.

The nil check failing has never happened as far as we know. The
consequences would be a nil pointer exception.
2024-12-13 12:36:15 +00:00
Nick Craig-Wood
96fe25cf0a Add Martin Hassack to contributors 2024-12-13 12:36:15 +00:00
dependabot[bot]
a176d4cbda serve sftp: resolve CVE-2024-45337
This commit resolves CVE-2024-45337 which is a potential auth
bypass for `rclone serve sftp`.

https://nvd.nist.gov/vuln/detail/CVE-2024-45337

However after review of the code, rclone is **not** affected as it
handles the authentication correctly. Rclone already uses the
Extensions field of the Permissions return value from the various
authentication callbacks to record data associated with the
authentication attempt as suggested in the vulnerability report.

This commit includes the recommended update to golang.org/x/crypto
anyway so that this is visible in the changelog.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-13 12:28:08 +00:00
Tony Metzidis
e704e33045 googlecloudstorage: typo fix in docs 2024-12-13 11:49:21 +00:00
Martin Hassack
2f3e90f671 onedrive: add support for OAuth client credential flow - fixes #6197
This adds support for the client credential flow oauth method which
requires some special handling in onedrive:

- Special scopes are required
- The tenant is required
- The tenant needs to be used in the oauth URLs

This also:

- refactors the oauth config creation so it isn't duplicated
- defaults the drive_id to the previous one in the config
- updates the documentation

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Martin Hassack
65012beea4 lib/oauthutil: add support for OAuth client credential flow
This commit reorganises the oauth code to use our own config struct
which has all the info for the normal oauth method and also the client
credentials flow method.

It updates all backends which use lib/oauthutil to use the new config
struct which shouldn't change any functionality.

It also adds code for dealing with the client credential flow config
which doesn't require the use of a browser and doesn't have or need a
refresh token.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
704217b698 lib/oauthutil: return error messages from the oauth process better 2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6ade1055d5 bin/test_backend_sizes.py fix compile flags and s3 reporting
This now compiles rclone with CGO_ENABLED=0 which is closer to the
release compile.

It also removes pikpak if testing s3 as the two depend on each
other.
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6a983d601c test makefiles: add --flat flag for making directories with many entries 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
eaafae95fa Add divinity76 to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
5ca1436c24 Add Ilias Ozgur Can Leonard to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
c46e93cc42 Add remygrandin to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
66943d3d79 Add Michael R. Davis to contributors 2024-12-11 18:21:42 +00:00
divinity76
a78bc093de cmd/mountlib: better snap mount error message
Mounting will always fail when rclone is installed from the snap
package manager, but the error message generated when trying to mount
from a snap install was not very good. This improves the error message.

Fixes #8208
2024-12-06 08:14:09 +00:00
Ilias Ozgur Can Leonard
2446c4928d vfs: with --vfs-used-is-size value is calculated and then thrown away - fixes #8220 2024-12-04 22:57:41 +00:00
albertony
e11e679e90 serve sftp: fix loading of authorized keys file with comment on last line - fixes #8227 2024-12-04 13:42:10 +01:00
Manoj Ghosh
ba8e538173 oracleobjectstorage: make specifying compartmentid optional 2024-12-03 17:54:00 +00:00
Georg Welzel
40111ba5e1 pcloud: fix failing large file uploads - fixes #8147
This changes the OpenWriterAt implementation to make client/fd
handling atomic.

This PR stabilizes multi-threaded uploads of bigger files. The root
cause boils down to the old "fun" property of pcloud's fileops API:
sessions are bound to TCP connections. This forces us to use an http
client with only a single connection underneath.

With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks will fail. The probability of that increases
with larger files.

As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
2024-12-03 17:52:44 +00:00
remygrandin
ab58ae5b03 docs: add docker volume plugin troubleshooting steps
This expands the current docker volume plugin troubleshooting steps to
include a state cleanup command and a reminder that an
uninstall/reinstall does not clean up those cache files.

Co-authored-by: albertony <12441419+albertony@users.noreply.github.com>
2024-11-26 20:56:10 +01:00
Michael R. Davis
ca8860177e docs: fix missing state parameter in /auth link in instructions 2024-11-22 22:40:07 +00:00
Nick Craig-Wood
d65d1a44b3 build: fix build failure on ubuntu 2024-11-21 12:05:49 +00:00
Sam Harrison
c1763a3f95 docs: upgrade fontawesome to v6
Also update the Filescom icon.
2024-11-21 11:06:38 +00:00
Nick Craig-Wood
964fcd5f59 s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3

However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the one returned from
`CreateMultipart` which fixes the problem.

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-21 11:04:49 +00:00
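A sketch of the workaround with aws-sdk-go-v2 (names simplified and not rclone's actual code; exact field types can vary between SDK minor versions): keep using the bucket and key passed to CreateMultipartUpload rather than the values echoed back:

```
package ceph

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// uploadFirstPart reuses the tenant-qualified bucket/key we sent, not
// resp.Bucket, which CEPH returns without the "tenant:" prefix.
func uploadFirstPart(ctx context.Context, client *s3.Client, body io.Reader) error {
	bucket, key := "tenant:bucket", "path/to/object"
	resp, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String(bucket), // deliberately not resp.Bucket
		Key:        aws.String(key),
		UploadId:   resp.UploadId,
		PartNumber: aws.Int32(1),
		Body:       body,
	})
	return err
}
```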
Nick Craig-Wood
c6281a1217 Add David Seifert to contributors 2024-11-21 11:04:49 +00:00
Nick Craig-Wood
ff3f8f0b33 Add vintagefuture to contributors 2024-11-21 11:04:49 +00:00
Anthony Metzidis
2d844a26c3 use better docs 2024-11-20 18:05:56 +00:00
Anthony Metzidis
1b68492c85 googlecloudstorage: update docs on service account access tokens 2024-11-20 18:05:56 +00:00
David Seifert
acd5a893e2 test_all: POSIX head/tail invocations
* head -number is not allowed by POSIX.1-2024:
  https://pubs.opengroup.org/onlinepubs/9799919799/utilities/head.html
  https://devmanual.gentoo.org/tools-reference/head-and-tail/index.html
2024-11-20 18:02:07 +00:00
vintagefuture
0214a59a8c icloud: Added note about app specific password not working 2024-11-20 17:43:42 +00:00
Nick Craig-Wood
6079cab090 s3: fix download of compressed files from Cloudflare R2 - fixes #8137
Before this change attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error

    corrupted on transfer: sizes differ src 0 vs dst 999

This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.

This fixes the problem by disabling the middleware that does that
overriding.
2024-11-20 12:08:23 +00:00
Nick Craig-Wood
bf57087a6e s3: fix testing tiers which don't exist except on AWS 2024-11-20 12:08:23 +00:00
Nick Craig-Wood
d8bc542ffc Changelog updates from Version v1.68.2 2024-11-15 14:51:27 +00:00
Nick Craig-Wood
01ccf204f4 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata is in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:20:18 +00:00
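The link-safe variants look roughly like this on a Unix system (an illustrative sketch, not rclone's code): os.Lchown changes ownership of the link itself, and unix.Fchmodat with AT_SYMLINK_NOFOLLOW attempts the same for the mode, which Linux rejects, hence the debug message:

```
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func setLinkMetadata(path string, uid, gid int, mode uint32) {
	// Lchown operates on the symlink itself, not its target.
	if err := os.Lchown(path, uid, gid); err != nil {
		log.Printf("lchown %s: %v", path, err)
	}
	// AT_SYMLINK_NOFOLLOW asks to chmod the link, not the target.
	// Linux doesn't support this and returns EOPNOTSUPP, so treat
	// that case as a debug message rather than an error.
	err := unix.Fchmodat(unix.AT_FDCWD, path, mode, unix.AT_SYMLINK_NOFOLLOW)
	if err == unix.EOPNOTSUPP {
		log.Printf("DEBUG: cannot set mode on symlink %s", path)
	} else if err != nil {
		log.Printf("fchmodat %s: %v", path, err)
	}
}

func main() {
	setLinkMetadata("/tmp/example-link", os.Getuid(), os.Getgid(), 0o644)
}
```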
Nick Craig-Wood
84b64dcdf9 Revert "Merge commit from fork"
This reverts commit 1e2b354456.
2024-11-14 16:20:06 +00:00
Nick Craig-Wood
8cc1020a58 Add Dimitrios Slamaris to contributors 2024-11-14 16:15:49 +00:00
Nick Craig-Wood
1e2b354456 Merge commit from fork
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata is in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:13:57 +00:00
Nick Craig-Wood
f639cd9c78 onedrive: fix integration tests after precision change
We changed the precision of the onedrive personal backend in
c053429b9c from 1mS to 1S.

However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision` which compares with
precision so hopefully won't break again.
2024-11-12 13:09:15 +00:00
Nick Craig-Wood
e50f995d87 operations: fix TestRemoveExisting on crypt backends by shortening the file name 2024-11-12 13:09:15 +00:00
Dimitrios Slamaris
abe884e744 bisync: fix output capture restoring the wrong output for logrus
Before this change, if rclone is used as a library and logrus is used
after a call to rc `sync/bisync`, logging does not work anymore and
leads to writing to a closed pipe.

This change restores the output correctly.

Fixes #8158
2024-11-12 11:42:54 +00:00
Nick Craig-Wood
173b2ac956 serve sftp: update github.com/pkg/sftp to v1.13.7 and fix deadlock in tests
Before this change, upgrading to v1.13.7 caused a deadlock in the tests.

This was caused by additional locking in the sftp package exposing a
bad choice by the rclone code.

See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
2024-11-11 18:15:00 +00:00
Nick Craig-Wood
1317fdb9b8 build: fix comments after golangci-lint upgrade 2024-11-11 18:03:36 +00:00
Nick Craig-Wood
1072173d58 build: update all dependencies 2024-11-11 18:03:34 +00:00
dependabot[bot]
df19c6f7bf build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-11 18:01:03 +00:00
Nick Craig-Wood
ee72554fb9 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-08 19:34:09 +00:00
Nick Craig-Wood
abb4f77568 yandex: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ca2b27422f sugarsync: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
740f6b318c putio: fix server side copying over existing object
This was causing a conflict error. This was fixed by checking for the
existing object and deleting it after the file was server side copied.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
f307d929a8 onedrive: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ceea6753ee dropbox: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
2bafbf3c04 operations: add RemoveExisting to safely remove an existing file
This renames the file first and if the operation is successful then it
deletes the renamed file.
2024-11-08 18:17:55 +00:00
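The pattern is simple enough to sketch against the local filesystem (function names here are hypothetical, not rclone's API):

```
package main

import (
	"fmt"
	"os"
)

// removeExisting frees up a name safely: rename the existing file
// aside, run the operation, then delete the renamed file only if the
// operation succeeded - otherwise rename it back.
func removeExisting(path string, operation func() error) error {
	tmp := path + ".partial" // illustrative suffix
	if err := os.Rename(path, tmp); err != nil {
		return fmt.Errorf("failed to rename: %w", err)
	}
	if err := operation(); err != nil {
		_ = os.Rename(tmp, path) // restore the original on failure
		return err
	}
	return os.Remove(tmp)
}

func main() {
	_ = os.WriteFile("dst.txt", []byte("old"), 0o644)
	err := removeExisting("dst.txt", func() error {
		return os.WriteFile("dst.txt", []byte("new"), 0o644)
	})
	fmt.Println("err:", err)
}
```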
Nick Craig-Wood
3e14ba54b8 gofile: fix server side copying over existing object
This was creating a duplicate.
2024-11-08 14:01:51 +00:00
Nick Craig-Wood
2f7a30cf61 test_all: try to fix mailru rate limits in integration tests
The Mailru backend integration tests have been failing due to new rate
limits on the backend.

This patch

- Removes Mailru from the chunker tests
- Adds the flag so we only run one Mailru test at once
2024-11-08 10:02:44 +00:00
Nick Craig-Wood
0ad925278d Add shenpengfeng to contributors 2024-11-08 10:02:44 +00:00
Nick Craig-Wood
e3053350f3 Add Dimitar Ivanov to contributors 2024-11-08 10:02:44 +00:00
shenpengfeng
b9207e5727 docs: fix function name in comment 2024-10-29 09:26:37 +01:00
Dimitar Ivanov
40159e7a16 sftp: allow inline ssh public certificate for sftp
Currently rclone allows us to specify the path to a public ssh
certificate file.

That works great for cases where we can specify key path, like local
envs.

If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone)
there is currently a limitation that users can only specify the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
2024-10-25 10:40:57 +01:00
Nick Craig-Wood
16baa24964 serve s3: fix excess locking which was making serve s3 single threaded
The fix for this was in the upstream library to narrow the locking
window.

See: https://forum.rclone.org/t/can-rclone-serve-s3-handle-more-than-one-client/48329/
2024-10-25 10:36:50 +01:00
Nick Craig-Wood
72f06bcc4b lib/oauthutil: allow the browser opening function to be overridden 2024-10-24 17:56:50 +01:00
Nick Craig-Wood
c527dd8c9c Add Moises Lima to contributors 2024-10-24 17:56:50 +01:00
Moises Lima
29fd894189 lib/http: disable automatic authentication skipping for unix sockets
Disabling the authentication for unix sockets makes it impossible to
use `rclone serve` behind a proxy that communicates with rclone
via a unix socket.

Re-enabling the authentication should not have any effect on most
users of unix sockets as they do not set authentication up with a unix
socket anyway.
2024-10-24 12:39:28 +01:00
Nick Craig-Wood
175aa07cdd onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-10-23 13:00:32 +01:00
Kaloyan Raev
75257fc9cd s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
2024-10-22 21:15:04 +01:00
Nick Craig-Wood
53ff3b3b32 s3: add Selectel as a provider 2024-10-22 19:54:33 +01:00
Nick Craig-Wood
8b4b59412d fs: fix Don't know how to set key "chunkSize" on upload errors in tests
Before this testing any backend which implemented the OpenChunkWriter
gave this error:

    ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload

This was due to the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
2024-10-22 19:54:33 +01:00
Nick Craig-Wood
264c9fb2c0 drive: implement rclone backend rescue to rescue orphaned files
Fixes #4166
2024-10-21 10:15:01 +01:00
Nick Craig-Wood
1b10cd3732 Add tgfisher to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
d97492cbc3 Add Diego Monti to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
82a510e793 Add Randy Bush to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
9f2c590e13 Add Alexandre Hamez to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
11a90917ec Add Simon Bos to contributors 2024-10-21 10:15:01 +01:00
tgfisher
8ca7b2af07 docs: mention that inline comments are not supported in a filter-file 2024-10-21 09:10:09 +02:00
Diego Monti
a19ddffe92 s3: add Wasabi eu-south-1 region
Ref. https://docs.wasabi.com/docs/what-are-the-service-urls-for-wasabi-s-different-storage-regions
2024-10-14 14:05:33 +02:00
Randy Bush
3e2c0f8c04 docs: fix forward refs in step 9 of using your own client id 2024-10-14 13:25:25 +02:00
Alexandre Hamez
589458d1fe docs: fix Scaleway Glacier website URL 2024-10-12 12:13:47 +01:00
Simon Bos
69897b97fb dlna: fix loggingResponseWriter disregarding log level 2024-10-08 15:27:05 +01:00
albertony
4db09331c6 build: remove required property on boolean inputs
Since boolean inputs are now properly treated as booleans, and the GitHub Web GUI shows
them as checkboxes, setting required does nothing.
2024-10-03 16:31:36 +01:00
albertony
fcd3b88332 build: use inputs context in github workflow
Currently input options are retrieved from the event payload, via github.event.inputs,
and that still works, but boolean values are represented as strings there while in the
dedicated inputs context the boolean types are preserved, which means conditional
expressions can be simplified.
2024-10-03 16:31:36 +01:00
Nick Craig-Wood
1ca3f12672 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
e7a0fd0f70 docs: update overview to show pcloud can set modtime
See 258092f9c6 and #7896
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
c23c59544d Add André Tran to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
9dec3de990 Add Matthias Gatto to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
5caa695c79 Add lostb1t to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
8400809900 Add Noam Ross to contributors 2024-10-03 14:31:11 +01:00
Nick Craig-Wood
e49516d5f4 Add Benjamin Legrand to contributors 2024-10-03 14:31:11 +01:00
Matthias Gatto
9614fc60f2 s3: add Outscale provider
Signed-off-by: matthias.gatto <matthias.gatto@outscale.com>
Co-authored-by: André Tran <andre.tran@outscale.com>
2024-10-02 10:26:41 +01:00
lostb1t
51db76fd47 Add ICloud Drive backend 2024-10-02 10:19:11 +01:00
Noam Ross
17e7ccfad5 drive: add support for markdown format 2024-09-30 17:22:32 +01:00
Benjamin Legrand
8a6fc8535d accounting: fix global error accounting
fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err) which incremented the error at the
global stats level. This led to calls to core/stats with group= filter
returning an error count of 0 even if errors actually occurred.

This change requires the context to be provided when calling
fs.CountError. Doing so, we can retrieve the correct StatsInfo to
increment the errors from.

Fixes #5865
2024-09-30 17:20:42 +01:00
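The general technique can be sketched like this (type and function names are hypothetical, not rclone's actual StatsInfo plumbing): the per-group stats ride along in the context, so error counting lands on the right group:

```
package main

import (
	"context"
	"errors"
	"fmt"
	"sync/atomic"
)

type statsInfo struct{ errors atomic.Int64 }

type statsKey struct{}

var global = &statsInfo{}

// withStats attaches a stats group to the context.
func withStats(ctx context.Context, s *statsInfo) context.Context {
	return context.WithValue(ctx, statsKey{}, s)
}

// countError increments the stats carried by ctx, falling back to the
// global stats only when no group is attached.
func countError(ctx context.Context, err error) error {
	s, ok := ctx.Value(statsKey{}).(*statsInfo)
	if !ok {
		s = global
	}
	s.errors.Add(1)
	return err
}

func main() {
	group := &statsInfo{}
	ctx := withStats(context.Background(), group)
	_ = countError(ctx, errors.New("boom"))
	fmt.Println(group.errors.Load(), global.errors.Load()) // 1 0
}
```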
Nick Craig-Wood
c053429b9c onedrive: fix time precision for OneDrive personal
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.

The precision was set to 1ms as part of:

1473de3f04 onedrive: add metadata support

which was released in v1.66.0.

However it appears not all OneDrive personal accounts support 1ms time
precision and that Microsoft may be migrating accounts away from this
to backends which only support 1s precision.

Fixes #8101
2024-09-30 11:34:06 +01:00
Nick Craig-Wood
18989fbf85 Add RcloneView as a sponsor 2024-09-30 11:34:06 +01:00
Nick Craig-Wood
a7451c6a77 Add Leandro Piccilli to contributors 2024-09-30 11:32:13 +01:00
nielash
5147d1101c cache: skip bisync tests
per ncw: "I don't care about cache as it is deprecated - we should probably stop
it running bisync tests"
https://github.com/rclone/rclone/pull/7795#issuecomment-2163295857
2024-09-29 18:37:52 -04:00
nielash
11ad2a1316 bisync: allow blank hashes on tests
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.

Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
2024-09-29 18:37:52 -04:00
nielash
3c7ad8d961 box: fix server-side copying a file over existing dst - fixes #3511
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.

This change fixes the error by copying to a temporary name first, then moving it
to the real name.

There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.

This should (hopefully) fix 8 bisync integration tests.
2024-09-29 18:37:52 -04:00
nielash
a3e8fb584a sync: add tests for copying/moving a file over itself
This should catch issues like this, for example:
https://github.com/rclone/rclone/issues/3511#issuecomment-528332895
2024-09-29 18:37:52 -04:00
nielash
9b4b3033da fs/cache: fix parent not getting pinned when remote is a file
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.

An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.

This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
	with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
	(preserving the fix from 8d5bc7f28b).

Note that f.Root() should always point to the parent dir as of c69eb84573
2024-09-28 13:49:56 +01:00
Leandro Piccilli
94997d25d2 gcs: add access token auth with --gcs-access-token 2024-09-27 17:37:07 +01:00
Nick Craig-Wood
19458e8459 accounting: write the current bwlimit to the log on SIGUSR2 2024-09-26 18:01:18 +01:00
Nick Craig-Wood
7d32da441e accounting: fix wrong message on SIGUSR2 to enable/disable bwlimit
This was caused by the message code only looking at one of the
bandwidth filters, not all of them.

Fixes #8104
2024-09-26 17:53:58 +01:00
Nick Craig-Wood
22e13eea47 gphotos: implement --gphotos-proxy to allow download of full resolution media
This works in conjunction with the gphotosdl tool

https://github.com/rclone/gphotosdl
2024-09-26 12:57:28 +01:00
Nick Craig-Wood
de9b593f02 googlephotos: remove noisy debugging statements 2024-09-26 12:52:53 +01:00
Nick Craig-Wood
b2b4f8196c docs: add note to CONTRIBUTING that the overview needs editing in 2 places 2024-09-25 17:56:33 +01:00
Nick Craig-Wood
84cebb6872 test_all: add ignoretests parameter for skipping certain tests
Use like this for a `backend:` in `config.yaml`

   ignoretests:
     - "fs/operations"
     - "fs/sync"
2024-09-25 16:03:43 +01:00
Nick Craig-Wood
cb9f4f8461 build: replace "golang.org/x/exp/slices" with "slices" now go1.21 is required 2024-09-25 16:03:43 +01:00
Nick Craig-Wood
498d9cfa85 Changelog updates from Version v1.68.1 2024-09-24 17:26:49 +01:00
Dan McArdle
109e4ed0ed Makefile: Fail when doc recipes create dir named '$HOME'
This commit makes the `commanddocs` and `backenddocs` fail if they
accidentally create a directory named '$HOME'. This is basically a
regression test for issue #8092.

It also makes those recipes rmdir the '$HOME/.config/rclone/'
directories. This will only delete empty directories, so nothing of
value should ever be deleted.
2024-09-24 10:38:25 +01:00
Dan McArdle
353270263a Makefile: Prevent doc recipe from creating dir named '$HOME'
Prior to this commit, running `make doc` had the unwanted side effect of
creating a directory literally named `$HOME` in the source tree.

Fixed #8092
2024-09-24 10:38:25 +01:00
wiserain
f8d782c02d pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when 
attempting to unwrap object info from `fs.OverrideRemote`. 

This commit introduces a new function that can correctly unwrap 
object info from both regular objects and `fs.OverrideRemote` types, 
ensuring uploads with accurate cid/gcid calculations in all scenarios.
2024-09-21 10:22:31 +09:00
albertony
3dec664a19 bisync: change exit code from 2 to 7 for critically aborted run 2024-09-20 18:51:08 +02:00
albertony
a849fd59f0 cmd: change exit code from 1 to 2 for syntax and usage errors 2024-09-20 18:51:08 +02:00
nielash
462a1cf491 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-09-20 17:43:52 +01:00
Nick Craig-Wood
0b7b3cacdc azureblob: add --azureblob-use-az to force the use of the Azure CLI for auth
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.

Fixes #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
976103d50b azureblob: add --azureblob-disable-instance-discovery
If set this skips requesting Microsoft Entra instance metadata

See #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
192524c004 s3: add initial --s3-directory-bucket to support AWS Directory Buckets
This will ensure no Content-Md5 headers are sent and ensure ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether single or multipart uploaded.

This also sets "no_check_bucket = true".

This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.

See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
2024-09-19 12:01:24 +01:00
Nick Craig-Wood
28667f58bf Add Lawrence Murray to contributors 2024-09-19 12:01:24 +01:00
Lawrence Murray
c669f4e218 backend/protondrive: improve performance of Proton Drive backend
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call, and tidies up a couple of functions around this with unused
parameters.

Related to performance issues reported in #7322 and #7413
2024-09-18 18:15:24 +01:00
Nick Craig-Wood
1a9e6a527d ftp: implement --ftp-no-check-upload to allow upload to write only dirs
Fixes #8079
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
8c48cadd9c docs: document that fusermount3 may be needed when mounting/unmounting
See: https://forum.rclone.org/t/documentation-fusermount-vs-fusermount3/47816/
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
76e1ba8c46 Add rishi.sridhar to contributors 2024-09-18 12:57:01 +01:00
Nick Craig-Wood
232e4cd18f Add quiescens to contributors 2024-09-18 12:57:00 +01:00
buengese
88141928f2 docs/zoho: update options 2024-09-17 20:40:42 +01:00
buengese
a2a0388036 zoho: make upload cutoff configurable 2024-09-17 20:40:42 +01:00
buengese
48543d38e8 zoho: add support for private spaces 2024-09-17 20:40:42 +01:00
buengese
eceb390152 zoho: try to handle rate limits a bit better 2024-09-17 20:40:42 +01:00
buengese
f4deffdc96 zoho: print clear error message when missing oauth scope 2024-09-17 20:40:42 +01:00
buengese
c172742cef zoho: switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API 2024-09-17 20:40:42 +01:00
buengese
7daed30754 zoho: use download server to accelerate downloads
Co-authored-by: rishi.sridhar <rishi.sridhar@zohocorp.com>
2024-09-17 20:40:42 +01:00
quiescens
b1b4c7f27b opendrive: add about support to backend 2024-09-17 17:20:42 +01:00
wiserain
ed84553dc1 pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation 
of existing login methods and requirement for additional verifications. 

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing 
rest.Client and incorporate the necessary headers including 
x-captcha-token for each request.

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.

Fixes #7950 #8005
2024-09-18 01:09:21 +09:00
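Point 2 above is the classic header-injecting wrapper; a minimal stdlib sketch with a stand-in token source (this is not the actual pikpakClient):

```
package main

import (
	"fmt"
	"net/http"
)

// captchaTransport decorates an underlying RoundTripper so that every
// request carries an x-captcha-token header.
type captchaTransport struct {
	base  http.RoundTripper
	token func() string // stand-in for a managed, refreshable token
}

func (t *captchaTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	r := req.Clone(req.Context()) // don't mutate the caller's request
	r.Header.Set("x-captcha-token", t.token())
	return t.base.RoundTrip(r)
}

func main() {
	client := &http.Client{Transport: &captchaTransport{
		base:  http.DefaultTransport,
		token: func() string { return "dummy-token" },
	}}
	fmt.Println(client != nil)
}
```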
Nick Craig-Wood
c94edbb76b webdav: nextcloud: implement backoff and retry for 423 LOCKED errors
When uploading chunked files to nextcloud, it gives a 423 error while
it is merging files.

This waits for an exponentially increasing amount of time for it to
clear.

If after we have received a 423 error we receive a 404 error then we
assume all is good as this is what appears to happen in practice.

Fixes #7109
2024-09-17 16:46:02 +01:00
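The retry shape described above, sketched with illustrative constants (not the actual nextcloud backend code): retry on 423, double the wait each time, and treat a 404 seen after a 423 as success:

```
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// waitForMerge polls until the server stops answering 423 Locked.
func waitForMerge(check func() (int, error)) error {
	delay := 100 * time.Millisecond
	seen423 := false
	for try := 0; try < 10; try++ {
		status, err := check()
		if err != nil {
			return err
		}
		switch {
		case status == http.StatusLocked: // 423: still merging chunks
			seen423 = true
			time.Sleep(delay)
			delay *= 2 // exponential backoff
		case status == http.StatusNotFound && seen423:
			return nil // merge finished and the object moved into place
		case status < 300:
			return nil
		default:
			return fmt.Errorf("unexpected status %d", status)
		}
	}
	return errors.New("still locked after retries")
}

func main() {
	fmt.Println(waitForMerge(func() (int, error) { return 200, nil }))
}
```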
Nick Craig-Wood
2dcb327bc0 s3: fix rclone ignoring static credentials when env_auth=true
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.

However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.

This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.

Fixes #8067
2024-09-17 16:07:56 +01:00
Nick Craig-Wood
874d66658e fs: fix setting stringArray config values from environment variables
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:

    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value

This was caused by the parser trying to parse the input string as a
JSON value.

When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally; however, this turned out not to be true.

A defined representation was chosen - a comma separated string and
this was documented and tests were introduced in this patch.

This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

Would be interpreted as

    --exclude "a,b"

Whereas this new code will interpret it as

    --exclude "a" --exclude "b"

The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.

If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

(Note this needs to have the quotes in so at the unix shell that would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
2024-09-13 15:52:51 +01:00
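Since the defined representation is plain CSV, the parse is just the stdlib encoding/csv reader; a small sketch showing both forms from the message above:

```
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	// RCLONE_EXCLUDE=a,b    -> two values: "a" and "b"
	// RCLONE_EXCLUDE="a,b"  -> one value: "a,b" (CSV quoting keeps the comma)
	for _, env := range []string{`a,b`, `"a,b"`} {
		values, err := csv.NewReader(strings.NewReader(env)).Read()
		if err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		fmt.Printf("%s -> %q\n", env, values)
	}
}
```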
Nick Craig-Wood
3af757e26d rc: fix default value of --metrics-addr
Before this fix it was empty string, which isn't a good default for a
stringArray.
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
fef1b61585 fs: fix --dump filters not always appearing
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance whether --dump filters would
show the filters or not as it was depending on the "main" block having
being read first to set the Dump flags.

This initialises the options blocks in a defined order which is
alphabetically but with main first which fixes the problem.
2024-09-13 15:52:51 +01:00
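The ordering rule itself is tiny; a sketch (the block names are made up, and this is not the actual registry code):

```
package main

import (
	"fmt"
	"sort"
)

func main() {
	blocks := []string{"vfs", "filter", "main", "log"}
	sort.Slice(blocks, func(i, j int) bool {
		// "main" always sorts first so flags like --dump are read
		// before any options block that depends on them.
		if blocks[i] == "main" {
			return true
		}
		if blocks[j] == "main" {
			return false
		}
		return blocks[i] < blocks[j]
	})
	fmt.Println(blocks) // [main filter log vfs]
}
```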
Nick Craig-Wood
3fca7a60a5 docs: correct notes on docker manual build 2024-09-13 15:52:51 +01:00
Nick Craig-Wood
6b3f41fa0c Add ttionya to contributors 2024-09-13 15:52:51 +01:00
ttionya
3d0ee47aa2 build: fix docker release build - fixes #8062
This updates the action to use `docker/build-push-action` instead of `ilteoood/docker_buildx`
which fixes the build problem in testing.
2024-09-12 17:57:53 +01:00
Pawel Palucha
da70088b11 docs: add section for improving performance for s3 2024-09-12 11:29:35 +01:00
Nick Craig-Wood
1bc9b94cf2 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects which don't have a JSON
body and is irrelevant.
2024-09-10 16:19:38 +01:00
Nick Craig-Wood
15a026d3be Add Divyam to contributors 2024-09-10 16:19:38 +01:00
Divyam
ad122c6f6f serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-09 10:07:25 +01:00
Nick Craig-Wood
b155231cdd Start v1.69.0-DEV development 2024-09-08 17:22:19 +01:00
Nick Craig-Wood
49f69196c2 Version v1.68.0 2024-09-08 16:21:56 +01:00
Nick Craig-Wood
3f7651291b gofile: fix failed downloads on newly uploaded objects
The upload routine no longer returns a url to download the object.

This fixes the problem by fetching it if necessary when we attempt to
Open the object.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
796013dd06 gofile: fix Move a file
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).

However we know the parent ID as it is in the directory cache, so use
that instead.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
e0da406ca7 test_all: mark linkbox fs/sync test TestSyncOverlapWithFilter as ignore
This gives the error

> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4

This happens because linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
2024-09-08 14:58:14 +01:00
albertony
9a02c04028 jottacloud: fix setting of metadata on server side move - fixes #7900 2024-09-07 16:00:03 +01:00
albertony
918185273f docs: group the different options affecting lsjson output 2024-09-07 09:41:08 +01:00
Nick Craig-Wood
3f2074901a fichier: fix server side move - fixes #7856
The server side move had a combination of bugs
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
2024-09-06 18:20:10 +01:00
Nick Craig-Wood
648afc7df4 fichier: Fix detection of Flood Detected error 2024-09-06 18:20:10 +01:00
Nick Craig-Wood
16e0245a8e rc: add vfs/queue-set-expiry to adjust expiry of items in the VFS queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
59acb9dfa9 rc: add vfs/queue to show the status of the upload queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
bfec159504 vfs: keep a record of the file size in the writeback queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
842396c8a0 build: fix gocritic change missed in merge
The original problem was introduced here

bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here

f1a84d171e build: fix build after update
2024-09-06 17:33:35 +01:00
Nick Craig-Wood
5f9a201b45 Add Oleg Kunitsyn to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
22583d0a5f Add fsantagostinobietti to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
f1466a429c Add Mathieu Moreau to contributors 2024-09-06 17:33:35 +01:00
Florian Klink
e3b09211b8 lib/sd-activation: wrap coreos/go-systemd
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.

Stub it on our side; we can still remove this once it gets merged.
2024-09-06 17:21:56 +01:00
Florian Klink
156feff9f2 sftp: support listening on passed FDs 2024-09-06 17:21:56 +01:00
Florian Klink
b29a22095f http: fix addr CLI arg help text
This was missing the fact rclone also supports listening on Unix Domain
Sockets.
2024-09-06 17:21:56 +01:00
Florian Klink
861c01caf5 http: support listening on passed FDs
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed by
`--{{ .Prefix }}addr`).

This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:

```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
2024-09-06 17:21:56 +01:00
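Under the hood this is the systemd socket-activation protocol; a sketch using the coreos/go-systemd package wrapped by lib/sd-activation above (plain HTTP here for brevity, not rclone's real server setup):

```
package main

import (
	"log"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// Listeners returns one net.Listener per FD passed by the service
	// manager (via LISTEN_FDS); it is empty when not socket-activated.
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatal(err)
	}
	if len(listeners) == 0 {
		log.Fatal("not socket-activated; fall back to --addr here")
	}
	for _, l := range listeners {
		l := l // pin for the goroutine (pre-go1.22 loop semantics)
		log.Println("serving on", l.Addr())
		go func() { _ = http.Serve(l, nil) }()
	}
	select {} // serve until killed
}
```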
Nick Craig-Wood
f1a84d171e build: fix build after update
This adds an import missed in

bcdfad3c83 build: update logging statements to make json log work #6038
2024-09-06 17:18:46 +01:00
albertony
bcdfad3c83 build: update logging statements to make json log work - fixes #6038
This changes log statements from log to fs package, which is required for --use-json-log
to properly make log output in JSON format. The recently added custom linting rule,
handled by ruleguard via gocritic via golangci-lint, warns about these and suggests
the alternative. Fixing was therefore basically running "golangci-lint run --fix",
although some manual fixup of mainly imports are necessary following that.
2024-09-06 17:04:18 +01:00
albertony
88b0757288 build: update custom linting rule for log to suggest new non-format functions 2024-09-06 17:04:18 +01:00
albertony
33d6c3f92f fs: add non-format variants of log functions to avoid non-constant format string warnings 2024-09-06 17:04:18 +01:00
albertony
752809309d fs: add log Printf, Fatalf and Panicf 2024-09-06 17:04:18 +01:00
albertony
4a54cc134f fs: refactor base log method name for improved consistency 2024-09-06 17:04:18 +01:00
albertony
dfc2c98bbf fs: refactor log statements to use common helper 2024-09-06 17:04:18 +01:00
albertony
604d6bcb9c build: enable custom linting rules with ruleguard via gocritic 2024-09-06 17:04:18 +01:00
Oleg Kunitsyn
d15704ef9f rcserver: implement prometheus metrics on a dedicated port - fixes #7940 2024-09-06 15:00:36 +01:00
fsantagostinobietti
26bc9826e5 swift: add total/free space info in about command.
With the enhancement in version v2.0.3 of the ncw/swift library, we can now get Total and Free space info from remotes that support this feature (e.g. Blomp storage).
2024-09-06 12:46:51 +01:00
Mathieu Moreau
2a28b0eaf0 docs: filtering: added Byte unit for min/max-size parameters. 2024-09-06 12:28:29 +01:00
Nick Craig-Wood
2d1c2b1f76 config encryption: set, remove and check to manage config file encryption #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
ffb2e2a6de config: use --password-command to set config file password if supplied
Before this change, rclone ignored the --password-command on the
rclone config setting except when decrypting an existing config file.

This change allows for offloading the password storage/generation into
external hardware key or other protected password storage.

Fixes #7859
2024-09-06 10:34:29 +01:00
Nick Craig-Wood
c9c283533c config: factor --password-command code into its own function #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
71799d7efd Add yuval-cloudinary to contributors 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
8f4fdf6cc8 Add nipil to contributors 2024-09-06 10:34:29 +01:00
yuval-cloudinary
91b11f9eac documentation: add cheatsheet for configuration encryption 2024-09-05 17:39:25 +01:00
nipil
b49927fbd0 docs: more secure two-step signature and hash validation 2024-09-05 16:54:26 +01:00
Nick Craig-Wood
1a8b7662e7 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
6ba3e24853 serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
802a938bd1 zoho: fix inefficiencies uploading with new API to avoid throttling
Before this fix, rclone queried the uploaded object to find its size
and modtime after upload as the API did not return these items.

Zoho have subsequently modified the API to return these items so
rclone uses them to avoid an API call.

This should help with rclone being throttled by Zoho.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/20
2024-09-04 10:45:47 +01:00
Nick Craig-Wood
9deb3e8adf Add crystalstall to contributors 2024-09-04 10:45:12 +01:00
crystalstall
296281a6eb docs: fix some function names in comments
Signed-off-by: crystalstall <crystalruby@qq.com>
2024-09-02 18:20:08 +02:00
albertony
711478554e lib/file: use builtin MkdirAll with go1.22 instead of our own custom version for windows
Starting with go1.22 the standard os.MkdirAll has improved its handling of volume names,
and as part of that it now stops recursing into the parent directory if it is a volume name
(see: cd589c8a73).
This is similar to what was our main change and reason for creating a custom version. When
building with go1.22 or newer we can therefore stop using our custom version, with the
advantage that we automatically get current and future relevant improvements from golang.
To support building with go1.21 the existing custom version is still kept, and therefore
also our wrapper function file.MkdirAll - but it now just calls os.MkdirAll with go1.22
or newer on Windows.

See #5401, #6420 and acf1e2df84 for details about the
creation of our custom version of MkdirAll.
2024-09-02 18:16:38 +02:00
albertony
906aef91fa docs: document that paths using volume guids are supported 2024-09-02 18:01:47 +02:00
Nick Craig-Wood
6b58cd0870 s3: fix accounting for multipart transfers after migration to SDKv2 #4989 2024-08-31 20:11:53 +01:00
Sebastian Bünger
af9f8ced80 yandex: implement custom user agent to help with upload speeds 2024-08-29 18:25:08 +01:00
Georg Welzel
c63f1865f3 operations: copy: generate stable partial suffix 2024-08-28 08:45:38 +02:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
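The equivalence test bisync applies on the next run is just a window comparison; a minimal sketch using the 12:54:49.7 example from the message above:

```
package main

import (
	"fmt"
	"time"
)

// within reports whether two modtimes should be treated as equal given
// the remote's modify window (its timestamp precision).
func within(a, b time.Time, window time.Duration) bool {
	d := a.Sub(b)
	if d < 0 {
		d = -d
	}
	return d <= window
}

func main() {
	listed := time.Date(2024, time.August, 24, 12, 54, 49, 700_000_000, time.UTC) // preserved precision
	reported := time.Date(2024, time.August, 24, 12, 54, 50, 0, time.UTC)         // backend rounds up
	fmt.Println(within(listed, reported, time.Second)) // true: not a change
}
```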
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-08-17 13:09:44 +02:00
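For reference, the built-ins accept one or more arguments of any ordered type, so the old helpers collapse to direct calls:

```
package main

import "fmt"

func main() {
	fmt.Println(min(3, 5), max(3, 5)) // 3 5 - built in since go1.21
	fmt.Println(min(2.5, 1.0, 9.0))   // 1 - variadic and generic over ordered types
}
```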
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong; it seems `application/x-subrip` is the official
type according to the mime database, at least (and it still works with
the Samsung TV I tested with). Also add that one to `fs/mimetype.go`.

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element to subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
2024-08-15 22:01:52 +01:00
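The fix is essentially a change of the map's value type; a sketch with a stand-in node type:

```
package main

import (
	"fmt"
	"strings"
)

type node struct{ name string }

func main() {
	// Keying by basename alone meant "video.sub" overwrote "video.idx".
	// Appending to a slice keeps both, so the pair stays usable.
	subtitlesByName := map[string][]node{}
	for _, f := range []string{"video.idx", "video.sub"} {
		base := strings.TrimSuffix(strings.TrimSuffix(f, ".idx"), ".sub")
		subtitlesByName[base] = append(subtitlesByName[base], node{f})
	}
	fmt.Println(subtitlesByName["video"]) // [{video.idx} {video.sub}]
}
```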
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
It seems pretty common for subtitles to be put in a
subdirectory called "Subs", rather than in the same directory as the
media file itself.

This covers that use case by checking whether the returned listing
contains a directory called "Subs".

If it does, its child nodes are added to the list before they're
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
2024-08-15 15:36:38 +01:00
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
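On macOS the native primitive is clonefile(2), exposed in Go through golang.org/x/sys/unix; a hedged sketch of the clone-then-fall-back shape (not rclone's actual implementation):

```
//go:build darwin

package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an instant, deduplicated APFS clone first and only
// falls back to a byte-for-byte copy on filesystems that can't clone.
func cloneOrCopy(src, dst string) error {
	if err := unix.Clonefile(src, dst, 0); err == nil {
		return nil // cloned: shares storage until either copy is modified
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(cloneOrCopy("a.txt", "b.txt"))
}
```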
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
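The mechanism is a custom dialer: the URL keeps supplying the Host header and path while every connection actually goes to the socket. A stdlib-only sketch (the socket path is a placeholder):

```
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	socketPath := "/tmp/my.socket" // placeholder for the configured path
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the network/address derived from the URL and
			// dial the unix socket; the URL still sets Host and path.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", socketPath)
			},
		},
	}
	resp, err := client.Get("http://localhost/")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```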
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
2024-08-14 21:13:09 +01:00
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
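
The shape of the fix, using the standard library's log.Printf as a stand-in for fs.Logf:

    package main

    import "log"

    func main() {
    	msg := "file is 100% synced" // user-controlled text containing a %
    	// Before: govet flags the non-constant format string, and the stray
    	// % is misinterpreted as a verb at runtime.
    	log.Printf(msg)
    	// After: re-arranged with a constant "%s" format string.
    	log.Printf("%s", msg)
    }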
2024-08-14 18:25:40 +01:00
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed so had nolint lines added.

This should probably be done in a neater way perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

Which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency for the SDKv1 and this commit removes it
from go.mod also.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded
whereas the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
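
An illustrative sketch of the encoding SDKv2 expects (the key is a dummy value):

    package main

    import (
    	"crypto/md5"
    	"encoding/base64"
    	"fmt"
    )

    func main() {
    	key := []byte("0123456789abcdef0123456789abcdef") // dummy 32-byte SSE-C key
    	sum := md5.Sum(key)
    	// SDKv2 wants both values pre-encoded; SDKv1 encoded them itself.
    	fmt.Println("SSECustomerKey:   ", base64.StdEncoding.EncodeToString(key))
    	fmt.Println("SSECustomerKeyMD5:", base64.StdEncoding.EncodeToString(sum[:]))
    }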
2024-08-07 12:13:13 +01:00
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: fix correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short life times
for added security, and they get renewed automatically. This means
that client certificate can expire in the middle of long running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
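
A minimal sketch of the approach (file names are illustrative, and this is not the exact rclone code): a GetClientCertificate callback that reloads the pair from disk once it is within 30s of expiry:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"net/http"
    	"sync"
    	"time"
    )

    // newCertReloader returns a callback which reloads the key pair from
    // disk when the cached certificate is missing or about to expire.
    func newCertReloader(certFile, keyFile string) func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
    	var (
    		mu     sync.Mutex
    		cert   *tls.Certificate
    		expiry time.Time
    	)
    	return func(_ *tls.CertificateRequestInfo) (*tls.Certificate, error) {
    		mu.Lock()
    		defer mu.Unlock()
    		if cert == nil || time.Until(expiry) < 30*time.Second {
    			c, err := tls.LoadX509KeyPair(certFile, keyFile)
    			if err != nil {
    				return nil, err
    			}
    			leaf, err := x509.ParseCertificate(c.Certificate[0])
    			if err != nil {
    				return nil, err
    			}
    			cert, expiry = &c, leaf.NotAfter
    		}
    		return cert, nil
    	}
    }

    func main() {
    	cfg := &tls.Config{GetClientCertificate: newCertReloader("client.crt", "client.key")}
    	_ = &http.Transport{TLSClientConfig: cfg}
    }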
2024-07-24 15:02:32 +01:00
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated, however the replacement gives a go
vet warning so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, a gcid required for uploads was calculated locally. This process was 
resource-intensive and time-consuming. By first checking for a cached gcid 
on the server, we can potentially avoid the local calculation entirely. 
This significantly improves upload speed especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
fix #7824
Statements like rclone copy <somewhere> . will spontaneously miss files
if . expands to a path with a Full Width replacement character.
This is due to the incorrect order in which
relative paths and decoding were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The vfs used the hardcoded OS encoding when creating a temp file,
but decoded it with the encoding for the local filesystem (--local-encoding)
when copying it to the remote.
This caused failures when the filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- move in use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team, who regard sscanf as a mistake.

This change only uses Sscan for integers and uses the correct routine
for everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
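
An illustrative sketch of the per-type parsing this implies (not the actual configstruct code):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// strconv.ParseBool accepts the spellings users write in configs:
    	// 1, t, T, TRUE, true, True, 0, f, F, FALSE, false, False.
    	b, err := strconv.ParseBool("True")
    	fmt.Println(b, err)
    	// Durations get their own parser rather than fmt.Sscan.
    	d, err := time.ParseDuration("1h30m")
    	fmt.Println(d, err)
    	// Integers are fine to scan.
    	var n int
    	_, err = fmt.Sscan("42", &n)
    	fmt.Println(n, err)
    }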
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a
24-byte time.Time data structure in long-lived data structures. So although
the golang sftp package uses time.Time as its external interface, we can
re-encode the value back to the original format and save memory.
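
An illustrative sketch of the saving (the struct is an example, not the sftp backend's type):

    package main

    import (
    	"fmt"
    	"time"
    	"unsafe"
    )

    // entry stores mtime as the 4-byte value SFTP sends, not a time.Time.
    type entry struct {
    	mtime uint32 // seconds since the Unix epoch
    }

    // ModTime re-encodes to time.Time only when asked for.
    func (e entry) ModTime() time.Time { return time.Unix(int64(e.mtime), 0) }

    func main() {
    	e := entry{mtime: uint32(time.Now().Unix())}
    	fmt.Println(e.ModTime())
    	fmt.Println(unsafe.Sizeof(time.Time{}), "bytes vs", unsafe.Sizeof(e.mtime))
    }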

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
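
An illustrative sketch of the two workarounds (function and parameter names are examples, not the backend code):

    package main

    import "fmt"

    // shouldFetchNextPage either keeps fetching until an empty page, or
    // refetches when the page is within thresholdPct percent of the limit.
    func shouldFetchNextPage(got, limit int, fetchUntilEmptyPage bool, thresholdPct int) bool {
    	if fetchUntilEmptyPage {
    		return got > 0
    	}
    	if thresholdPct > 0 {
    		return got*100 >= limit*thresholdPct
    	}
    	return got == limit // spec behaviour: a short page ends the listing
    }

    func main() {
    	// Ceph returned 950 of a requested 1000: with a 90% threshold we
    	// fetch another page instead of assuming the listing is complete.
    	fmt.Println(shouldFetchNextPage(950, 1000, false, 90))
    }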
2024-06-28 11:14:26 +01:00
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
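
A sketch of the safe pattern (illustrative, not the actual vfs code): re-lock in a defer so the lock is restored even if the callback panics:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // callUnlocked temporarily releases mu around fn, re-locking even if
    // fn panics, so a deferred mu.Unlock() in the caller stays valid.
    func callUnlocked(mu *sync.Mutex, fn func()) {
    	mu.Unlock()
    	defer mu.Lock() // runs during panic unwinding too
    	fn()
    }

    func main() {
    	var mu sync.Mutex
    	mu.Lock()
    	defer mu.Unlock()
    	defer func() {
    		if r := recover(); r != nil {
    			fmt.Println("recovered:", r) // the mutex is still locked here
    		}
    	}()
    	callUnlocked(&mu, func() { panic("boom") })
    }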
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo is doing something with this keyword though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
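
An illustrative sketch of the resulting check (not the actual backend code):

    package main

    import (
    	"fmt"
    	"net/http"
    )

    // isNotFound treats a 405 (a delete marker answered with a versionId)
    // the same as a 404 for the purposes of headObject.
    func isNotFound(status int) bool {
    	return status == http.StatusNotFound || status == http.StatusMethodNotAllowed
    }

    func main() {
    	fmt.Println(isNotFound(404), isNotFound(405), isNotFound(200))
    }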
2024-06-13 18:09:29 +01:00
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject, touch
was going wrong as it thought it was passed an object.

This should not happen normally but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed and every now and again we get this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

Because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed
which was rate limited by Zoho.

However Zoho can't have duplicate files anyway so this fix just
removes that check and the PutUnchecked method which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
Nick Craig-Wood
d068e0b1a9 operations: fix hashing problem in integration tests
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like

    Failed to make new multi hasher: requested set 000001ff contains unknown hash types

This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.

This was added in this commit

d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails

to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supports. It brought this
time down to 20s.

This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine) but this is run only on the integration
test machine and not on all the CI.
2024-06-12 11:11:54 +01:00
Nick Craig-Wood
a341065b8d Add Bill Fraser to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
0c29a1fe31 Add Florian Klink to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
1a40300b5f Add Michał Dzienisiewicz to contributors 2024-06-12 11:11:54 +01:00
dependabot[bot]
44be27729a build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 09:33:04 +01:00
wiserain
b7624287ac pikpak: implement configurable chunk size for multipart upload
Previously, the fixed 10MB chunk size could lead to exceeding the maximum 
allowed number of parts for very large files. Similar to other backends, options for 
chunk size and upload concurrency are now user-configurable. Additionally, 
the internal library is used to automatically adjust chunk size to prevent exceeding 
upload part limitations.

Fixes #7850
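
An illustrative sketch of the kind of adjustment the multipart library makes (limits and sizes are examples):

    package main

    import "fmt"

    // adjustChunkSize doubles chunkSize until the file fits in maxParts
    // parts, so very large files stay under the upload part limit.
    func adjustChunkSize(size, chunkSize, maxParts int64) int64 {
    	for (size+chunkSize-1)/chunkSize > maxParts {
    		chunkSize *= 2
    	}
    	return chunkSize
    }

    func main() {
    	// A 100 GiB file with 10 MiB chunks needs 10240 parts; doubling the
    	// chunk size keeps it under a 10000-part limit.
    	fmt.Println(adjustChunkSize(100<<30, 10<<20, 10000))
    }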
2024-06-12 13:19:25 +09:00
Evan Harris
6db9f7180f docs: added info about --progress terminal width 2024-06-11 21:37:40 +01:00
wiserain
b601961e54 pikpak: remove PublicLink from integration tests
This commit removes the test for PublicLink as it is not currently supported in the test environment.

This removes it from the integration tests to avoid meaningless retries.
2024-06-12 04:43:49 +09:00
Nick Craig-Wood
51aca9cf9d onedrive: add --onedrive-hard-delete to permanently delete files
Fixes #7812
2024-06-11 17:48:15 +01:00
Bill Fraser
7c0645dda9 dropbox: add option to override root namespace
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.

(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change, who knows, maybe somebody relies on it.)
2024-06-11 12:49:01 +01:00
Florian Klink
aed77a8fb2 tree-wide: replace /bin/bash with /usr/bin/env bash
The latter is more portable, while the former only works on systems
where /bin/bash exists (or is symlinked appropriately).
2024-06-11 12:47:47 +01:00
Michał Dzienisiewicz
4250dd98f3 protondrive: don't auth with an empty access token 2024-06-11 12:46:00 +01:00
nielash
c13118246c serve s3: fix in-memory metadata storing wrong modtime
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.

The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
2024-06-11 12:10:43 +01:00
nielash
a56cd52025 vfs: fix renaming a directory
Before this change, renaming a directory d failed to rename its key in
d.parent.items, which caused trouble later when doing Dir.Stat on a
subdirectory. This change fixes the issue.
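
An illustrative sketch of the gotcha (a plain map standing in for the vfs types):

    package main

    import "fmt"

    func main() {
    	items := map[string]string{"old": "directory d"}
    	// Renaming must also move the entry's key in the parent's map,
    	// otherwise later lookups (as in Dir.Stat) by the new name fail.
    	items["new"] = items["old"]
    	delete(items, "old")
    	fmt.Println(items)
    }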
2024-06-11 12:09:19 +01:00
nielash
3ae4534ce6 fstest: make RandomRemoteName shorter
Use 12 random characters instead of 24, to avoid trouble on the bisync tests
2024-06-11 12:02:48 +01:00
nielash
9c287c72d6 googlephotos: remove unnecessary nil check
golangci-lint was complaining about this. `entry` can never be nil because
itemToDirEntry never returns a nil interface value
2024-06-11 11:55:42 +01:00
nielash
862d5d6086 s3, googlecloudstorage, azureblob: fix encoding issue with dir path comparison
`remote` has been converted ToStandardPath a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
2024-06-11 11:54:54 +01:00
Nick Craig-Wood
003f4531fe sync: don't test reading metadata if we can't write it 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
a52e887ddd linkbox: ignore TestListDirSorted test until encoding is implemented 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
b7681e72bf Add Tomasz Melcer to contributors 2024-06-11 11:31:35 +01:00
wiserain
ce5024bf33 pikpak: improve upload reliability and resolve potential file conflicts
This attempts to resolve upload conflicts by implementing cancel/cleanup on failed
uploads

* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation

Fixes #7787
2024-06-10 18:08:07 +01:00
Tomasz Melcer
d2af114139 sftp: --sftp-connections to limit maximum number of connections
Done based on a similar feature in the ftp remote. However, the switch
name is different, as `concurrency` is already taken by a different
feature.
2024-06-09 18:36:30 +01:00
Nick Craig-Wood
c8d6b02dd6 ulozto: fix panic in various integration tests
Before this change some of the integration tests were producing this error

    panic: runtime error: invalid memory address or nil pointer dereference

This was caused by an `fs.Object` whose type (`*Object`) was
not `nil`, but whose value within was `nil`. Such values do not compare
as `nil`, leading to the panic.

This is a classic Go gotcha: https://go.dev/doc/faq#nil_error

This was easily fixed by changing the type of one function to return
fs.Object instead of *Object.
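
An illustrative sketch of the gotcha and the fix (type names are examples):

    package main

    import "fmt"

    type Object struct{}

    func (o *Object) String() string { return "object" }

    // bad returns the concrete type, so a nil *Object assigned to an
    // interface compares as non-nil.
    func bad() *Object { return nil }

    // good returns the interface type, so nil stays nil.
    func good() fmt.Stringer { return nil }

    func main() {
    	var s fmt.Stringer = bad()
    	fmt.Println(s == nil)      // false - a typed nil inside an interface
    	fmt.Println(good() == nil) // true
    }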
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
55cac4c34d swift: fix integration tester with use_segments_container=false 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
7ce60a47e8 drive: fix tests for backend query command
The tests assumed that there would be only one match, but on the
integration test server there are multiple matches due to failed test
runs.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
27496fb26d mailru: attempt to fix throttling by increasing min sleep to 100ms
Before this change we waited a minimum of 10ms between API calls for
mailru.

The tests no longer pass at this rate, so this increases the time to
100ms.

See #7768
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
39f8d039fe sync: fix expecting SFTP to have MkdirMetadata method: optional feature not implemented
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.

This fixes the problem by making the need to copy metadata explicit
rather than implicit in a value being present or not.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
57f5ad188b operations: fix incorrect modtime on some multipart transfers
In this commit

6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk

We broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.

This was fixed by setting the "mtime" in the metadata if it was
missing.
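
An illustrative sketch of the fix (map contents are examples):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	meta := map[string]string{"content-type": "text/plain"}
    	srcModTime := time.Now()
    	// The source backend supplied no "mtime", so set it from the source
    	// object's modification time before applying the metadata.
    	if _, ok := meta["mtime"]; !ok {
    		meta["mtime"] = srcModTime.Format(time.RFC3339Nano)
    	}
    	fmt.Println(meta["mtime"])
    }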
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
76798d5bb1 sync: fix tests on backends which can't have empty directories 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
5921bb0efd cache: fix tests when testing for Object.SetMetadata 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
ce0d8a70a3 Add Charles Hamilton to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
c23a40cb2a Add Thomas Schneider to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
86a1951a56 Add Bruno Fernandes to contributors 2024-06-08 17:44:11 +01:00
Charles Hamilton
b778ec0142 windows: make rclone work with SeBackupPrivilege and/or SeRestorePrivilege
On Windows, this change includes the `FILE_FLAG_BACKUP_SEMANTICS` in
all calls to `CreateFile`.

Adding this flag is useful when rclone is running within a
security context that has `SeBackupPrivilege` and/or `SeRestorePrivilege`
token privileges enabled.

Without this flag, rclone cannot properly leverage special security
groups such as Backup Operators who possess these privileges.

See: https://forum.rclone.org/t/rclone-sebackupprivilege-file-flag-backup-semantics/45339
See: https://github.com/rclone/rclone/pull/7877.
2024-06-07 13:26:30 +01:00
Dan McArdle
dac7f76b14 cmd/gitannex: Update command docs
Mentioned the possibility of skipping the symlink for new versions of
git-annex. (Probably deserves a test once the new git-annex trickles
down to CI platforms.)

I stopped trying to explain each config parameter here. Rather, the doc
now shows the user how to ask git-annex to describe config parameters
with `--whatelse`.
2024-06-06 17:42:27 +01:00
Dan McArdle
446d6b28b8 cmd/gitannex: Support synonyms of config values 2024-06-06 17:42:27 +01:00
Thomas Schneider
7e04ff9528 S3: Ceph Backend use already exist changed to true (now tested) - fixes #7871 2024-06-06 11:27:07 +01:00
Bruno Fernandes
4568feb5f9 s3: Add Magalu S3 Object Storage as provider 2024-06-06 11:25:45 +01:00
Nick Craig-Wood
b9a2d3b6b9 config: fix default value for description 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
775e567a7b b2: update URLs to new home 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
59fc7ac193 Add yumeiyin to contributors 2024-06-06 09:25:17 +01:00
albertony
fea61cac9e serve dlna: make BrowseMetadata more compliant - fixes #7883 2024-06-02 14:07:45 +02:00
albertony
3f3e4b055e Fix new lint issues reported by golangci-lint v1.59.0
Error return value of `fmt.Fprintf` is not checked (errcheck)
2024-05-31 09:48:32 +02:00
yumeiyin
2257c03391 docs: fix some comments 2024-05-24 21:39:40 +02:00
Nick Craig-Wood
8f1c309c81 build: update all dependencies 2024-05-22 15:50:31 +01:00
Nick Craig-Wood
8e2f596fd0 drive: debug when we are ignoring permissions #7853 2024-05-21 15:32:26 +01:00
Nick Craig-Wood
de742ffc67 Add Dominik Joe Pantůček to contributors 2024-05-21 15:32:26 +01:00
Dominik Joe Pantůček
181ed55662 docs: crypt: fix incorrect terminology
This fixes the misuse of the key-derivation term (salt) used in place
of symmetric cipher nonce (IV) in the crypt remote documentation.
2024-05-20 23:21:21 +01:00
Nick Craig-Wood
a5700a4a53 operations: rework rcat so that it doesn't call the --metadata-mapper twice
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk,

This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream

This also meant that these were being logged as two transfers which
was a little strange.

This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.

This should have no effect on reliability of the transfers as we retry
Put if possible.

This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.

See #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
faa58315c5 operations: ensure SrcFsType is set correctly when using --metadata-mapper
Before this change on files which have unknown length (like Google
Documents) the SrcFsType would be set to "memoryFs".

This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.

Fixes #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
7b89735ae7 onedrive: allow setting permissions to fail if failok flag is set
For example using

    --onedrive-metadata-permissions read,write,failok

Will allow permissions to be read and written but if the writing
fails, then only an ERROR will be written in the log and the transfer
won't fail.
2024-05-17 11:03:46 +01:00
Nick Craig-Wood
91192c2c5e Add Evan McBeth to contributors 2024-05-17 11:03:46 +01:00
Evan McBeth
96e39ea486 docs: improve readability in faq 2024-05-16 15:35:42 +02:00
Nick Craig-Wood
488ed28635 fs: fix panic when using --metadata-mapper on large google doc files
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used
object.NewStaticObjectInfo with a `nil` Fs to upload the file, which
caused the crash in the metadata mapper code.

This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.

Fixes #7845
2024-05-16 10:05:08 +01:00
Nick Craig-Wood
b059c96322 Add JT Olio to contributors 2024-05-16 10:05:08 +01:00
Nick Craig-Wood
6d22168a8c Add overallteach to contributors 2024-05-16 10:05:08 +01:00
JT Olio
e34e2df600 go.mod: update storj.io/uplink to latest release
significant performance and stability improvements
2024-05-16 08:43:57 +01:00
overallteach
6607102034 chore: fix function name in comment
Signed-off-by: overallteach <cricis@foxmail.com>
2024-05-15 19:30:17 +01:00
Nick Craig-Wood
c6c327e4e7 build: update issue label notification machinery 2024-05-15 15:07:45 +01:00
Nick Craig-Wood
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.

This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.

This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.

It also adds a test to check metadata is preserved when doing
multipart transfers.

Fixes #7424
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
629e895da8 local: implement Object.SetMetadata 2024-05-14 12:51:03 +01:00
Nick Craig-Wood
cc634213a5 fs: define the optional interface SetMetadata and implement it in wrapping backends
This also implements backend integration tests for the feature
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
e9e9feb21e drive: allow setting metadata to fail if failok flag is set
For example using

    --drive-metadata-permissions read,write,failok

Will allow metadata to be read and written but if the writing fails,
then only an ERROR will be written in the log and the transfer won't
fail.
2024-05-13 19:44:03 +01:00
Dan McArdle
f26fc8f07c cmd/gitannex: When tags do not match, run e2e tests anyway
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96703bb31e build: Inject rclone version tag when testing
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:

```
=== RUN   TestEndToEnd
    e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96d3adc771 cmd/gitannex: Remove assumption in e2e test version check 2024-05-13 18:44:31 +01:00
Dan McArdle
f82822baca .github/workflows: Install git-annex-remote-rclone on Linux and macOS
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
af33a4f822 cmd/gitannex: Add TestEndToEndMigration tests
For each layout mode, these tests start with a git-annex-remote-rclone
remote, migrate it to a git-annex-remote-rclone-builtin remote. They
verify that a file copied pre-migration is still present and that `git
annex testremote` passes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
a675cc6677 cmd/gitannex: Describe new rclonelayout config in help
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
ad605ee356 cmd/gitannex: Drop chdir from e2e tests
Now that e2e tests are running in parallel, undoing the chdir to the
temp dir was causing flaky failures on cleanup. We don't need it anyway
because the worrisome subcommands have their working directory
controlled by `runInRepo()`.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
4ab235c06c cmd/gitannex: Repeat TestEndToEnd for all layout modes
I'm hopeful that running these in parallel will not impact CI runtime
very much, but that likely depends on the number of CPU cores and
whether the tmp filesystem is backed by memory vs a physical disk.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
9a2b85d71c cmd/gitannex: Refactor e2e tests, add layout compat tests
TestEndToEndRepoLayoutCompat exercises git-annex-remote-rclone-builtin
and git-annex-remote-rclone on the same rclone remote to ensure they are
compatible. It repeats the same test for all known layout modes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
29b58dd4c5 cmd/gitannex: Add support for different layouts
This commit adds support for the same repo layouts supported by
git-annex-remote-rclone. This should enable git-annex users with remotes
of type "rclone" to switch to a "rclone-builtin" without needing to
retransfer content.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
36ad4eb145 cmd/gitannex: Simplify messageParser's finalParameter() func
Issue #7625
2024-05-13 18:44:31 +01:00
nielash
61ab519791 chunker: fix finalizer already set error
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
2024-05-13 18:33:54 +01:00
nielash
678941afc1 mailru: use --tpslimit 10 on bisync tests
see https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
2024-05-13 18:33:11 +01:00
nielash
b153254b3a bisync: ignore "Implicitly create directory" messages on tests 2024-05-13 18:32:55 +01:00
nielash
17cd7a9496 quatrix: fix f.String() not including subpath 2024-05-13 18:32:41 +01:00
Nick Craig-Wood
0735f44f91 operations: fix lsjson --encrypted when using --crypt-XXX parameters
Before this change an `rclone lsjson --encrypted` command where
additional `--crypt-` parameters were supplied on the command line:

    rclone lsjson --crypt-description XXX --encrypted secret:

Produced an error like this:

    Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...

This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.

Fixes #7833
2024-05-13 17:59:58 +01:00
Nick Craig-Wood
04c69959b8 Add Sunny to contributors 2024-05-13 17:59:58 +01:00
Nick Craig-Wood
25cc8c927a Add Michael Terry to contributors 2024-05-13 17:59:58 +01:00
Sunny
6356b51b33 serve http: added content-length header when html directory is served 2024-05-13 17:24:54 +01:00
albertony
1890608f55 docs: minor formatting improvement 2024-05-13 12:50:22 +02:00
Michael Terry
cd76fd9219 oauthutil: clear client secret if client ID is set
When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.

But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.

So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
2024-05-11 16:03:32 +01:00
Nick Craig-Wood
5b8cdaff39 drive: fix description being overwritten on server side moves
Before this the description for files got overwritten on server side
moves.

This change stops rclone setting the description to the leaf name
completely.

See: https://forum.rclone.org/t/file-descriptions-in-google-drive-not-shown-after-dedupe/45510
Fixes: #7770
2024-05-11 15:59:40 +01:00
albertony
f2f559230c bump golangci/golangci-lint-action from 4 to 6
Version 5 removed go cache management, and therefore also the options
skip-pkg-cache and skip-build-cache, because the cache related to go itself
is already handled by actions/setup-go; the action now only caches
golangci-lint analysis. Since we run multiple golangci-lint-action steps for
different values of GOOS, we want to cache the package and build caches and
the golangci-lint results from all of them. This commit therefore changes the
approach by disabling all built-in caching and introducing a separate cache
step to handle it properly.
2024-05-11 14:43:29 +02:00
nielash
e0b38cc9ac onedrive: add support for group permissions
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
2024-05-10 16:25:08 +01:00
nielash
68dc79eddd onedrive: fix references to deprecated permissions properties
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
2024-05-10 16:25:08 +01:00
nielash
76cea0c704 onedrive: skip writing permissions with 'owner' role
The 'owner' role is an implicit role that can't be removed, so don't try to.
2024-05-10 16:25:08 +01:00
Nick Craig-Wood
41d5d8b88a build: add issue label notification machinery 2024-05-10 16:15:28 +01:00
Nick Craig-Wood
aa2746d0de union: fix deleting dirs when all remotes can't have empty dirs 2024-05-10 11:15:12 +01:00
wiserain
b2f6aac754 pikpak: improve getFile() usage
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. For users with a download limit, this could
cause subsequent operations like uploads and moves to fail.
This PR optimizes the use of getFile() by only calling it
when strictly necessary.
2024-05-08 09:09:56 +09:00
Eric Wolf
a0dacf4930 docs: exit code 9 requires --error-on-no-transfer
Updated exit code 9 definition to include that it requires the use of the "--error-on-no-transfer" flag with a link to that section.
2024-05-07 09:18:05 +02:00
IoT Maestro
c5ff5afc21 ulozto: Fix handling of root paths with leading / trailing slashes.
This fixes #7796
2024-05-04 20:04:30 +01:00
Nick Craig-Wood
bd8523f208 fstest: reduce precision of directory time checks on CI
For unknown reasons the precision of modification times of directories
on the CI is > 15ms compared to files which are 100ns. The tests
work fine when run in VirtualBox though, so I conjecture this is
something to do with the file system used there.
2024-05-03 12:29:18 +01:00
nielash
0bfd70c405 sync: remove now superfluous copyEmptyDirectories function 2024-05-03 12:29:18 +01:00
Nick Craig-Wood
47735d8fe1 sync: fix failed to update directory timestamp or metadata: directory not found
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/14
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
617534112b sync: fix directory modification times not being set
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
271ec43189 sync: don't need to sync directories if they haven't been modified
Before this change we synced directories regardless if the source
directory existed. It is irrelevant whether the source directory
exists or not, what we need to know is has the directory been
modified.

Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
10eb4742dd sync: fix creation of empty directories when --create-empty-src-dirs=false
In v1.66.0 the changes to enable metadata preservation on directories
introduced a regression, namely that empty directories were created
despite the state of the --create-empty-src-dirs flag.

This patch fixes the problem by letting the normal rclone directory
creation create the directories and fixing up their timestamps and
metadata afterwards if --create-empty-src-dirs=false.

Fixes #7689
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/
See: https://forum.rclone.org/t/how-to-ignore-empty-directories-when-uploading-from-windows/45057/
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
2a2ec06ec1 sync: fix management of empty directories to make it more accurate
Before this change we used the same datastructure for managing empty
directories for both --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.

These two uses are subtly incompatible and this change uses a separate
datastructure for both uses. This makes it more accurate and easier to
understand.
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
7237b142fa drive: be more explicit in debug when setting permissions fail 2024-05-02 18:10:16 +01:00
Nick Craig-Wood
254e514330 onedrive,drive: make errors setting permissions into no retry errors 2024-04-30 09:34:33 +01:00
Nick Craig-Wood
9fa610088f docs: add Backblaze as a sponsor 2024-04-29 12:43:13 +01:00
Nick Craig-Wood
d2fa45acf3 storj: update bio on request 2024-04-29 11:49:13 +01:00
albertony
a86eb7ad50 docs: note that newer linux kernel version is required for ARMv5 2024-04-27 22:21:40 +02:00
Nick Craig-Wood
1fef8e667c build: migrate bucket storage for the project to new provider
This changes

- beta.rclone.org
- www.rclone.org
- pub.rclone.org
- downloads.rclone.org
2024-04-25 17:04:18 +01:00
Nick Craig-Wood
a5daef3892 Add hidewrong to contributors 2024-04-25 17:04:18 +01:00
Nick Craig-Wood
5bf70c68f1 swift: implement --swift-use-segments-container to allow >5G files on Blomp
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and a directory in the root
(`.file-segments`).

By default the `.file-segments` mode will be auto selected if
`auth_url`s that require it are detected.

If the `.file-segments` mode is in use then rclone will omit that
directory from listings.

See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
2024-04-25 11:14:14 +01:00
Nick Craig-Wood
8a18c29835 random: update Password docs 2024-04-25 11:14:14 +01:00
albertony
29ed17d19c build: add linting for different values of GOOS 2024-04-22 19:29:12 +02:00
albertony
7ee22fcdf9 build: fix linting issues reported by running golangci-lint with different GOOS 2024-04-22 19:29:12 +02:00
albertony
159e274921 build: fix linting issues reported by golangci-lint on windows 2024-04-22 19:29:12 +02:00
albertony
fdc56b21c1 log: fix lint issue SA1019: syscall.Syscall has been deprecated since Go 1.18: Use SyscallN instead. 2024-04-22 19:29:12 +02:00
albertony
1ca825b6f0 build: run go mod tidy 2024-04-22 19:29:12 +02:00
Kyle Reynolds
d36bc8833c backend http: add no-escape flag to avoid escaping URL metacharacters in path names - fixes issue #7637 2024-04-22 17:57:09 +01:00
nielash
8977655869 bisync: avoid starting tests we don't have time to finish
To prevent all-or-nothing retries for tests that take longer (in total) than the
-timeout but less than -timeout * -maxtries

https://github.com/rclone/rclone/pull/7743#issuecomment-2057250848
2024-04-19 22:29:41 +01:00
nielash
58e09e1cd4 bisync: skip test if config string contains a space 2024-04-19 22:29:41 +01:00
Kyle Reynolds
64734dfe41 fs accounting: Add deleted files total size to status summary line - fixes issue #7190 2024-04-18 22:09:23 +01:00
albertony
68bf6aa584 build: remove build constraint syntax for go 1.16 and older 2024-04-18 16:53:55 +02:00
albertony
db17aaf7cd build: remove separate go module cache step as its done by setup-go 2024-04-18 12:43:26 +02:00
albertony
9531cd2c46 Convert source files with crlf to lf 2024-04-18 11:32:45 +02:00
hidewrong
c09426bcfe fix spelling 2024-04-17 18:02:44 +02:00
nielash
30517698aa bisync: make session path even shorter on tests
The .lck file filename length needs to be less than 255 bytes (not characters) on
Linux, and it was still too long in this test because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
on remotes with long names, such as TestChunkerChunk3bNoRenameLocal.
2024-04-16 14:45:54 -04:00
Nick Craig-Wood
8dc4c01209 build: make integration tests run better on macOS and Windows
This changes as many of the integration tests as possible so that they
use port forwarding rather than the docker IP directly.

Using the docker IP directly does not work on macOS and Windows as the
docker images are running in a VM rather than a container.

This adds the PORTS.md document to document which port numbers we are
using for which service as they need to be unique.
2024-04-16 10:48:48 +01:00
Nick Craig-Wood
807a7dabaa docs: fix heading anchor 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
416324c047 Add pawsey-kbuckley to contributors 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
524137f78a Add Katia Esposito to contributors 2024-04-16 10:48:48 +01:00
Evan Harris
f4c033a6a6 lsjson: small docs change to clarify options 2024-04-15 17:11:35 +01:00
pawsey-kbuckley
d459fb0cb8 genautocomplete: remove Ubuntu-ism from docs and clarify non-root use 2024-04-15 17:00:43 +01:00
Dave Nicolson
205745313d docs: fix macOS install from source link 2024-04-15 16:33:41 +01:00
Katia Esposito
79c00879ff ncdu: Do not quit on Esc 2024-04-15 16:18:27 +01:00
Nick Craig-Wood
2cff5514aa fix: test_all re-running too much stuff
This reworks the code which works out which tests need re-running,
making it more accurate.
2024-04-15 16:08:27 +01:00
Nick Craig-Wood
88322f3eb2 Add Dave Nicolson to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
036690c060 Add Butanediol to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
805584a8dd Add yudrywet to contributors 2024-04-15 16:08:27 +01:00
Dave Nicolson
cc3ae931db docs: Add left and right padding to prevent icon truncation 2024-04-14 17:51:10 +01:00
Butanediol
0c0d64c316 serve s3: fix Last-Modified header format 2024-04-14 17:49:51 +01:00
yudrywet
50aa677934 chore: fix function names in comment
Signed-off-by: yudrywet <yudeyao@yeah.net>
2024-04-14 14:38:01 +01:00
nielash
51582e36e8 onedrive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-13 19:57:30 +01:00
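The try-everything-then-summarise pattern described above can be sketched in a few lines. A minimal sketch, assuming a hypothetical permission type and setter rather than rclone's errcount-based implementation (errors.Join needs Go 1.20+):

```go
package main

import (
	"errors"
	"fmt"
)

// permission is a hypothetical stand-in for a OneDrive permission blob.
type permission struct{ ID, Role string }

// setAllPermissions tries every permission and returns a summary error
// at the end, instead of stopping at the first failure.
func setAllPermissions(perms []permission, set func(permission) error) error {
	var errs []error
	for _, p := range perms {
		if err := set(p); err != nil {
			errs = append(errs, fmt.Errorf("set permission %q: %w", p.ID, err))
		}
	}
	return errors.Join(errs...) // nil if every permission succeeded
}

func main() {
	perms := []permission{{ID: "1", Role: "read"}, {ID: "2", Role: "write"}}
	err := setAllPermissions(perms, func(p permission) error {
		if p.Role == "write" {
			return errors.New("forbidden")
		}
		return nil
	})
	fmt.Println(err) // set permission "2": forbidden
}
```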
Kyle Reynolds
47cbddbd27 fs rc: fixes incorrect Content-Type in HTTP API - fixes #7726 2024-04-13 19:56:34 +01:00
nielash
5323a21898 operations: fix move when dst is nil and fdst is case-insensitive
Before this change, the MoveCaseInsensitive logic in operations.move made the
assumption that dst != nil && remote != "". After this change, it should work
correctly when either one is present without the other.
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
f2e693f722 sync: fix case normalisation on s3
Before this change when the sync routine attempted to normalise a
case, say from "FiLe.txt" to "file.txt" this caused a 400 Bad Request
error:

> This copy request is illegal because it is trying to copy an object
> to itself without changing the object's metadata, storage class,
> website redirect location or encryption attributes.

This was caused by passing the same object as the source and
destination to the move routine, whereas the destination object had a
different case and didn't exist, so should have been passed as nil.

See: https://github.com/rclone/rclone/pull/7743#discussion_r1557345906
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
93955b755f operations: fix retries downloading too much data with certain backends
Before this fix if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption then
rclone would read too much data and the transfer would fail.

Backends affected:

- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho

This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.

This was caused by fs.FixRangeOption modifying the options and the
reopen code relying on them not being modified.

This fix makes a copy of the fs.FixRangeOption in the reopen code to
fix the problem.

In future it might be best to change fs.FixRangeOption so it returns a
new options slice.

Fixes #7759
2024-04-13 19:25:15 +01:00
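A minimal sketch of the aliasing bug and the copy-based fix, assuming a simplified option type rather than rclone's actual fs.OpenOption API:

```go
package main

import "fmt"

// rangeOption is a simplified stand-in for an HTTP Range open option.
type rangeOption struct{ Start, End int64 }

// fixRangeInPlace mutates the caller's slice; if the slice is reused
// for a retry, the second attempt starts from already-shifted offsets.
func fixRangeInPlace(opts []rangeOption, offset int64) {
	for i := range opts {
		opts[i].Start += offset
	}
}

// fixRangeCopy works on a copy, leaving the caller's options intact,
// which is the shape of the fix in the reopen code.
func fixRangeCopy(opts []rangeOption, offset int64) []rangeOption {
	fixed := make([]rangeOption, len(opts))
	copy(fixed, opts)
	for i := range fixed {
		fixed[i].Start += offset
	}
	return fixed
}

func main() {
	opts := []rangeOption{{Start: 0, End: 999}}
	fixRangeInPlace(opts, 100)
	fmt.Println(opts[0].Start) // 100: wrong base for the next retry
	opts = []rangeOption{{Start: 0, End: 999}}
	fixRangeCopy(opts, 100)
	fmt.Println(opts[0].Start) // 0: caller's options untouched
}
```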
Nick Craig-Wood
a4fc5edc5e operations: add more assertions to ReOpen tests to check seek positions 2024-04-13 19:25:15 +01:00
Nick Craig-Wood
8b73dcb95d Add static-moonlight to contributors 2024-04-13 19:25:15 +01:00
static-moonlight
3ba57cabce doc: add example of how to run serve s3 2024-04-13 19:22:26 +01:00
Nick Craig-Wood
c87097109b serve s3: adjust to move of Mikubill/gofakes3 to rclone/gofakes3
This also updates the interface, which has gained a ctx parameter in
the meantime.
2024-04-13 18:25:41 +01:00
Nick Craig-Wood
ae76498a38 Add guangwu to contributors 2024-04-13 18:25:41 +01:00
Nick Craig-Wood
10f730c49f Add jakzoe to contributors 2024-04-13 18:25:41 +01:00
albertony
88d96d133b Add go mod and sum to gitattributes for consistent line endings 2024-04-13 11:16:42 +02:00
nielash
2c7680050b bisync: rename extended_char_paths test
The .lck file filename length needs to be less than 255 bytes (not characters) on
Linux, and it was still too long in this test because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
2024-04-11 16:27:20 +01:00
nielash
fe6c9aa4da chunker: fix case-insensitive comparison on local without metadata
Before this change, chunker would erroneously consider two different paths to be
equal if, due to special characters, they normalized to equal-folding strings in
Standard Encoding, but not otherwise. This caused base objects to get moved when
they should not have been. This change fixes the issue, which was discovered on
the bisync integration tests.

Ideally it should also be fixed when the base Fs is non-local, but there's not an
easy way at the moment to reference the wrapped Fs's encoding, at least without
breaking encapsulation.
2024-04-11 16:27:20 +01:00
nielash
8524afa9ce chunker: fix NewFs when root points to composite multi-chunk file without metadata
Before this change, calling NewFs on a composite multi-chunk file with
--chunker-meta-format "none"
would fail due to f.base pointing to the wrong Fs. This change fixes the issue,
which was discovered on the bisync integration tests.
2024-04-11 16:27:20 +01:00
nielash
21f3ba13f6 bisync: more fixes for integration tests
- use fs.ConfigStringFull instead of bilib.StripHexString to properly reverse
connection string remotes
2024-04-11 16:27:20 +01:00
nielash
04128f97ee bisync: fix endless loop if lockfile decoder errors
Before this change, the decoder looked only for `io.EOF`, and if any other error
was returned, it could cause an infinite loop. This change fixes the issue by
breaking for any non-nil error.
2024-04-10 16:33:05 +01:00
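The fixed loop shape is the standard Go decode loop. A minimal sketch, assuming a JSON stream rather than bisync's actual lockfile format:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

type entry struct {
	Name string `json:"name"`
}

// readAll decodes entries until the stream ends. Breaking only on
// io.EOF (as before the fix) loops forever on any other decode error;
// breaking on any non-nil error terminates correctly.
func readAll(r io.Reader) ([]entry, error) {
	dec := json.NewDecoder(r)
	var out []entry
	for {
		var e entry
		err := dec.Decode(&e)
		if err == io.EOF {
			return out, nil // clean end of stream
		}
		if err != nil {
			return out, err // any other error must also stop the loop
		}
		out = append(out, e)
	}
}

func main() {
	entries, err := readAll(strings.NewReader(`{"name":"a"} {"name":"b"}`))
	fmt.Println(entries, err)
}
```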
nielash
bef9fd0bc3 bisync: make tempDir path shorter
to avoid exceeding Linux filename length limits
2024-04-10 16:33:05 +01:00
guangwu
2ab2ec29f9 fix: close cpu profile
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-04-09 11:23:55 +01:00
jakzoe
8817ee25ae docs: fix typo in filtering.md
Fix typo: moved misplaced double quotation mark.
2024-04-09 11:16:59 +01:00
Nick Craig-Wood
46b3854330 drive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-08 17:23:22 +01:00
Nick Craig-Wood
efbaca3a95 crypt: fix max suggested length of filenames 2024-04-08 17:23:22 +01:00
nielash
f995ece64d bisync: fix io.PipeWriter not getting closed on tests 2024-04-07 21:55:26 -04:00
jumbi77
68c2ba74dd pikpak: fix a typo in a comment
The last remaining fix from PR #6970
2024-04-06 11:27:24 +01:00
albertony
e739ee2c27 docs: ensure empty line between text and a following heading 2024-04-05 21:39:44 +02:00
Dan McArdle
05e5712bc4 .github/workflows: Upgrade deprecated macos-11 to macos-latest
See https://github.com/actions/runner-images
2024-04-05 18:01:39 +01:00
Dan McArdle
a2e38e9883 cmd/gitannex: Downgrade to protocol version 1
This enables compatibility with versions of git-annex currently
available on GitHub's "ubuntu-latest" image, aka Ubuntu 22.04 Jammy.
Currently, Jammy is shipping git-annex 8.20210223-2ubuntu2.
https://packages.ubuntu.com/jammy/git-annex

Issue #7625
2024-04-05 18:01:39 +01:00
Dan McArdle
ef42c32cc6 cmd/gitannex: Replace e2e test script with Go test
This commit implements milestone 2.1 for the gitannex subcommand:
https://github.com/rclone/rclone/issues/7625#issuecomment-1951403856

This rewrite makes a few improvements over the old shell script:

(1) It no longer uses the system's rclone.conf. Now, it writes the
    rclone.conf file in an ephemeral directory.

(2) It no longer makes any assumptions about the contents of /tmp.

However, it now assumes that an rclone built from the HEAD commit is on
the PATH. It makes a best-effort attempt to verify this assumption, but
I'm not sure it's bulletproof.

I'm hoping that writing this in Go will enable more cross-platform
support in the future, but for now we're still restricted to Unixy
systems due to reliance on the HOME environment variable.

Issue #7625
2024-04-05 18:01:39 +01:00
Nick Craig-Wood
6a5c0065ef docs: clarify option syntax
See: https://forum.rclone.org/t/seeming-documentation-problem-rclones-syntax-a-problem-with-the-categories-on-this-forum/45395/
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
6da27db844 build: fix CVE-2023-45288 by upgrading golang.org/x/net
See: https://pkg.go.dev/vuln/GO-2024-2687
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
c0497d46d5 ulozto: remove use of github.com/pkg/errors 2024-04-05 15:59:32 +01:00
Nick Craig-Wood
df3df06d2e Add Pieter van Oostrum to contributors 2024-04-05 15:59:32 +01:00
Pieter van Oostrum
1e3ab7acfd docs: fix MANUAL formatting problems
1) Missing closing code backticks (```) in s3.md causes formatting problems
2) Pandoc requires blank lines before ATX headings
2024-04-05 10:41:47 +02:00
Kyle Reynolds
339bc1d1a3 backend koofr: remove trailing bracket - fixes #7600 2024-04-04 20:03:26 +01:00
nielash
71069ed5c1 webdav: fix SetModTime erasing checksums on owncloud and nextcloud
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
2024-04-03 16:43:11 -04:00
nielash
75df38f6ee bisync: use fstest.RandomRemote on tests
- use fstest.RandomRemote to create the root level directory
- fix parsing of canonical name for connection string remotes
2024-04-03 16:43:11 -04:00
nielash
ce4064aabf hdfs: fix f.String() not including subpath 2024-04-03 16:43:11 -04:00
Nick Craig-Wood
7c9f1b8917 local: disable unreliable test
In this commit we merged an unreliable test

e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst

It is a good idea but very hard to implement in a way that always works.

Hence this disables it for the moment.
2024-04-02 18:48:34 +01:00
Nick Craig-Wood
e71c95a554 docs: update warp sponsorship 2024-04-02 16:32:24 +01:00
nielash
e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
c2d96113ac Add Erisa A to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
5ff961d2ea Add yoelvini to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
eedeaf7cbb Add Alexandre Lavigne to contributors 2024-04-02 15:34:58 +01:00
Kyle Reynolds
f6e716543a test info: improve cleanup of temp files - fixes #7209
Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-04-02 15:03:33 +01:00
nielash
998df26ceb onedrive: fix --metadata-mapper called twice if writing permissions
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
2024-04-02 14:57:43 +01:00
Pat Patterson
93c960df59 b2: Add tests for new cleanup and cleanup-hidden backend commands. 2024-04-02 12:36:43 +01:00
Nikita Shoshin
92368f6d2b rcserver: set ModTime for dirs and files served by --rc-serve 2024-04-02 12:10:45 +01:00
Erisa A
08bf5228a7 docs: Add R2 note about no_check_bucket 2024-04-02 12:08:56 +01:00
yoelvini
76f3eb3ed2 s3: add new AWS region il-central-1 Tel Aviv 2024-04-01 18:17:16 +01:00
nielash
0d43da7655 bisync: more fixes for integration tests
- fix parsing of connection string remotes (comma in name)
- skip remotes that can't upload empty files
- Mkdir the test case subdir before cache.Get-ing it
	(only storj seems to need this... bug?)
2024-03-31 17:56:28 -04:00
Alexandre Lavigne
f9429de807 s3: update Scaleway's configuration options - fixes #7507
In order to handle special characters, the rclone configuration must
set `list_url_encode`.
2024-03-31 17:42:20 +01:00
nielash
bce80be2f8 bisync: several fixes for integration tests
Several fixes for the bisync integration tests:

- use unique initdir and datadir for each subtest so concurrent tests don't interfere with each other
- remove dots from dir names for bucket backends
- ignore messages specific to cache backend
- skip fix-case tests on backends that can't fix-case
- don't expect "{hashtype} differ" messages on backends with no hash types
- print timestamps in UTC local

More fixes will still be needed, but this should hopefully fix a good portion of them.
2024-03-30 13:39:44 -04:00
Nick Craig-Wood
679f4fdfa9 ulozto: make password config item be obscured 2024-03-30 17:32:32 +00:00
Nick Craig-Wood
7c828ffe09 operations: fix very long file names when using copy with --partial
Before this change we were using the wrong variable to read the
filename length from. This meant that very long filenames were not
being truncated as intended.

This problem was spotted by Wang Zhiwei on the forum in a code review.

See: https://forum.rclone.org/t/why-use-c-remoteforcopy-instead-of-c-remote-to-check-length-in-copy-operation/45099
2024-03-30 09:06:58 +00:00
Nick Craig-Wood
1cf1f4fab2 Add Warrentheo to contributors 2024-03-30 09:06:58 +00:00
Nick Craig-Wood
853e802d8d Add Alex Garel to contributors 2024-03-30 09:06:30 +00:00
Warrentheo
3052d026ce onedrive: fix typo 2024-03-30 08:22:32 +00:00
albertony
9c9487365f config: show more user-friendly names of custom types in the UI 2024-03-29 21:20:19 +01:00
albertony
0f6c10ca02 config: add ending period on description option help text 2024-03-29 17:11:21 +01:00
Alex Garel
a075654f20 docs: add an indication in case of recursive shortcuts in drive
Help people handle an issue which might be difficult to understand
otherwise.

If you have recursive shortcuts (pointing to a parent folder) in a
Google Drive, rclone recurses infinitely, never finishing and filling
the disk, even if you ask it not to fetch shortcut contents.
2024-03-29 12:35:47 +01:00
IoT Maestro
571d20d126 ulozto: implement Mover and DirMover interfaces. 2024-03-29 09:09:12 +00:00
IoT Maestro
c9ce384ec7 ulozto: revert the temporary file size limitations 2024-03-29 09:07:44 +00:00
IoT Maestro
748c43d525 ulozto: set Content-Length header if the file size is known. 2024-03-29 09:07:44 +00:00
Nick Craig-Wood
7c20ec3772 local: fix and update -l docs
See: https://forum.rclone.org/t/rclone-l-cloning-symlink-file-problem/45286/
2024-03-28 11:56:22 +00:00
Nick Craig-Wood
42914bc0b0 serve webdav: fix webdav with --baseurl under Windows
Windows webdav does an OPTIONS request on the root even when given a
path and if we return 404 here then Windows refuses to use the path.

This patch allows OPTIONS requests only on the root to fix this.

This affects all the HTTP servers.
2024-03-28 10:06:04 +00:00
nielash
f62e7b5b30 memory: fix incorrect list entries when rooted at subdirectory
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.

This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
2024-03-27 11:43:26 -04:00
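The path arithmetic at issue can be sketched with a hypothetical helper (not the Memory backend's actual code): entries must be reported relative to the Fs root, not the storage root:

```go
package main

import (
	"fmt"
	"strings"
)

// entryRemote converts an absolute stored path into a remote relative
// to the Fs root. Hypothetical helper for illustration only.
func entryRemote(root, absPath string) string {
	rel := strings.TrimPrefix(absPath, root)
	return strings.TrimPrefix(rel, "/")
}

func main() {
	// Rooted at "a/b", the stored path "a/b/c/d/file.txt" must list as
	// "c/d/file.txt", not "a/b/c/d/file.txt".
	fmt.Println(entryRemote("a/b", "a/b/c/d/file.txt")) // c/d/file.txt
}
```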
nielash
2b0a25a64d memory: fix deadlock in operations.Purge
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:

	err = DeleteFiles(ctx, listToChan(ctx, f, dir))

which potentially starts removing objects before the listing has completed.

This change fixes the issue by batching all the entries before calling the
callback on them.
2024-03-27 11:42:49 -04:00
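The batching fix follows a classic rule: never hold a mutex while invoking a callback that may need that mutex. A minimal sketch under that assumption, with simplified types rather than the Memory backend's own:

```go
package main

import (
	"fmt"
	"sync"
)

type store struct {
	mu    sync.Mutex
	items map[string]int
}

// listR snapshots the entries under the lock, then releases it before
// invoking the callback, so the callback may itself lock s.mu
// (e.g. to delete entries) without deadlocking.
func (s *store) listR(callback func([]string) error) error {
	s.mu.Lock()
	batch := make([]string, 0, len(s.items))
	for name := range s.items {
		batch = append(batch, name)
	}
	s.mu.Unlock()
	return callback(batch) // lock is no longer held here
}

func main() {
	s := &store{items: map[string]int{"a": 1, "b": 2}}
	_ = s.listR(func(names []string) error {
		s.mu.Lock() // safe: listR released the lock before calling us
		defer s.mu.Unlock()
		for _, n := range names {
			delete(s.items, n)
		}
		return nil
	})
	fmt.Println(len(s.items)) // 0
}
```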
nielash
2bebbfaded bisync: add to integration tests - fixes #7665
This change officially adds bisync to the nightly integration tests for all
backends.

This will be part of giving us the confidence to take bisync out of beta.

A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.

Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):

- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize

Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:

- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
	server-side copy/move
2024-03-27 10:50:14 -04:00
nielash
fecce67ac6 memory: fix dst mutating src after server-side copy
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.

This change fixes the issue and adds a test.
2024-03-26 20:40:06 -04:00
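The underlying bug is plain pointer aliasing. A minimal sketch with a simplified object type (not the backend's actual code):

```go
package main

import (
	"fmt"
	"time"
)

type objectData struct {
	modTime time.Time
}

type object struct{ info *objectData }

func main() {
	src := object{info: &objectData{modTime: time.Unix(1000, 0)}}

	// Buggy copy: dst shares src's objectData by pointer...
	dst := object{info: src.info}
	src.info.modTime = time.Unix(2000, 0)
	fmt.Println(dst.info.modTime.Unix()) // 2000: dst mutated too

	// Fixed copy: clone the struct so src and dst are independent.
	src = object{info: &objectData{modTime: time.Unix(1000, 0)}}
	clone := *src.info
	dst = object{info: &clone}
	src.info.modTime = time.Unix(2000, 0)
	fmt.Println(dst.info.modTime.Unix()) // 1000: unaffected
}
```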
Nick Craig-Wood
a67688dcc7 mount,cmount,mount2: add --direct-io flag to force uncached access
This change adds the --direct-io flag to the mount. This means the
page cache is completely bypassed for reads and writes. No read-ahead
takes place. Shared mmap is disabled.

This is useful to accurately read files which may change length
frequently on the source.
2024-03-26 17:32:11 +00:00
Nick Craig-Wood
f3f743c3f9 vfs: fix download loop when file size shrunk
Before this change, if a file shrunk in size on the remote then rclone
could get into a loop trying to download the file forever.

The symptom was repeating errors like this:

    vfs cache: restart download failed: failed to start downloader: failed to open downloader: vfs reader: failed to open source file: invalid seek position

The fix is to check the file size in various places and make sure
that we aren't trying to download too much data.

This was a problem with backends (like s3) which update the size of
the object on Open to the actual size of the object.
2024-03-26 17:32:10 +00:00
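A sketch of the kind of size guard described, assuming a hypothetical read helper with a known object size (not the actual VFS downloader code):

```go
package main

import (
	"fmt"
	"io"
)

// readAt clamps a read to the currently known object size, so a file
// that shrank on the remote yields io.EOF instead of triggering an
// endless re-download attempt at an invalid seek position.
func readAt(p []byte, offset, size int64, read func([]byte, int64) (int, error)) (int, error) {
	if offset >= size {
		return 0, io.EOF // past the (new, smaller) end of the file
	}
	if remain := size - offset; int64(len(p)) > remain {
		p = p[:remain] // never ask for data beyond the known size
	}
	return read(p, offset)
}

func main() {
	fake := func(p []byte, off int64) (int, error) { return len(p), nil }
	n, err := readAt(make([]byte, 100), 950, 1000, fake)
	fmt.Println(n, err) // 50 <nil>
	n, err = readAt(make([]byte, 100), 1200, 1000, fake)
	fmt.Println(n, err) // 0 EOF
}
```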
Nick Craig-Wood
ac6ba11d22 local: add --local-time-type to use mtime/atime/btime/ctime as the time
Fixes #7484
2024-03-26 11:58:28 +00:00
Nick Craig-Wood
854a36c4ab Add psychopatt to contributors 2024-03-26 11:58:28 +00:00
psychopatt
522ab1de6d docs: remove email from authors 2024-03-26 11:45:22 +00:00
Nick Craig-Wood
215ae17272 rc: fix stats groups being ignored in operations/check
Before this change operations/check was using a background context for
the checking which was causing the stats group to be ignored.

This fixes the problem and also a similar problem in backend/command

See: https://forum.rclone.org/t/operations-check-only-reports-to-global-stats-not-per-job-group/45254
2024-03-26 11:23:40 +00:00
Nick Craig-Wood
efed6b01d2 drive: fix server side copy with metadata from my drive to shared drive
Before this change, trying to server side copy an object from a My
Drive to a shared drive using --metadata caused this error:

    Sharing restrictions cannot be set on a shared drive item., teamDrivesSharingRestrictionNotAllowed

This was because we were setting the "writers-can-share" metadata,
which isn't allowed on shared drives.
2024-03-26 11:16:22 +00:00
Nick Craig-Wood
d11fe9779e drive: stop sending notification emails when setting permissions 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
f167846fb9 Add iotmaestro to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
1f4b433ace Add Vitaly to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
4d09320b2b Add hoyho to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
af313d66d5 Add Lewis Hook to contributors 2024-03-26 11:11:18 +00:00
iotmaestro
4b5c10f72e Add a new backend for uloz.to
Note that this temporarily skips uploads of files over 2.5 GB.

See https://github.com/rclone/rclone/pull/7552#issuecomment-1956316492
for details.
2024-03-26 09:46:47 +00:00
Dan McArdle
dfc329c036 cmd/gitannex: Add the gitannex subcommand
This commit adds a new subcommand named "gitannex", aka
"git-annex-remote-rclone-builtin" when invoked via a symlink.

This accomplishes milestone 1 from issue #7625: "minimal support for the
external special remote protocol".

Issue #7625
2024-03-26 09:43:43 +00:00
gvitali
d9601c78b1 linkbox: fix list paging and optimize synchronization.
1. The maximum number of objects on a page should be no more than
1000. Currently it is 1024; for this reason the listing always ends on
the first page with the error “object not found”, rclone tries to
upload the file again, Linkbox stores it with the name “filename(N)”,
and so the storage fills up indefinitely.

2. A hyphen is added to the list of allowed characters, that makes
queries more optimized (no need to load all files in a directory for
an entity with a hyphen).
2024-03-24 12:05:58 +00:00
Vitaly
4258ad705e linkbox: fix working with names longer than 8-25 Unicode chars.
The LinkBox API does not allow searching by more than 25 Unicode
characters in the name; for this reason it is currently impossible to
work with files and folders whose names are longer than 8 Unicode
chars (if encoded in base32).

This fix queries all files in a directory for long names and checks
their names one by one, thus solving the issue.

Fixes #7542
2024-03-24 12:05:58 +00:00
Pat Patterson
070cff8a65 b2: Add new cleanup and cleanup-hidden backend commands. 2024-03-23 18:07:02 +00:00
hoyho
a24aeba495 s3: validate CopyCutoff size before copy
Signed-off-by: hoyho <luohaihao@gmail.com>
2024-03-23 15:09:38 +00:00
Lewis Hook
bf494d48d6 Improve error messages when objects have been corrupted on transfer - fixes #5268 2024-03-23 12:35:35 +00:00
Nick Craig-Wood
aee8d909b3 onedrive: fix "unauthenticated: Unauthenticated" errors when downloading
Before this change we would pass the Authorization header on to the
download server. This is allowed according to the docs, but on some
onedrive servers this sometimes causes an error with the text
"unauthenticated: Unauthenticated".

This is a similar fix to

dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading

See: https://forum.rclone.org/t/cryptcheck-on-encrypted-onedrive-personal-failed-with-unauthenticated-error/44581/
2024-03-23 12:08:35 +00:00
Nick Craig-Wood
48262849df lib/rest: Add Client.Do function to call http.Client.Do 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
09cc8179cc lib/rest: add CheckRedirect function for redirect management 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
ff855fe1fb operations: Fix "optional feature not implemented" error with a crypted sftp
Before this change operations.SetDirModTime could return the error
"optional feature not implemented" when attempting to set modification
times on crypted sftp backends.

This was because crypt wraps the directories using fs.DirWrapper but
these return fs.ErrorNotImplemented for the SetModTime method.

The fix is to recognise that error and fall back to using the
DirSetModTime method on the backend which does work.

Fixes #7673
2024-03-22 17:36:04 +00:00
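The fallback shape can be sketched generically; the names below are assumptions in rclone's style, not the exact code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotImplemented plays the role of fs.ErrorNotImplemented.
var errNotImplemented = errors.New("optional feature not implemented")

// setDirModTime tries the wrapped directory's own SetModTime-style
// method first and, if the wrapper reports it as unimplemented, falls
// back to a backend-level DirSetModTime-style call.
func setDirModTime(primary, fallback func(time.Time) error, t time.Time) error {
	err := primary(t)
	if errors.Is(err, errNotImplemented) && fallback != nil {
		return fallback(t)
	}
	return err
}

func main() {
	wrapped := func(time.Time) error { return errNotImplemented }
	backend := func(time.Time) error { return nil }
	fmt.Println(setDirModTime(wrapped, backend, time.Now())) // <nil>
}
```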
Nick Craig-Wood
5ee89bdcf8 Add Kyle Reynolds to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
f7bf28806c Add YukiUnHappy to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
df6c573c99 Add Gachoud Philippe to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
b7c06e5eb9 Add racerole to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
e0e9ac50d3 Add John-Paul Smith to contributors 2024-03-22 17:36:04 +00:00
YukiUnHappy
f68d962c86 onedrive: make server-side copy work in more scenarios 2024-03-22 17:29:38 +00:00
kapitainsky
6232cc123f docs: Proton Drive, correct typo
2024-03-22 16:36:21 +00:00
Gachoud Philippe
a33576af7d docs: drive: corrected relative path of scopes to absolute
and added some links to the reference
2024-03-22 12:26:34 +00:00
kapitainsky
2591703494 docs: clarify shell_type = none and ssh = behaviour
Discussed on the forum:

https://forum.rclone.org/t/can-rclone-be-made-to-work-with-an-sftp-server-confining-users-to-an-sftp-jail-and-no-login/44931
2024-03-21 15:08:56 +01:00
Kyle Reynolds
7803b4ed6c fs: improve JSON Unmarshalling for Duration
Enhanced the UnmarshalJSON method for the Duration type to correctly
handle the special string 'off' and ensure large integers are parsed
accurately without floating-point rounding errors. This resolves
issues with setting and removing the MinAge filter through the rclone
rc command.

Fixes #3783

Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-03-13 18:08:59 +00:00
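A minimal sketch of the approach, assuming a nanosecond-based Duration type rather than rclone's actual fs.Duration implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
	"time"
)

type Duration time.Duration

// DurationOff represents "no limit", mirroring the special 'off' value.
const DurationOff = Duration(math.MaxInt64)

// UnmarshalJSON accepts the string "off", plain integers (parsed as
// int64 so large values avoid float64 rounding), and duration strings.
func (d *Duration) UnmarshalJSON(data []byte) error {
	if string(data) == `"off"` {
		*d = DurationOff
		return nil
	}
	var i int64
	if err := json.Unmarshal(data, &i); err == nil {
		*d = Duration(i) // exact: never routed through float64
		return nil
	}
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return err
	}
	parsed, err := time.ParseDuration(s)
	if err != nil {
		return err
	}
	*d = Duration(parsed)
	return nil
}

func main() {
	var d Duration
	_ = json.Unmarshal([]byte(`9007199254740993`), &d) // > 2^53
	fmt.Println(int64(d))                              // exact, no rounding
	_ = json.Unmarshal([]byte(`"off"`), &d)
	fmt.Println(d == DurationOff) // true
}
```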
racerole
00fb847662 docs: remove repeated words 2024-03-13 17:12:39 +00:00
Thomas Müller
c7bfadd10a owncloud: add config owncloud_exclude_mounts which allows to exclude mounted folders when listing remote resources 2024-03-13 17:09:10 +00:00
John-Paul Smith
ca903b9872 drive: backend query command
This command executes a list query in Google Drive’s native query
language and returns a JSON dump of matches. It’s useful for locating
files quickly in folders with a large number of files, where rclone’s
normal list command is slow due to client-side filtering.
2024-03-11 20:16:13 +00:00
Nick Craig-Wood
b7783f75a4 Start v1.67.0-DEV development 2024-03-10 12:14:00 +00:00
Nick Craig-Wood
b6013a5e68 Version v1.66.0 2024-03-10 11:22:43 +00:00
Nick Craig-Wood
b7422a4fc8 docs: update metadata docs with Move and Copy support 2024-03-09 14:13:18 +00:00
nielash
9b650d3517 hasher: look for cached hash if passed hash unexpectedly blank
Before this change, Hasher did not check whether a "passed hash" (hashtype
natively supported by the wrapped backend) returned from a backend was blank,
and would sometimes return a blank hash to the caller even when a non-blank hash
was already stored in the db. This caused issues with, for example, Google
Drive, which has SHA1 / SHA256 hashes for some files but not others
(https://rclone.org/drive/#sha1-or-sha256-hashes-may-be-missing) and sometimes also
does not have hashes for very recently modified files.

After this change, Hasher will check if the received "passed hash" is
unexpectedly blank, and if so, it will continue to try other enabled methods,
such as retrieving a value from the database, or possibly regenerating it.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/9?u=nielash
2024-03-09 11:58:02 +00:00
nielash
ff0acfb568 hasher: fix error from trying to stop an already-stopped db
Before this change, Hasher would sometimes try to stop a bolt db that was
already stopped, resulting in an error. This change fixes the issue by checking
first whether the db is already stopped.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/11?u=nielash
2024-03-09 11:58:02 +00:00
Nick Craig-Wood
ac830ddd42 sync: don't sync directory modtimes from backends which don't have directories
Some backends (like s3, swift, gcs, azureblob) don't have directories
(this can be overridden on some using the directory markers feature).

It therefore makes no sense to sync directory times from them as they
will all be a value made up by rclone (--default-time)

We use the CanHaveEmptyDirectories feature flag to identify backends
without real directory support and disable the directory modification
time syncing on those.
2024-03-09 11:28:15 +00:00
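The resulting guard is simple. CanHaveEmptyDirectories is a real feature flag, but the helper below is a hypothetical sketch:

```go
package main

import "fmt"

// features mimics the shape of rclone's fs.Features flags.
type features struct{ CanHaveEmptyDirectories bool }

// shouldSyncDirModTimes is a hypothetical helper: backends without real
// directories report synthetic times (--default-time), so syncing those
// times is meaningless.
func shouldSyncDirModTimes(f features) bool {
	return f.CanHaveEmptyDirectories
}

func main() {
	fmt.Println(shouldSyncDirModTimes(features{CanHaveEmptyDirectories: false})) // false: e.g. s3, swift, gcs
	fmt.Println(shouldSyncDirModTimes(features{CanHaveEmptyDirectories: true}))  // true: e.g. local, sftp
}
```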
Nick Craig-Wood
f491efc85d sync: fix integration tests on chunker
The tests added in this commit needed a tweak for chunker

8c69455c37 sync: don't set dir modtimes if already set
2024-03-08 15:04:35 +00:00
Nick Craig-Wood
fcb182efce docs: add current sponsor logos in 2024-03-08 15:04:35 +00:00
nielash
1473de3f04 onedrive: add metadata support
This change adds support for metadata on OneDrive. Metadata (including
permissions) is supported for both files and directories.

OneDrive supports System Metadata (not User Metadata, as of this writing.) Much
of the metadata is read-only, and there are some differences between OneDrive
Personal and Business (see table in OneDrive backend docs for details).

Permissions are also supported, if --onedrive-metadata-permissions is set. The
accepted values for --onedrive-metadata-permissions are read, write, read,write, and
off (the default). write supports adding new permissions, updating the "role" of
existing permissions, and removing permissions. Updating and removing require
the Permission ID to be known, so it is recommended to use read,write instead of
write if you wish to update/remove permissions.

Permissions are read/written in JSON format using the same schema as the
OneDrive API, which differs slightly between OneDrive Personal and Business.
(See OneDrive backend docs for examples.)

To write permissions, pass in a "permissions" metadata key using this same
format. The --metadata-mapper tool can be very helpful for this.

When adding permissions, an email address can be provided in the User.ID or
DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an
ObjectID can be provided in User.ID. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if Link.Scope is set to "anonymous".

Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.

To update an existing permission, include both the Permission ID and the new
roles to be assigned. roles is the only property that can be changed.

To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)

Note that both reading and writing permissions requires extra API calls, so if
you don't need to read or write permissions it is recommended to omit --onedrive-
metadata-permissions.

Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the mtime or btime on a Folder requires one extra API
call on OneDrive Business only.

OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.

TIP: to see the metadata and permissions for any file or folder, run:

rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read

See the OneDrive backend docs for a table of all the supported metadata
properties.
2024-03-08 14:48:54 +00:00
Nick Craig-Wood
4e07a72dc7 fs: Implement --no-update-dir-modtime to disable setting modification times on dirs 2024-03-07 17:20:24 +00:00
Nick Craig-Wood
99acee7ba0 operations: remove stray debug 2024-03-07 17:15:43 +00:00
Nick Craig-Wood
bda4f25baa s3: support metadata setting and mapping on server side Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
9f2ce2c7fc drive: support metadata setting and mapping on server side Move,Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves or copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
6e85a39e99 local: support metadata setting and mapping on server side Move
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
24b4148b5e fs: add MetadataAsOpenOptions 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
41b1250eaf fstests: add tests for Metadata on server side Move and Copy 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
339d3e8ee6 netstorage,quatrix,seafile: fix Root to return correct directory when pointing to a file
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
5750795324 protondrive: fix encoding of Root method
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
cdcb8b2a0a Add huajin tong to contributors 2024-03-07 14:44:45 +00:00
huajin tong
b1ae7df556 docs: fix some comments
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-07 12:57:15 +00:00
nielash
431524445e combine: fix operations.DirMove across upstreams - fixes #7661
Before this change, operations.DirMove would fail when moving a directory, if
the src and dest were on different upstreams of a combine remote.

The issue only affected operations.DirMove, and not sync.MoveDir, because they
checked for server-side-move support in different ways.

MoveDir checks by just trying it and seeing what error comes back. This works
fine for combine because combine returns fs.ErrorCantDirMove which MoveDir
understands what to do with.

DirMove, however, only checked whether the function pointer is nil. This is an
unreliable way to check for combine, because combine does advertise support for
DirMove, despite not always being able to do it.

This change fixes the issue by checking the returned error in a manner similar
to sync.MoveDir and falling back to individual file moves (copy + delete)
depending on which error was returned.
2024-03-07 11:11:46 +00:00
nielash
252562d00a combine: fix CopyDirMetadata error on upstream root
Before this change, operations.CopyDirMetadata would fail with: `internal error:
expecting directory string from combine root '' to have SetMetadata method:
optional feature not implemented` if the dst was the root directory of a combine
upstream. This is because combine was returning a *fs.Dir, which does not
satisfy the fs.SetMetadataer interface.

While it is true that combine cannot set metadata on the root of an upstream
(see also #7652), this should not be considered an error that causes sync to do
high-level retries, abort without doing deletes, etc.

This change addresses the issue by creating a new type of DirWrapper that is
allowed to fail silently, for exceptional cases such as this where certain
special directories have more limited abilities than what the Fs usually
supports.

It is possible that other similar wrapping backends (Union?) may need this same
fix.
2024-03-07 11:09:07 +00:00
nielash
6a72cfd6e1 operations: fix typo in log messages
I assume this must be a typo, as %T of dir would only ever print "string".
2024-03-07 11:09:07 +00:00
nielash
354ea6fff3 docs: update to reflect dir modtime/metadata support 2024-03-07 11:09:07 +00:00
nielash
8c69455c37 sync: don't set dir modtimes if already set
Before this change, directory modtimes (and metadata) were always synced from
src to dst, even if already in sync (i.e. their modtimes already matched.) This
potentially required excessive API calls, made logs noisy, and was potentially
problematic for backends that create "versions" or otherwise log activity
updates when modtime/metadata is updated.

After this change, a new DirsEqual function is added to check whether dirs are
equal based on a number of factors such as ModifyWindow and sync flags in use.
If the dirs are equal, the modtime/metadata update is skipped.

For backends that require setDirModTimeAfter, the "after" sync is performed only
for dirs that could have been changed by the sync (i.e. dirs containing files
that were created/updated.)

Note that dir metadata (other than modtime) is not currently considered by
DirsEqual, consistent with how object metadata is synced (only when objects are
unequal for reasons other than metadata).

To sync dir modtimes and metadata unconditionally (the previous behavior), use
--ignore-times.
2024-03-07 09:57:11 +00:00
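One ingredient of a DirsEqual-style check is comparing modtimes within the modify window. A minimal sketch of that piece (names assumed):

```go
package main

import (
	"fmt"
	"time"
)

// equalWithinWindow reports whether two modtimes match to within the
// backend's modify window: the core of deciding a dir is "already in
// sync" so the update (and its API call) can be skipped.
func equalWithinWindow(a, b time.Time, window time.Duration) bool {
	d := a.Sub(b)
	if d < 0 {
		d = -d
	}
	return d <= window
}

func main() {
	t := time.Now()
	fmt.Println(equalWithinWindow(t, t.Add(500*time.Millisecond), time.Second)) // true
	fmt.Println(equalWithinWindow(t, t.Add(2*time.Second), time.Second))        // false
}
```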
nielash
fd8faeb0e6 vfs: fix unicode normalization on macOS - fixes #7072
Before this change, the VFS layer did not properly handle unicode normalization,
which caused problems particularly for users of macOS. While attempts were made
to handle it with various `-o modules=iconv` combinations, this was an imperfect
solution, as no one combination allowed both NFC and NFD content to
simultaneously be both visible and editable via Finder.

After this change, the VFS supports `--no-unicode-normalization` (default `false`)
via the existing `--vfs-case-insensitive` logic, which is extended to apply to both
case insensitivity and unicode normalization form.

This change also adds an additional flag, `--vfs-block-norm-dupes`, to address a
probably rare but possible scenario where a directory contains
multiple duplicate filenames after applying case and unicode normalization
settings. In such a scenario, this flag (disabled by default) hides the
duplicates. This comes with a performance tradeoff, as rclone will have to scan
the entire directory for duplicates when listing a directory. For this reason,
it is recommended to leave this disabled if not needed. However, macOS users may
wish to consider using it, as otherwise, if a remote directory contains both NFC
and NFD versions of the same filename, an odd situation will occur: both
versions of the file will be visible in the mount, and both will appear to be
editable, however, editing either version will actually result in only the NFD
version getting edited under the hood. `--vfs-block-norm-dupes` prevents this
confusion by detecting this scenario, hiding the duplicates, and logging an
error, similar to how this is handled in `rclone sync`.
2024-03-06 16:12:13 +00:00
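The core of comparing names across normalization forms (and case) can be sketched with golang.org/x/text/unicode/norm; whether this matches the VFS internals exactly is an assumption:

```go
package main

import (
	"fmt"
	"strings"

	"golang.org/x/text/unicode/norm"
)

// nameKey folds a filename to NFC (and lower case, mirroring
// --vfs-case-insensitive) so NFD and NFC spellings collide on one key,
// which is also how duplicate names can be detected for hiding.
func nameKey(name string) string {
	return strings.ToLower(norm.NFC.String(name))
}

func main() {
	nfc := "caf\u00e9"  // é as a single precomposed rune (NFC)
	nfd := "cafe\u0301" // e + combining acute accent (NFD)
	fmt.Println(nfc == nfd)                   // false: byte-wise different
	fmt.Println(nameKey(nfc) == nameKey(nfd)) // true: same key
}
```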
Kyle Reynolds
dcdbad3554 bisync: clarify file operation directions in dry-run logs - fixes #7029
Before this change, NOTICE log messages during bisync dry runs were unclear as
to the direction of the skipped operation (Path1 to 2 vs. 2 to 1.) This change
adjusts the cmd/bisync/log.go indent function to be more expressive about
direction.
2024-03-06 09:26:53 -05:00
Nick Craig-Wood
effad3fe4b build: fix CVE-2024-24786 by upgrading google.golang.org/protobuf
See: https://pkg.go.dev/vuln/GO-2024-2611
2024-03-06 12:42:38 +00:00
Nick Craig-Wood
692af42858 operations: fix TestSetDirModTime for backends with SetDirModTime but not Metadata 2024-03-01 11:39:21 +00:00
Nick Craig-Wood
1693d7ad0f sftp: set DirModTimeUpdatesOnWrite to fix integration tests 2024-03-01 11:29:08 +00:00
Nick Craig-Wood
3bb9394ae5 operations: fix TestMkdirModTime test
This was failing on backends that didn't support metadata but did
support setting directory modtimes.
2024-03-01 11:18:24 +00:00
Nick Craig-Wood
be39e99918 sync: fix TestMoveEmptyDirectories so they work on backends which don't support DirModTimes 2024-03-01 10:56:48 +00:00
Nick Craig-Wood
6e28edeb9a cache: fix crash in tests which assumed local could Purge 2024-02-29 17:55:36 +00:00
Nick Craig-Wood
d50572b108 operations: add operations/hashsum to the rc as rclone hashsum equivalent
Fixes #7569
2024-02-29 16:21:42 +00:00
Nick Craig-Wood
0b8689dc28 rc: Add GetFsNamedFileOK to get an fs which could also be a file 2024-02-29 16:21:42 +00:00
Nick Craig-Wood
5994fcfed8 fs/cache: add PutErr to add an fs.Fs with an fs.ErrorIsFile error to the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
e3f6f68885 lib/cache: add PutErr to put a value with an error into the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
6ff1b6c505 local: delete backend implementation of Purge to speed up and make stats
In this commit (2014 for v1.02) Purge was implemented for the local
backend:

1527e64ee7 local: Implement Purger interface

This appears to have been implemented just to provide a Purge method
and doesn't do anything useful.

It is in fact significantly worse than the rclone fallback purge since
it doesn't operate in parallel or update stats.

This patch removes the Purge routine for a consequent speed-up and
proper display of stats.

See: https://forum.rclone.org/t/progress-flag-for-rclone-purge/44416
2024-02-29 15:04:51 +00:00
Nick Craig-Wood
4a049c12fe copyurl: add troubleshooting section to the docs
See: https://forum.rclone.org/t/copyurl-fails-with-stream-error-wget-and-curl-works/44382/2
2024-02-29 14:58:12 +00:00
Nick Craig-Wood
15890b7ce7 cmd: make auto completion work for all shells and reduce the size
This updates the bash completion to work with GenBashCompletionV2
which cuts down the size of the completion file dramatically.

See: https://forum.rclone.org/t/request-make-remote-path-completion-work-for-fish-and-zsh/42982/
See: #7000
2024-02-29 14:46:50 +00:00
Nick Craig-Wood
186bb85c44 crypt: add missing error check spotted by linter 2024-02-29 14:46:50 +00:00
nielash
4c6d2c5410 crypt: improve handling of undecryptable file names - fixes #5787 fixes #6439 fixes #6437
Before this change, undecryptable file names would be skipped very quietly
(there was a log warning, but only at DEBUG level),
failing to alert users of a potentially serious issue that needs attention.

After this change, the log level is raised to NOTICE by default and a new
--crypt-strict-names flag allows raising an error, for users who may prefer not
to proceed if such an issue is detected.

See https://forum.rclone.org/t/skipping-undecryptable-file-name-should-be-an-error/27115
https://github.com/rclone/rclone/issues/5787
2024-02-29 12:11:02 +00:00
Nick Craig-Wood
f5f86786b2 sync: implement directory sync for mod times and metadata
Directory mod times are synced by default if the backend is capable
and directory metadata is synced if the --metadata flag is provided
and the backend is capable.

This updates the bisync golden tests also which were affected by
--dry-run setting of directory modtimes.

Fixes #6685
2024-02-28 16:26:14 +00:00
Nick Craig-Wood
15579c2195 fstests: factor out fstest.NewObject function 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e8fe0b0553 operations: Implement CopyDirMetadata, CopyDirModTime and SetDirModTime 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
09953d77b5 lsjson,lsf: make sure metadata appears for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e4d0055b3e drive: implement modtime and metadata setting for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
a60da2ef38 local: fix setting of btime on directories on Windows
Before this change this would give errors like this

    failed to set metadata on directory: failed to set birth (creation) time: Access is denied.

This was caused by opening the directory in the wrong mode.
2024-02-28 16:25:59 +00:00
Nick Craig-Wood
7b01564f83 local: implement modtime and metadata for directories
A consequence of this is that fs.Directory returned by the local
backend will now have a correct size (rather than -1). Some tests
depended on this and have been fixed by this commit too.
2024-02-28 16:09:04 +00:00
Nick Craig-Wood
39db8caff1 cache,chunker,combine,compress,crypt,hasher,union: implement MkdirMetadata and related Features 2024-02-28 16:09:04 +00:00
nielash
0297542f6b cache,chunker,combine,compress,crypt,hasher,union: implement DirSetModTime (if supported by wrapped remote) 2024-02-28 16:09:04 +00:00
nielash
17c0ecc72c sftp: implement DirSetModTime 2024-02-28 16:09:04 +00:00
nielash
cbcb295185 drive: implement DirSetModTime 2024-02-27 19:59:13 +00:00
nielash
67e3725205 local: implement DirSetModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
61d76ae47d fstests: add integration tests for Directory Metadata and ModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
fd1ca2dfe8 fs: allow Metadata calls to be called with Directory or Object
This involved adding the Fs() method to DirEntry as it is needed in
the metadata mapper.

Unspecialised fs.Dir objects will return a new fs.Unknown from their
Fs() methods as they are not specific to any given Fs.
2024-02-27 10:56:19 +00:00
Nick Craig-Wood
e1032f693f fs: add DirWrapper for wrapping Directory-s with optional methods 2024-02-27 10:56:19 +00:00
Nick Craig-Wood
a4cadd1128 fs: add Directory Metadata flags for backends and interfaces
Add backend flags
- ReadDirMetadata
- WriteDirMetadata
- WriteDirSetModTime
- UserDirMetadata
- DirModTimeUpdatesOnWrite

Add Metadata/SetMetadata for directories.

Add MkdirMetadata optional feature
2024-02-27 10:56:19 +00:00
nielash
6da52d76a7 fs: implement DirSetModTime optional feature 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
71a1bbb2be errcount: factor errcount abstraction from operations 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
8f0e9f9f6b mega: fix panic with go1.22
Before this fix rclone would crash with

    panic: encoding alphabet includes duplicate symbols

When compiled with go1.22. This was fixed upstream in

https://github.com/t3rm1n4l/go-mega/issues/48

And this just pulls in the fix.

Fixes #7639
2024-02-21 18:41:44 +00:00
Nick Craig-Wood
072d1f10ab serve webdav: fix --baseurl without leading /
The webdav server needs the prefix passed to it with a leading /
otherwise it does not remove it properly.

The docs state that a leading slash is optional so this patch adds one
if not present.

See: https://forum.rclone.org/t/cant-rename-files-in-rclone-serve-webdav-with-baseurl-maybe-wrong-handling-of-move-request-method/44637
2024-02-21 18:08:44 +00:00
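The normalisation itself is tiny. A sketch of the prefix fix-up described above (function name assumed):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePrefix ensures the baseurl prefix handed to the webdav
// library starts with "/", since the docs allow users to omit it.
func normalizePrefix(prefix string) string {
	if prefix != "" && !strings.HasPrefix(prefix, "/") {
		return "/" + prefix
	}
	return prefix
}

func main() {
	fmt.Println(normalizePrefix("rclone"))  // /rclone
	fmt.Println(normalizePrefix("/rclone")) // /rclone
	fmt.Println(normalizePrefix(""))        // (empty: no prefix)
}
```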
Nick Craig-Wood
5014348229 Add Anders Swanson to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
ed78ac7c92 Add Joe Cai to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
53d873d60d Add Dan McArdle to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
f2c35fdec6 Add Gabriel Ramos to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
1c69b20ed7 Add Jack Provance to contributors 2024-02-21 18:08:44 +00:00
nielash
547c635552 mailru: add override for TestApplyTransforms - #7591
mailru is unable to handle filenames with certain combining characters (for
example: йěáñ), and is therefore incapable of testing ApplyTransforms. (It is
also therefore incapable of fully supporting --no-unicode-normalization.)

The same override is applied to chunker when wrapping mailru.
2024-02-21 18:02:19 +00:00
nielash
f0d9117ff3 linkbox: add override for TestFixCase - #7591
linkbox already has an override for TestCaseInsensitiveMoveFile, and being able
to handle case-insensitive moves is a prerequisite for TestFixCase.
2024-02-21 18:02:19 +00:00
nielash
9d2bd163c7 opendrive: fix moving file/folder within the same parent dir - #7591
Before this change, moving (renaming) a file or folder to a different name
within the same parent directory would fail, due to using the wrong API
operation ("/file/move_copy.json" and "/folder/move_copy.json", instead of the
separate "/file/rename.json" and "/folder/rename.json" that opendrive has for
this purpose.)

After this change, Move and DirMove check whether the move is within the same
parent dir. If so, "rename" is used. If not, "move_copy" is used, like before.
2024-02-21 18:02:19 +00:00
Anders Swanson
db8fb5ceda oracleobjectstorage: supports workload identity authentication for OKE
Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
2024-02-20 16:25:59 +00:00
Joe Cai
a1e66cc5e8 swift: Avoid unnecessary container versioning check
The container versioning check is only needed for non-empty large objects.
2024-02-20 15:52:25 +00:00
nielash
7b8bbe531e nfsmount: fix --volname being ignored #7503
Before this change, nfsmount ignored the --volname flag. After this change,
the --volname flag is respected, making it possible to set a custom volume name.

macOS users should note that Finder will show the correct volume name in most
places, but a notable exception is the sidebar, which will show "localhost".
This seems to be a system limitation (at least without `sudo`), but see the
discussion at https://github.com/rclone/rclone/issues/7503#issuecomment-1933997678
for some possible workarounds.
2024-02-18 05:08:59 -05:00
nielash
0e2f1d64e3 nfsmount: fix exit after external unmount #7503
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and wait forever to exit -- effectively
causing a deadlock that would require Ctrl+C to terminate.

After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shutdown the VFS and exit.
2024-02-18 05:08:59 -05:00
nielash
5638a3841f serve nfs: fix writing files via Finder on macOS - fixes #7503
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.

The issue was primarily caused by the handler's HandleLimit being set to -1. -1 is
the correct default for a NullAuthHandler, but not for a CachingHandler, which
interprets -1 not as "no limit" but as "no cache".

This change sets a high default of 1000000, and gives the user control over it
with a new --nfs-cache-handle-limit flag (available in both `serve nfs` and
`nfsmount`). A minimum of 5 is enforced, as anything lower will be
insufficient to support directory listing.
2024-02-18 05:08:59 -05:00
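A sketch of the limit normalisation described above. The default of 1000000 and the floor of 5 come from the commit; the function name and the treatment of negative values are assumptions:

```go
package main

import "fmt"

// normalizeHandleLimit maps the user-supplied --nfs-cache-handle-limit
// onto a value safe for a CachingHandler: a high default, and a floor
// of 5, below which directory listing cannot work.
func normalizeHandleLimit(limit int) int {
	if limit < 0 {
		return 1000000 // default: effectively "plenty", not "no cache"
	}
	if limit < 5 {
		return 5 // enforced minimum
	}
	return limit
}

func main() {
	fmt.Println(normalizeHandleLimit(-1)) // 1000000
	fmt.Println(normalizeHandleLimit(2))  // 5
	fmt.Println(normalizeHandleLimit(50)) // 50
}
```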
Dan McArdle
6986a43b68 bisync: delete flushCache() function from tests
The flushCache() function has a bug that causes it to never actually
flush the cache. Specifically, it checks whether DirCacheFlush is nil,
but never calls it.

The tests are already passing without flushing the dir cache, so this
commit just deletes flushCache() and its call sites.

Fixes rclone/rclone#7623
2024-02-18 04:14:51 -05:00
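The bug shape, a feature detected via nil check but never invoked, is easy to reproduce. DirCacheFlush is a real optional feature in rclone's fs.Features; the rest below is a sketch:

```go
package main

import "fmt"

type features struct {
	// DirCacheFlush resets the directory cache if non-nil, mirroring
	// the optional feature in rclone's fs.Features.
	DirCacheFlush func()
}

func main() {
	flushed := false
	f := features{DirCacheFlush: func() { flushed = true }}

	// The buggy version only checked for nil:
	if f.DirCacheFlush == nil {
		fmt.Println("cache flushing unsupported")
	} // ...and then never called it.

	// The correct shape checks and calls:
	if flush := f.DirCacheFlush; flush != nil {
		flush()
	}
	fmt.Println(flushed) // true
}
```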
Oksana Zhykina
11c6489fd1 quatrix: add option to skip project folders 2024-02-18 07:38:19 +01:00
Gabriel Ramos
43823bc925 webdav: reduce priority of chunks upload log 2024-02-18 07:29:23 +01:00
dependabot[bot]
a3b661be0d build(deps): bump golangci/golangci-lint-action from 3 to 4
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3 to 4.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 07:25:50 +01:00
Jack Provance
f113c68b13 docs: Fix a heading level in webdav.md documentation (#7631)
This fixes a heading problem under the "Provider Notes" section.
2024-02-18 07:16:23 +01:00
nielash
137f7f62fb sync: use operations.DirMove instead of sync.MoveDir for --fix-case - #7591
This should be more efficient for the purposes of --fix-case, as operations.DirMove
accepts `srcRemote` and `dstRemote` arguments, while sync.MoveDir does not.

This also factors the two-step-move logic to operations.DirMoveCaseInsensitive, so
that it is reusable by other commands.
2024-02-13 15:07:41 -05:00
nielash
dfe76570a1 operations: skip backends incapable of testing TestApplyTransforms - #7591
This adds a step to detect whether the backend is capable of supporting the
feature, and skips the test if not. A backend can be incapable if, for example,
it is non-case-preserving or automatically converts NFD to NFC.
2024-02-13 15:07:41 -05:00
nielash
f4c058e13e bisync: use global --retries and --retries-sleep flags instead of overriding 2024-02-12 13:24:54 -05:00
nielash
407a0f3733 cmd: refactor --retries and --retries-sleep to global config
This change moves the --retries and --retries-sleep flags/variables from cmd to
config (consistent with --low-level-retries), so that they can be more easily
referenced from subcommands.
2024-02-12 13:24:54 -05:00
nielash
b14269fd23 bisync: add support for --retries-sleep - fixes #7555
Before this change, bisync supported --retries but not --retries-sleep.
This change adds support for --retries-sleep.
2024-02-12 13:24:54 -05:00
nielash
76b7bcd4d7 bisync: reset errors between retries
Before this change, in the event of a retryable error, bisync would always retry
the maximum number of times allowed by the `--retries` flag, even if one of the
retries was successful. This change fixes the issue, so that bisync moves on
after the first successful retry.
2024-02-12 13:24:54 -05:00
nielash
782ab3f582 bisync: clean up docs
(as the flags in docs/content/bisync.md do not update automatically, unlike
docs/content/commands/rclone_bisync.md)
2024-02-12 13:24:54 -05:00
nielash
9c6325c131 backend: rename variables to fix CI lint test failures 2024-02-12 12:49:00 -05:00
Volodymyr
2abeda5961 quatrix: fix Content-Range header
This change does not actually affect uploads; it just makes the header correct
according to the definition of Content-Range in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range#range-end
2024-02-09 16:44:45 +00:00
nielash
885a543023 operations: use --download for TestApplyTransforms #7591
This makes it possible to run the test even on remotes without MD5 support.
2024-02-08 16:08:05 +00:00
nielash
f3680d222c operations: fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests #7591
It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
2024-02-08 16:08:02 +00:00
nielash
d2b37cf61e operations: fix case-insensitive moves in operations.Move #7591
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing case of a file on a case insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in-place to an equal-folding name.
(Not all case-insensitive backends have this limitation -- for example, Dropbox
does but macOS local does not.)

After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
2024-02-08 16:07:57 +00:00
Nick Craig-Wood
83f61a9cfb s3: GCS provider: fix server side copy of files bigger than 5G
GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5G so any files bigger
than 5G that rclone attempts to server side copy will fail.

This patch works around the problem by adding a quirk for GCS raising
--s3-copy-cutoff to the maximum. This means that rclone will never use
multi-part copies for files in GCS. This includes files bigger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. However this seems to work with GCS.

See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
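A minimal sketch of the shape of this workaround, with illustrative field names (rclone's real option is an fs.SizeSuffix on the s3 backend):

```
package sketch

import "math"

type s3Options struct {
	Provider   string
	CopyCutoff int64 // bytes; illustrative stand-in for rclone's fs.SizeSuffix
}

// applyGCSQuirk sketches the workaround: raise the copy cutoff to the
// maximum so rclone never attempts a multipart server-side copy on GCS.
func applyGCSQuirk(opt *s3Options) {
	if opt.Provider == "GCS" {
		opt.CopyCutoff = math.MaxInt64
	}
}
```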
2024-02-08 14:53:30 +00:00
Nick Craig-Wood
b206496f63 b2: clarify exactly what --b2-download-auth-duration does in the docs
See: https://forum.rclone.org/t/what-does-b2-download-auth-duration-mean/44504/
2024-02-08 09:39:53 +00:00
Nick Craig-Wood
24fdecf107 ftp: fix mkdir with rsftp which is returning the wrong code
On a successful MKD, rsftp seems to return code 250 whereas we and
the RFC expect 257.

This patch makes rclone accept 250 here as well.

See: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/3
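A minimal sketch of the relaxed reply check, with an illustrative helper name (the real change lives in rclone's ftp backend):

```
package sketch

import "fmt"

// checkMkdirReply accepts both the RFC 959 reply (257) and the
// non-standard 250 that rsftp returns on a successful MKD.
func checkMkdirReply(code int) error {
	switch code {
	case 257, 250:
		return nil
	default:
		return fmt.Errorf("unexpected MKD reply code %d", code)
	}
}
```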
2024-02-07 22:09:56 +00:00
Nick Craig-Wood
9bd7262dfc Add DanielEgbers to contributors 2024-02-07 22:09:56 +00:00
DanielEgbers
a0dff2dd9c Seafile: Fix download/upload error when FILE_SERVER_ROOT is relative
A seafile server can be configured to use a relative URL as
FILE_SERVER_ROOT in order to support more than one hostname/ip. (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360 )

The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.

With this commit it supports both absolute and relative URLs.
2024-02-05 11:48:51 +00:00
Nick Craig-Wood
91b54aafcc rc: add srcFs and dstFs to core/stats and core/transferred stats
Before this change it wasn't possible to see where transfers were
going from and to in core/stats and core/transferred.

When used with rclone mount in particular this made interpreting the
stats very hard.
2024-02-02 11:43:10 +00:00
Nick Craig-Wood
81a29e6895 Add Thomas Müller to contributors 2024-02-02 11:43:10 +00:00
Nick Craig-Wood
f762ef668f Add Michael Eischer to contributors 2024-02-02 11:43:10 +00:00
Thomas Müller
99b9062551 owncloud: add config owncloud_exclude_shares which allows excluding shared files and folders when listing remote resources 2024-01-31 14:47:24 +00:00
Michael Eischer
ef2c5a1998 serve restic: fix error handling
* serve restic: return internal error if listing failed

If listing a remote failed, then rclone returned http status "not
found". This has become a problem since restic 0.16.0 which ignores "not
found"-errors while listing a directory.

Just return internal server error, if something unexpected happens while
listing a directory.

* serve restic: fix error handling if getting a file fails

If the call to `newObject` in `serveObject` fails, then rclone always
returned a "not found" error. This prevents restic from distinguishing
permanent "not found" errors from everything else.

Thus, only return "not found" if the object is not found and an internal
server error otherwise.
2024-01-29 17:54:23 +00:00
Nick Craig-Wood
6e4dd2ab96 docs: ignore amazon cloud drive doc stub when building the docs 2024-01-25 16:35:33 +00:00
Nick Craig-Wood
0c17a17e19 Changelog updates from Version v1.65.2 2024-01-24 16:40:47 +00:00
Nick Craig-Wood
03295bbc3c azureblob: fix data corruption bug #7590
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.

This turned out to be a race condition updating the block count which
caused blocks to be duplicated.

This bug was introduced in this commit in v1.64.0 and will be fixed in v1.65.2

0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056

This race seems to happen mostly when `--checksum` is used, but it can happen otherwise.

Unfortunately Azure blob does not check the MD5 that we send them so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:

    ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"

This adds a check to test the blocklist we upload is as we expected
which would have caught the problem had it been in place earlier.
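A sketch of the two ideas in this fix -- serializing block list updates and verifying the list before commit -- with assumed names, not the actual backend code:

```
package sketch

import (
	"fmt"
	"sync"
)

type chunkWriter struct {
	mu       sync.Mutex
	blockIDs []string
}

// addBlock appends under a mutex so concurrent chunk uploads cannot
// race on the block count and duplicate blocks.
func (w *chunkWriter) addBlock(id string) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.blockIDs = append(w.blockIDs, id)
}

// verifyBlockList is the added safety check: the block list we are
// about to commit must match the number of chunks we uploaded.
func (w *chunkWriter) verifyBlockList(expected int) error {
	w.mu.Lock()
	defer w.mu.Unlock()
	if len(w.blockIDs) != expected {
		return fmt.Errorf("block list mismatch: got %d, want %d", len(w.blockIDs), expected)
	}
	return nil
}
```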
2024-01-24 11:28:05 +00:00
Nick Craig-Wood
b3a1f66759 build: add -race flag to integration tester test_all 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
a947f75d3b Add Kyle Reynolds to contributors 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
ae0a4c8bbf Add Tera to contributors 2024-01-24 11:27:43 +00:00
Kyle Reynolds
7835991147 fs: add more detailed logging for file includes/excludes
This adds a DEBUG log showing why files were included or excluded.

Fixes #7463
2024-01-22 16:46:26 +00:00
nielash
810644e873 bisync: add --resync-mode for customizing --resync - fixes #5681
Before this change, the path1 version of a file always prevailed during
--resync, and many users requested options to automatically select the winner
based on characteristics such as newer, older, larger, and smaller. This change
adds support for such options.

Note that ideally this feature would have been implemented by allowing the
existing `--resync` flag to optionally accept string values such as `--resync
newer`. However, this would have been a breaking change, as the existing flag
is a `bool` and it does not seem to be possible to have a `string` flag that
accepts both `--resync newer` and `--resync` (with no argument.) (`NoOptDefVal`
does not work for this, as it would force an `=` like `--resync=newer`.) So
instead, the best compromise to avoid a breaking change was to add a new
`--resync-mode CHOICE` flag that implies `--resync`, while maintaining the
existing behavior of `--resync` (which implies `--resync-mode path1`). That is,
both flags are now valid, and either can be used without the other.

--resync-mode CHOICE

In the event that a file differs on both sides during a `--resync`,
`--resync-mode` controls which version will overwrite the other. The supported
options are similar to `--conflict-resolve`. For all of the following options,
the version that is kept is referred to as the "winner", and the version that
is overwritten (deleted) is referred to as the "loser". The options are named
after the "winner":

- `path1` - (the default) - the version from Path1 is unconditionally
considered the winner (regardless of `modtime` and `size`, if any). This can be
useful if one side is more trusted or up-to-date than the other, at the time of
the `--resync`.
- `path2` - same as `path1`, except the path2 version is considered the winner.
- `newer` - the newer file (by `modtime`) is considered the winner, regardless
of which side it came from. This may result in having a mix of some winners
from Path1, and some winners from Path2. (The implementation is analogous to
running `rclone copy --update` in both directions.)
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any). This can be a useful option for remotes without
`modtime` support, or with the kinds of files (such as logs) that tend to grow
but not shrink, over time.
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and will fall back to the default of `path1`. (For example, if
`--resync-mode newer` is set, but one of the paths uses a remote that doesn't
support `modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored, and bisync will instead try to determine
whether the files differ by looking at the other `--compare` methods in effect.
(For example, if `--resync-mode newer` is set, but the Path1 and Path2 modtimes
are identical, bisync will compare the sizes.) If bisync concludes that they
differ, preference is given to whichever is the "source" at that moment. (In
practice, this gives a slight advantage to Path2, as the 2to1 copy comes before
the 1to2 copy.) If the files _do not_ differ, nothing is copied (as both sides
are already correct).
- These options apply only to files that exist on both sides (with the same
name and relative path). Files that exist *only* on one side and not the other
are *always* copied to the other, during `--resync` (this is one of the main
differences between resync and non-resync runs).
- `--conflict-resolve`, `--conflict-loser`, and `--conflict-suffix` do not
apply during `--resync`, and unlike these flags, nothing is renamed during
`--resync`. When a file differs on both sides during `--resync`, one version
always overwrites the other (much like in `rclone copy`.) (Consider using
`--backup-dir` to retain a backup of the losing version.)
- Unlike for `--conflict-resolve`, `--resync-mode none` is not a valid option
(or rather, it will be interpreted as "no resync", unless `--resync` has also
been specified, in which case it will be ignored.)
- Winners and losers are decided at the individual file-level only (there is
not currently an option to pick an entire winning directory atomically,
although the `path1` and `path2` options typically produce a similar result.)
- To maintain backward-compatibility, the `--resync` flag implies
`--resync-mode path1` unless a different `--resync-mode` is explicitly
specified. Similarly, all `--resync-mode` options (except `none`) imply
`--resync`, so it is not necessary to use both the `--resync` and
`--resync-mode` flags simultaneously -- either one is sufficient without the
other.
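The winner-selection rules above reduce to a small decision function. A sketch with illustrative types (not bisync's actual internals):

```
package sketch

import "time"

type fileInfo struct {
	modTime time.Time
	size    int64
}

// pickPath1Winner reports whether the Path1 version wins under mode.
// Equal or missing attributes fall through to the documented fallback
// logic, which is omitted here.
func pickPath1Winner(mode string, path1, path2 fileInfo) bool {
	switch mode {
	case "path2":
		return false
	case "newer":
		return path1.modTime.After(path2.modTime)
	case "older":
		return path1.modTime.Before(path2.modTime)
	case "larger":
		return path1.size > path2.size
	case "smaller":
		return path1.size < path2.size
	default: // "path1" is the default
		return true
	}
}
```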
2024-01-20 17:17:01 -05:00
nielash
8d3bcc025a bisync: fix --colors flag
A quick fix to get around the lack of support in fs.Infof etc.
2024-01-20 17:17:01 -05:00
nielash
0f549520ef bisync: factor resync to separate file 2024-01-20 17:17:01 -05:00
nielash
ba16fcfaf5 bisync: skip empty test case dirs 2024-01-20 17:17:01 -05:00
nielash
68f0998699 bisync: add options to auto-resolve conflicts - fixes #7471
Before this change, when a file was new/changed on both paths (relative to the
prior sync), and the versions on each side were not identical, bisync would
keep both versions, renaming them with ..path1 and ..path2 suffixes,
respectively. Many users have requested more control over how bisync handles
such conflicts -- including an option to automatically select one version as
the "winner" and rename or delete the "loser". This change introduces support
for such options.

--conflict-resolve CHOICE

In bisync, a "conflict" is a file that is *new* or *changed* on *both sides*
(relative to the prior run) AND is *not currently identical* on both sides.
`--conflict-resolve` controls how bisync handles such a scenario. The currently
supported options are:

- `none` - (the default) - do not attempt to pick a winner, keep and rename
both files according to `--conflict-loser` and
`--conflict-suffix` settings. For example, with the default
settings, `file.txt` on Path1 is renamed `file.txt.conflict1` and `file.txt` on
Path2 is renamed `file.txt.conflict2`. Both are copied to the opposite path
during the run, so both sides end up with a copy of both files. (As `none` is
the default, it is not necessary to specify `--conflict-resolve none` -- you
can just omit the flag.)
- `newer` - the newer file (by `modtime`) is considered the winner and is
copied without renaming. The older file (the "loser") is handled according to
`--conflict-loser` and `--conflict-suffix` settings (either renamed or
deleted.) For example, if `file.txt` on Path1 is newer than `file.txt` on
Path2, the result on both sides (with other default settings) will be `file.txt`
(winner from Path1) and `file.txt.conflict1` (loser from Path2).
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `path1` - the version from Path1 is unconditionally considered the winner
(regardless of `modtime` and `size`, if any). This can be useful if one side is
usually more trusted or up-to-date than the other.
- `path2` - same as `path1`, except the path2 version is considered the
winner.

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and fall back to `none`. (For example, if `--conflict-resolve
newer` is set, but one of the paths uses a remote that doesn't support
`modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored and fall back to `none`. (For example, if
`--conflict-resolve newer` is set, but the Path1 and Path2 modtimes are
identical, even if the sizes may differ.)
- If the file's content is currently identical on both sides, it is not
considered a "conflict", even if new or changed on both sides since the prior
sync. (For example, if you made a change on one side and then synced it to the
other side by other means.) Therefore, none of the conflict resolution flags
apply in this scenario.
- The conflict resolution flags do not apply during a `--resync`, as there is
no "prior run" to speak of (but see `--resync-mode` for similar
options.)

--conflict-loser CHOICE

`--conflict-loser` determines what happens to the "loser" of a sync conflict
(when `--conflict-resolve` determines a winner) or to both
files (when there is no winner.) The currently supported options are:

- `num` - (the default) - auto-number the conflicts by automatically appending
the next available number to the `--conflict-suffix`, in chronological order.
For example, with the default settings, the first conflict for `file.txt` will
be renamed `file.txt.conflict1`. If `file.txt.conflict1` already exists,
`file.txt.conflict2` will be used instead (etc., up to a maximum of
9223372036854775807 conflicts.)
- `pathname` - rename the conflicts according to which side they came from,
which was the default behavior prior to `v1.66`. For example, with
`--conflict-suffix path`, `file.txt` from Path1 will be renamed
`file.txt.path1`, and `file.txt` from Path2 will be renamed `file.txt.path2`.
If two non-identical suffixes are provided (ex. `--conflict-suffix
cloud,local`), the trailing digit is omitted. Importantly, note that with
`pathname`, there is no auto-numbering beyond `2`, so if `file.txt.path2`
somehow already exists, it will be overwritten. Using a dynamic date variable
in your `--conflict-suffix` (see below) is one possible way to avoid this. Note
also that conflicts-of-conflicts are possible, if the original conflict is not
manually resolved -- for example, if for some reason you edited
`file.txt.path1` on both sides, and those edits were different, the result
would be `file.txt.path1.path1` and `file.txt.path1.path2` (in addition to
`file.txt.path2`.)
- `delete` - keep the winner only and delete the loser, instead of renaming it.
If a winner cannot be determined (see `--conflict-resolve` for details on how
this could happen), `delete` is ignored and the default `num` is used instead
(i.e. both versions are kept and renamed, and neither is deleted.) `delete` is
inherently the most destructive option, so use it only with care.

For all of the above options, note that if a winner cannot be determined (see
`--conflict-resolve` for details on how this could happen), or if
`--conflict-resolve` is not in use, *both* files will be renamed.

--conflict-suffix STRING[,STRING]

`--conflict-suffix` controls the suffix that is appended when bisync renames a
`--conflict-loser` (default: `conflict`).
`--conflict-suffix` will accept either one string or two comma-separated
strings to assign different suffixes to Path1 vs. Path2. This may be helpful
later in identifying the source of the conflict. (For example,
`--conflict-suffix dropboxconflict,laptopconflict`)

With `--conflict-loser num`, a number is always appended to the suffix. With
`--conflict-loser pathname`, a number is appended only when one suffix is
specified (or when two identical suffixes are specified.) i.e. with
`--conflict-loser pathname`, all of the following would produce exactly the
same result:

```
--conflict-suffix path
--conflict-suffix path,path
--conflict-suffix path1,path2
```

Suffixes may be as short as 1 character. By default, the suffix is appended
after any other extensions (ex. `file.jpg.conflict1`), however, this can be
changed with the `--suffix-keep-extension` flag (i.e. to instead result in
`file.conflict1.jpg`).

`--conflict-suffix` supports several *dynamic date variables* when enclosed in
curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example:

```
--conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1
```

All of the formats described [here](https://pkg.go.dev/time#pkg-constants) and
[here](https://pkg.go.dev/time#example-Time.Format) are supported, but take
care to ensure that your chosen format does not use any characters that are
illegal on your remotes (for example, macOS does not allow colons in
filenames, and slashes are also best avoided as they are often interpreted as
directory separators.) To address this particular issue, an additional
`{MacFriendlyTime}` (or just `{mac}`) option is supported, which results in
`2006-01-02 0304PM`.

Note that `--conflict-suffix` is entirely separate from rclone's main `--suffix`
flag. This is intentional, as users may wish to use both flags simultaneously,
if also using `--backup-dir`.

Finally, note that the default in bisync prior to `v1.66` was to rename
conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
`conflict`.) Bisync now defaults to a single dot instead of a double dot, but
additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use:

```
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
```
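The `num` auto-numbering described above amounts to finding the first free suffix. A sketch, where the `exists` callback is an assumed stand-in for a remote lookup:

```
package sketch

import "fmt"

// conflictName returns the first free auto-numbered conflict name,
// e.g. "file.txt" + "conflict" -> "file.txt.conflict1" (or conflict2
// if that already exists, and so on).
func conflictName(base, suffix string, exists func(string) bool) string {
	for n := 1; ; n++ {
		candidate := fmt.Sprintf("%s.%s%d", base, suffix, n)
		if !exists(candidate) {
			return candidate
		}
	}
}
```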
2024-01-20 17:17:01 -05:00
nielash
d031cc138d bisync: check for syntax errors in path args - fixes #7511
Before this change, certain shell quoting / escaping errors (particularly on
Windows) were not detected by Bisync, possibly resulting in incorrect expansion
and confusing errors. In particular, Windows paths with a single trailing
backslash followed by a quote would be interpreted as an escaped quote --
resulting in the quote and subsequent flags being erroneously considered part
of the path.

After this change, Bisync specifically checks for a few of the most common
patterns, and if detected, exits with a more helpful error message before doing
any damage.
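A sketch of the kind of heuristic meant here, with assumed patterns (bisync's real checks cover more cases):

```
package sketch

import (
	"fmt"
	"strings"
)

// checkPathArg rejects a path that looks like a Windows quoting
// mistake, where a trailing backslash escaped the closing quote and
// swallowed the subsequent flags into the path argument.
func checkPathArg(arg string) error {
	if strings.HasSuffix(arg, `"`) || strings.Contains(arg, `" --`) {
		return fmt.Errorf("path %q looks like a quoting/escaping error; "+
			"on Windows, avoid a trailing backslash before a closing quote", arg)
	}
	return nil
}
```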
2024-01-20 16:54:12 -05:00
nielash
e71b252b65 bisync: add overlapping paths check
Before this change, Bisync did not check to make sure that Path1 and Path2 do
not overlap, nor did it check for overlaps with `--backup-dir`. While `sync`
does check for these things, it can sometimes be fooled because of the way
Bisync calls it with `--files-from` filters. Relying on sync could also leave a
run in a half-finished state if it were to error in one direction but not the
other (`--backup-dir` only checks for overlaps with the dest.)

After this change, Bisync does its own check up front, so we can quickly return
an error and exit before any changes are made.
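For local-style paths, the overlap test reduces to a prefix check at a directory boundary. A sketch (bisync's real check works on parsed remotes, not raw strings):

```
package sketch

import (
	"fmt"
	"strings"
)

// checkOverlap errors if one path contains the other. The trailing
// slash ensures "/a/b" does not falsely match "/a/bc".
func checkOverlap(path1, path2 string) error {
	p1 := strings.TrimSuffix(path1, "/") + "/"
	p2 := strings.TrimSuffix(path2, "/") + "/"
	if strings.HasPrefix(p1, p2) || strings.HasPrefix(p2, p1) {
		return fmt.Errorf("paths must not overlap: %q and %q", path1, path2)
	}
	return nil
}
```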
2024-01-20 16:54:12 -05:00
nielash
e9cd3e5986 bisync: allow lock file expiration/renewal with --max-lock - #7470
Background: Bisync uses lock files as a safety feature to prevent
interference from other bisync runs while it is running. Bisync normally
removes these lock files at the end of a run, but if bisync is abruptly
interrupted, these files will be left behind. By default, they will lock out
all future runs, until the user has a chance to manually check things out and
remove the lock.

Before this change, lock files blocked future runs indefinitely, so a single
interrupted run would lock out all future runs forever (absent user
intervention), and there was no way to change this behavior.

After this change, a new --max-lock flag can be used to make lock files
automatically expire after a certain period of time, so that future runs are
not locked out forever, and auto-recovery is possible. --max-lock can be any
duration 2m or greater (or 0 to disable). If set, lock files older than this
will be considered "expired", and future runs will be allowed to disregard them
and proceed. (Note that the --max-lock duration must be set by the process that
left the lock file -- not the later one interpreting it.)

If set, bisync will also "renew" these lock files every
--max-lock minus one minute throughout a run, for extra safety. (For example,
with --max-lock 5m, bisync would renew the lock file (for another 5 minutes)
every 4 minutes until the run has completed.) In other words, it should not be
possible for a lock file to pass its expiration time while the process that
created it is still running -- and you can therefore be reasonably sure that
any _expired_ lock file you may find was left there by an interrupted run, not
one that is still running and just taking a while.

If --max-lock is 0 or not set, the default is that lock files will never
expire, and will block future runs (of these same two bisync paths)
indefinitely.

For maximum resilience from disruptions, consider setting a relatively short
duration like --max-lock 2m along with --resilient and --recover, and a
relatively frequent cron schedule. The result will be a very robust
"set-it-and-forget-it" bisync run that can automatically bounce back from
almost any interruption it might encounter, without requiring the user to get
involved and run a --resync.
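A sketch of the expiry and renewal arithmetic described above, with illustrative names:

```
package sketch

import "time"

// lockExpired reports whether a lock file older than maxLock may be
// disregarded. A maxLock of 0 means locks never expire.
func lockExpired(lockMtime time.Time, maxLock time.Duration) bool {
	if maxLock == 0 {
		return false
	}
	return time.Since(lockMtime) > maxLock
}

// renewInterval is the renewal cadence: re-touch the lock every
// maxLock minus one minute so it cannot expire mid-run. Callers skip
// renewal entirely when maxLock is 0.
func renewInterval(maxLock time.Duration) time.Duration {
	return maxLock - time.Minute
}
```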
2024-01-20 16:31:28 -05:00
nielash
4025f42bd9 bisync: Graceful Shutdown, --recover from interruptions without --resync - fixes #7470
Before this change, bisync had no mechanism to gracefully cancel a sync early
and exit in a clean state. Additionally, there was no way to recover on the
next run -- any interruption at all would cause bisync to require a --resync,
which made bisync more difficult to use as a scheduled background process.

This change introduces a "Graceful Shutdown" mode and --recover flag to
robustly recover from even un-graceful shutdowns.

If --recover is set, in the event of a sudden interruption or other un-graceful
shutdown, bisync will attempt to automatically recover on the next run, instead
of requiring --resync. Bisync is able to recover robustly by keeping one
"backup" listing at all times, representing the state of both paths after the
last known successful sync. Bisync can then compare the current state with this
snapshot to determine which changes it needs to retry. Changes that were synced
after this snapshot (during the run that was later interrupted) will appear to
bisync as if they are "new or changed on both sides", but in most cases this is
not a problem, as bisync will simply do its usual "equality check" and learn
that no action needs to be taken on these files, since they are already
identical on both sides.

In the rare event that a file is synced successfully during a run that later
aborts, and then that same file changes AGAIN before the next run, bisync will
think it is a sync conflict, and handle it accordingly. (From bisync's
perspective, the file has changed on both sides since the last trusted sync,
and the files on either side are not currently identical.) Therefore, --recover
carries with it a slightly increased chance of having conflicts -- though in
practice this is pretty rare, as the conditions required to cause it are quite
specific. This risk can be reduced by using bisync's "Graceful Shutdown" mode
(triggered by sending SIGINT or Ctrl+C), when you have the choice, instead of
forcing a sudden termination.

--recover and --resilient are similar, but distinct -- the main difference is
that --resilient is about _retrying_, while --recover is about _recovering_.
Most users will probably want both. --resilient allows retrying when bisync has
chosen to abort itself due to safety features such as failing --check-access or
detecting a filter change. --resilient does not cover external interruptions
such as a user shutting down their computer in the middle of a sync -- that is
what --recover is for.

"Graceful Shutdown" mode is activated by sending SIGINT or pressing Ctrl+C
during a run. Once triggered, bisync will use best efforts to exit cleanly
before the timer runs out. If bisync is in the middle of transferring files, it
will attempt to cleanly empty its queue by finishing what it has started but
not taking more. If it cannot do so within 30 seconds, it will cancel the
in-progress transfers at that point and then give itself a maximum of 60
seconds to wrap up, save its state for next time, and exit. With the -vP flags
you will see constant status updates and a final confirmation of whether or not
the graceful shutdown was successful.

At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C
will trigger an immediate, un-graceful exit, which will leave things in a
messier state. Usually a robust recovery will still be possible if using
--recover mode, otherwise you will need to do a --resync.

If you plan to use Graceful Shutdown mode, it is recommended to use --resilient
and --recover, and it is important to NOT use --inplace, otherwise you risk
leaving partially-written files on one side, which may be confused for real
files on the next run. Note also that in the event of an abrupt interruption, a
lock file will be left behind to block concurrent runs. You will need to delete
it before you can proceed with the next run (or wait for it to expire on its
own, if using --max-lock.)
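A sketch of the two-stage Ctrl+C behavior described above (illustrative only, not bisync's actual implementation):

```
package sketch

import (
	"context"
	"log"
	"os"
	"os/signal"
)

// installInterruptHandler cancels the context on the first SIGINT so
// in-flight work can drain, and exits immediately on the second.
func installInterruptHandler(cancel context.CancelFunc) {
	ch := make(chan os.Signal, 2)
	signal.Notify(ch, os.Interrupt)
	go func() {
		<-ch
		log.Println("Graceful shutdown requested; Ctrl+C again to exit immediately")
		cancel()
		<-ch
		log.Println("Exiting immediately")
		os.Exit(2)
	}()
}
```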
2024-01-20 16:31:28 -05:00
nielash
b4216648e4 bisync: full support for comparing checksum, size, modtime - fixes #5679 fixes #5683 fixes #5684 fixes #5675
Before this change, bisync could only detect changes based on modtime, and
would refuse to run if either path lacked modtime support. This made bisync
unavailable for many of rclone's backends. Additionally, bisync did not account
for the Fs's precision when comparing modtimes, meaning that they could only be
reliably compared within the same side -- not against the opposite side. Size
and checksum (even when available) were ignored completely for deltas.

After this change, bisync now fully supports comparing based on any combination
of size, modtime, and checksum, lifting the prior restriction on backends
without modtime support. The comparison logic considers the backend's
precision, hash types, and other features as appropriate.

The comparison features optionally use a new --compare flag (which takes any
combination of size,modtime,checksum) and even supports some combinations not
otherwise supported in `sync` (like comparing all three at the same time.) By
default (without the --compare flag), bisync inherits the same comparison
options as `sync` (that is: size and modtime by default, unless modified with
flags such as --checksum or --size-only.) If the --compare flag is set, it will
override these defaults.

If --compare includes checksum and both remotes support checksums but have no
hash types in common with each other, checksums will be considered only for
comparisons within the same side (to determine what has changed since the prior
sync), but not for comparisons against the opposite side. If one side supports
checksums and the other does not, checksums will only be considered on the side
that supports them. When comparing with checksum and/or size without modtime,
bisync cannot determine whether a file is newer or older -- only whether it is
changed or unchanged. (If it is changed on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)

Also included are some new flags to customize the checksum comparison behavior
on backends where hashes are slow or unavailable. --no-slow-hash and
--slow-hash-sync-only allow selectively ignoring checksums on backends such as
local where they are slow. --download-hash allows computing them by downloading
when (and only when) they're otherwise not available. Of course, this option
probably won't be practical with large files, but may be a good option for
syncing small-but-important files with maximum accuracy (for example, a source
code repo on a crypt remote.) An additional advantage over methods like
cryptcheck is that the original file is not required for comparison (for
example, --download-hash can be used to bisync two different crypt remotes with
different passwords.)

Additionally, all of the above are now considered during the final --check-sync
for much-improved accuracy (before this change, it only compared filenames!)

Many other details are explained in the included docs.
2024-01-20 16:08:06 -05:00
nielash
d8e07bfd8e bisync: document beta status more clearly - fixes #6082 2024-01-20 15:38:26 -05:00
nielash
199d82969b bisync: normalize session name to non-canonical - fixes #7423
Before this change, bisync used the "canonical" Fs name in the filename for its
listing files, including any {hexstring} suffix. An unintended consequence of
this was that if a user added a backend-specific flag from the command line
(thus "overriding" the config), bisync would fail to find the listing files it
created during the prior run without this flag, due to the path now having a
{hexstring} suffix that wasn't there before (or vice versa, if the flag was
present when the session was established, and later removed.) This would
sometimes cause bisync to fail with a critical error (if no listing existed
with the alternate name), or worse -- it would sometimes cause bisync to use an
old, incorrect listing (if old listings with the alternate name DID still
exist, from before the user changed their flags.)

After this change, the issue is fixed by always normalizing the SessionName to
the non-canonical version (no {hexstring} suffix), regardless of the flags. To
avoid a breaking change, we first check if a suffixed listing exists. If so, we
rename it (and overwrite the non-suffixed version, if any.) If not, we carry on
with the non-suffixed version. (We should only find a suffixed version if
created prior to this commit.)

The result for the user is that the same pair of paths will always use the same
.lst filenames, with or without backend-specific flags.
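A sketch of the normalization step, assuming the suffix always looks like `{hexstring}` (the regexp shape is an assumption for illustration):

```
package sketch

import "regexp"

// hexSuffix matches a canonical-name suffix like "{a1b2c3d4}" that
// rclone appends when backend flags override the config.
var hexSuffix = regexp.MustCompile(`\{[0-9a-fA-F]+\}$`)

// normalizeSessionName strips the {hexstring} suffix so the same pair
// of paths always maps to the same .lst filenames.
func normalizeSessionName(name string) string {
	return hexSuffix.ReplaceAllString(name, "")
}
```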
2024-01-20 15:38:26 -05:00
nielash
bb74a13c07 bisync: update version number in docs
as these changes did not make it in time for 1.65
2024-01-20 15:38:26 -05:00
nielash
57624629d6 bisync: account for differences in backend features on integration tests - see #5679
Before this change, integration tests often could not be run on backends with
differing features from the local system that goldenized them. In particular,
differences in modtime precision, checksum support, and encoding would cause
false positives. After this change, the tests more accurately account for the
features of the backend being tested, which allows us to see true positives
more clearly, and more meaningfully assess whether a backend is supported.
2024-01-20 14:50:08 -05:00
nielash
7c6f0cc455 operations: fix renaming a file on macOS
Before this change, a file would sometimes be silently deleted instead of
renamed on macOS, due to its unique handling of unicode normalization. Rclone
already had a SameObject check in place for case insensitivity before deleting
the source (for example if "hello.txt" was renamed to "HELLO.txt"), but had no
such check for unicode normalization. After this change, the delete is skipped
on macOS if the src and dst filenames normalize to the same NFC string.

Example of the previous behavior:

 ~ % rclone touch /Users/nielash/rename_test/ö
 ~ % rclone lsl /Users/nielash/rename_test/ö
        0 2023-11-21 17:28:06.170486000 ö
 ~ % rclone moveto /Users/nielash/rename_test/ö /Users/nielash/rename_test/ö -vv
2023/11/21 17:28:51 DEBUG : rclone: Version "v1.64.0" starting with parameters ["rclone" "moveto" "/Users/nielash/rename_test/ö" "/Users/nielash/rename_test/ö" "-vv"]
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/ö"
2023/11/21 17:28:51 DEBUG : Using config file from "/Users/nielash/.config/rclone/rclone.conf"
2023/11/21 17:28:51 DEBUG : fs cache: adding new entry for parent of "/Users/nielash/rename_test/ö", "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/"
2023/11/21 17:28:51 DEBUG : fs cache: renaming cache item "/Users/nielash/rename_test/" to be canonical "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : ö: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/11/21 17:28:51 DEBUG : ö: Unchanged skipping
2023/11/21 17:28:51 INFO  : ö: Deleted
2023/11/21 17:28:51 INFO  :
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Deleted:                1 (files), 0 (dirs)
Elapsed time:         0.0s

2023/11/21 17:28:51 DEBUG : 5 go routines active
 ~ % rclone lsl /Users/nielash/rename_test/
 ~ %
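A minimal sketch of the added check using golang.org/x/text/unicode/norm (rclone's real SameObject logic also covers case insensitivity):

```
package sketch

import "golang.org/x/text/unicode/norm"

// sameAfterNFC reports whether the source and destination names
// normalize to the same NFC string, in which case the post-move
// delete of the "source" is skipped on macOS.
func sameAfterNFC(src, dst string) bool {
	return norm.NFC.String(src) == norm.NFC.String(dst)
}
```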
2024-01-20 14:50:08 -05:00
nielash
422b037087 bisync: fallback to cryptcheck or --download when can't check hash
Bisync checks file equality before renaming sync conflicts by comparing
checksums. Before this change, backends without checksum support (notably
Crypt) would fall back to --size-only for these checks, which is not a very
safe method (differing files can sometimes have the same size, especially if
they're small.) After this change, Crypt remotes fallback to using Cryptcheck
so that checksums can be compared. As a last resort when neither Check nor
Cryptcheck are available, files are compared using --download so that we can be
certain the files are identical regardless of checksum support.
2024-01-20 14:50:08 -05:00
nielash
7f854acb05 local: fix cleanRootPath on Windows after go1.21.4 stdlib update
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
2024-01-20 14:50:08 -05:00
nielash
bbf9b1b3d2 bisync: support two --backup-dir paths on different remotes
Before this change, bisync supported `--backup-dir` only when `Path1` and
`Path2` were different paths on the same remote. With this change, bisync
introduces new `--backup-dir1` and `--backup-dir2` flags to support separate
backup-dirs for `Path1` and `Path2`.

`--backup-dir1` and `--backup-dir2` can use different remotes from each other,
but `--backup-dir1` must use the same remote as `Path1`, and `--backup-dir2`
must use the same remote as `Path2`. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.

The standard `--backup-dir` will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either `--backup-dir1` and `--backup-dir2` are set, they will
override `--backup-dir`.
2024-01-20 14:50:08 -05:00
nielash
9cf783677e bisync: support files with unknown length, including Google Docs - fixes #5696
Before this change, bisync intentionally ignored Google Docs (albeit in a
buggy way that caused problems during --resync.) After this change, Google Docs
(including Google Sheets, Slides, etc.) are now supported in bisync, subject to
the same options, defaults, and limitations as in `rclone sync`. When bisyncing
drive with non-drive backends, the drive -> non-drive direction is controlled
by `--drive-export-formats` (default `"docx,xlsx,pptx,svg"`) and the non-drive
-> drive direction is controlled by `--drive-import-formats` (default none.)

For example, with the default export/import formats, a Google Sheet on the
drive side will be synced to an `.xlsx` file on the non-drive side. In the
reverse direction, `.xlsx` files with filenames that match an existing Google
Sheet will be synced to that Google Sheet, while `.xlsx` files that do NOT
match an existing Google Sheet will be copied to drive as normal `.xlsx` files
(without conversion to Sheets, although the Google Drive web browser UI may
still give you the option to open it as one.)

If `--drive-import-formats` is set (it's not, by default), then all of the
specified formats will be converted to Google Docs, if there is no existing
Google Doc with a matching name. Caution: such conversion can be quite lossy,
and in most cases it's probably not what you want!

To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for
Desktop"), use: `--drive-export-formats url` (or alternatives.)

Note that these link files cannot be edited on the non-drive side -- you will
get errors if you try to sync an edited link file back to drive. They CAN be
deleted (it will result in deleting the corresponding Google Doc.) If you
create a `.url` file on the non-drive side that does not match an existing
Google Doc, bisyncing it will just result in copying the literal `.url` file
over to drive (no Google Doc will be created.) So, as a general rule of thumb,
think of them as read-only placeholders on the non-drive side, and make all
your changes on the drive side.

Likewise, even with other export-formats, it is best to only move/rename Google
Docs on the drive side. This is because otherwise, bisync will interpret this
as a file deleted and another created, and accordingly, it will delete the
Google Doc and create a new file at the new path. (Whether or not that new file
is a Google Doc depends on `--drive-import-formats`.)

Lastly, take note that all Google Docs on the drive side have a size of `-1`
and no checksum. Therefore, they cannot be reliably synced with the
`--checksum` or `--size-only` flags. (To be exact: they will still get
created/deleted, and bisync's delta engine will notice changes and queue them
for syncing, but the underlying sync function will consider them identical and
skip them.) To work around this, use the default (modtime and size) instead of
`--checksum` or `--size-only`.

To ignore Google Docs entirely, use `--drive-skip-gdocs`.

Nearly all of the Google Docs logic is outsourced to the Drive backend, so
future changes should also be supported by bisync.
2024-01-20 14:50:08 -05:00
nielash
4d5d6ee61b bisync: provide more info in critical error msgs 2024-01-20 14:50:08 -05:00
nielash
44637dcd7f bisync: high-level retries if --resilient
Before this change, bisync had no ability to retry in the event of sync errors.
After this change, bisync will retry if --resilient is passed, but only in one
direction at a time. We can safely retry in one direction because the source is
still intact, even if the dest was left in a messy state. If the first
direction still fails after our final retry, we abort and do NOT continue in
the other direction, to prevent the messy dest from polluting the source. If
the first direction succeeds, we do then allow retries in the other direction.

The number of retries is controllable by --retries (default 3)
2024-01-20 14:50:08 -05:00
nielash
98f539de8f bisync: refactor normalization code, fix deltas - fixes #7270
Refactored the case / unicode normalization logic to be much more efficient,
 and fix the last outstanding issue from #7270. Before this change, we were
 doing lots of for loops and re-normalizing strings we had already normalized
 earlier. Now, we leave the normalizing entirely to March and avoid
 re-transforming later, which seems to make a large difference in terms of
 performance.
2024-01-20 14:50:08 -05:00
nielash
58fd6d7b94 docs: add bisync to index 2024-01-20 14:50:08 -05:00
nielash
9c96c13a35 bisync: optimize --resync performance -- partially addresses #5681
Before this change, --resync was handled in three steps, and needed to do a lot
of unnecessary work to implement its own --ignore-existing logic, which also
caused problems with unicode normalization, in addition to being pretty slow.
After this change, it is refactored to produce the same result much more
efficiently, by reducing the three steps to two and letting ci.IgnoreExisting
do the work instead of reinventing the wheel.

The behavior and sync order remain unchanged for now -- just faster (but see
the ongoing lively discussions about potential future changes in #5681!)
2024-01-20 14:50:08 -05:00
nielash
f7f4651828 bisync: handle unicode and case normalization consistently - mostly-fixes #7270
Before this change, Bisync sometimes normalized NFD to NFC and sometimes
did not, causing errors in some scenarios (particularly for users of macOS).
It was similarly inconsistent in its handling of case-insensitivity.

There were three main places where Bisync should have normalized, but didn't:

1. When building the list of files that need to be transferred during --resync
2. When building the list of deltas during a non-resync
3. When comparing Path1 to Path2 during --check-sync

After this change, 1 and 3 are resolved, and bisync supports
--no-unicode-normalization and --ignore-case-sync in the same way as sync.
2 will be addressed in a future update.
2024-01-20 14:50:08 -05:00
nielash
11afc3dde0 sync: --fix-case flag to rename case insensitive dest - fixes #4854
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!

After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
2024-01-20 14:50:08 -05:00
nielash
88e516adee moveOrCopyFile: avoid panic on --dry-run
Before this change, changing the case of a file on a case insensitive remote
would fatally panic when `--dry-run` was set, due to `moveOrCopyFile`
attempting to access the non-existent `tmpObj` it (would normally have)
created. After this change, the panic is avoided by skipping this step during
a `--dry-run` (with the usual "skipped as --dry-run is set" log message.)
2024-01-20 14:50:08 -05:00
nielash
fd95511091 bisync: generate listings concurrently with march -- fixes #7332
Before this change, bisync needed to build a full listing for Path1, then a
full listing for Path2, then compare them -- and each of those tasks needed to
finish before the next one could start. In addition to being slow and
inefficient, it also caused real problems if a file changed between the time
bisync checked it on Path1 and the time it checked the corresponding file on
Path2.

This change solves these problems by listing both paths concurrently, using
the same March infrastructure that check and sync use to traverse two
directories in lock-step, optimized by Go's robust concurrency support.
Listings should now be much faster, and any given path is now checked
nearly-instantaneously on both sides, minimizing room for error.

Further discussion:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors
2024-01-20 14:50:08 -05:00
nielash
0cac5d67ab bisync: introduce terminal colors
This introduces a few basic color codings to make the terminal output more
readable (and more fun). Rclone's standard --color flag is supported.
(AUTO|NEVER|ALWAYS)

Only a few lines have colors right now -- more will probably be added in
future versions.
2024-01-20 14:50:08 -05:00
nielash
6d6dc00abb bisync: rollback listing on error
Before this change, bisync had no mechanism for "retrying" a file again next
time, in the event of an unexpected and possibly temporary error. After this
change, bisync is now essentially able to mark a file as needing to be
rechecked next time. Bisync does this by keeping one prior listing on hand at
all times. In a low-confidence situation, bisync can revert a given file row
back to its state at the end of the last known successful sync, ensuring that
any subsequent changes will be re-noticed on the next run.
This can potentially be helpful for a dynamically changing file system, where
files may be changing quickly while bisync is working with them.
2024-01-20 14:50:08 -05:00
nielash
079763f09a bisync: isDir check for deltas
Before this change, if --create-empty-src-dirs was specified, bisync would
include directories in the list of deltas to evaluate by their modtime,
relative to the prior sync. This was unnecessary, as rclone does not yet
support setting modtime for directories.

After this change, we skip directories when comparing modtimes. (In other
words, we care only if a directory is created or deleted, not whether it is
newer or older.)
2024-01-20 14:50:08 -05:00
nielash
978cbf9360 bisync: generate final listing from sync results, not relisting -- fixes #5676
Before this change, if there were changes to sync, bisync listed each path
twice: once before the sync and once after. The second listing caused quite
a lot of problems, in addition to making each run much slower and more
expensive. A serious side-effect was that file changes could slip through
undetected, if they happened to occur while a sync was running (between the
first and second listing snapshots.)

After this change, the second listing is eliminated by getting the underlying
sync operation to report back a list of what it changed. Not only is this more
efficient, but also much more robust to concurrent modifications. It should no
longer be necessary to avoid make changes while it's running -- bisync will
simply learn about those changes next time and handle them on the next run.
Additionally, this also makes --check-sync usable again.

For further discussion, see:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Final%20listings%20should%20be%20created%20from%20initial%20snapshot%20%2B%20deltas%2C%20not%20full%20re%2Dscans%2C%20to%20avoid%20errors%20if%20files%20changed%20during%20sync
2024-01-20 14:50:08 -05:00
nielash
3a50f35df9 sync: report list of synced paths to file -- see #7282
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.

Note that it has a few limitations, and certain scenarios
are not currently supported:

--max-duration / CutoffModeHard
--compare-dest / --copy-dest (because equal() is called multiple times for the
    same file)
server-side moves of an entire dir at once (because we never get the individual
file objects in the dir)
High-level retries, because there would be dupes
Possibly some error scenarios that didn't come up on the tests

Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)

Only rclone sync is currently supported -- support for copy and move may be
added in the future.
2024-01-20 14:50:08 -05:00
nielash
c0968a0987 operations: add logger to log list of sync results -- fixes #7282
Logger instruments the Sync routine with a status report for each file pair,
making it possible to output a list of the synced files, along with their
attributes and sigil categorization (match/differ/missing/etc.)
It is very customizable by passing in a custom LoggerFn, options, and
io.Writers to be written to. Possible uses include:
- allow sync to write path lists to a file, in the same format as rclone check
- allow sync to output a --dest-after file using the same format flags as lsf
- receive results as JSON when calling sync from an internal function
- predict the post-sync state of the destination

For usage examples, see bisync.WriteResults() or sync.SyncLoggerFn()
2024-01-20 14:50:08 -05:00
nielash
932f9ec34a bisync: document support for atomic uploads 2024-01-20 14:50:08 -05:00
nielash
0e5f12126f bisync: merge copies and deletes, support --track-renames and --backup-dir -- fixes #5690 fixes #5685
Before this change, bisync handled copies and deletes in separate operations.
After this change, they are combined in one sync operation, which is faster
and also allows bisync to support --track-renames and --backup-dir.

Bisync uses a --files-from filter containing only the paths bisync has
determined need to be synced. Just like in sync (but in both directions),
if a path is present on the dst but not the src, it's interpreted as a delete
rather than a copy.
2024-01-20 14:50:08 -05:00
nielash
5c7ba0bfd3 bisync: fix tests on macOS
normalizes unicode and ignores .DS_Store files to make testing possible
on macOS
2024-01-20 14:50:08 -05:00
nielash
9933d6c071 check: respect --no-unicode-normalization and --ignore-case-sync for --checkfile
Before this change, --no-unicode-normalization and --ignore-case-sync
were respected for rclone check but not for rclone check --checkfile,
causing them to give different results.

This change adds support for --checkfile so that the behavior is consistent.
2024-01-20 14:50:08 -05:00
nielash
66929416d4 lsf: add --time-format flag
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).

Examples:
	rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
	rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
	rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
	rclone lsf remote:path --format pt --time-format RFC3339
	rclone lsf remote:path --format pt --time-format DateOnly
	rclone lsf remote:path --format pt --time-format max

--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
2024-01-20 14:50:08 -05:00
dependabot[bot]
b06935a12e build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-19 17:19:08 +00:00
Tera
806f6ab1eb add missing backtick 2024-01-19 11:17:36 +00:00
Nick Craig-Wood
c482624a6c config: add config/paths to the rc as rclone config paths equivalent
Fixes #7568
2024-01-18 17:47:39 +00:00
kapitainsky
17fea90ac9 docs: add rclone OS requirements
Adds a list of rclone OS requirements and the latest rclone versions known to work with specific historical OS versions.

Discussed on the forum:
https://forum.rclone.org/t/rclone-1-65-1-runtime-exception-error-crash-immediately-after-running-the-command/44051

Fixes: #7571
2024-01-17 16:42:33 +00:00
Harshit Budhraja
78176d39fd imagekit: updated overview - supported operations 2024-01-17 16:38:54 +00:00
Nick Craig-Wood
ae3c73f610 stats: fix race between ResetCounters and stopAverageLoop called from time.AfterFunc
Before this change StatsInfo.ResetCounters() and stopAverageLoop()
(when called from time.AfterFunc) could race on StatsInfo.average.
This was because the deferred stopAverageLoop accessed
StatsInfo.average without locking.

For some reason this only ever happened on macOS. This caused the CI
to fail on macOS, which in turn caused the macOS builds not to appear.

This commit fixes the problem with a bit of extra locking.

It also renames all StatsInfo methods that should be called without
the lock to start with an initial underscore as this is the convention
we use elsewhere.

Fixes #7567
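A sketch of the underscore convention mentioned above, with illustrative fields:

```
package sketch

import "sync"

// statsInfo illustrates the locking convention: exported methods take
// the lock, while methods starting with an underscore assume the
// caller already holds it.
type statsInfo struct {
	mu      sync.Mutex
	average float64
}

func (s *statsInfo) ResetCounters() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s._stopAverageLoop()
}

// _stopAverageLoop must only be called with s.mu held.
func (s *statsInfo) _stopAverageLoop() {
	s.average = 0 // stand-in for stopping the averaging goroutine
}
```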
2024-01-17 10:23:50 +00:00
Nick Craig-Wood
d20f647487 Add Harshit Budhraja to contributors 2024-01-17 10:23:50 +00:00
Harshit Budhraja
6521394865 imagekit: Updated docs and web content 2024-01-16 18:25:25 +00:00
Nick Craig-Wood
42cac4cf53 build: use API when fetching golangci-lint as it is more reliable
This was turned off previously because we used it in the CI and it
got rate limited.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
223d8c5fe3 serve dlna: now only supported on go1.21 or later
This is due to the use of go1.21-only constructs in github.com/anacrolix/log
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
dd0e5b9a7f operations: use built in io.OffsetWriter for go1.20 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
da244a3709 ssh: shorten wait delay for external ssh binaries now that we are using go1.20
Now we are guaranteed to have go1.20 or later we can use the WaitDelay
flag when running external ssh binaries.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
938b43c26c build: remove random.Seed since random generator is seeded automatically in go1.20
Now that the minimum version is go1.20 we can stop seeding the random
number generator.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
13fb2fb2ec build: update to go1.22rc1 and make go1.20 the minimum required version 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
43cc2435c3 build: update indirect dependencies where possible 2024-01-15 16:18:42 +00:00
Nick Craig-Wood
1b1e43074f build: update direct dependencies and fix serve nfs
This updates the direct dependencies.

The latest github.com/willscott/go-nfs has changed the interface
slightly so this implements a dummy InvalidateHandle method in order
to satisfy it.
2024-01-15 16:18:42 +00:00
Nick Craig-Wood
cacfc100de docs: add warp.dev sponsorship to github home page 2024-01-15 11:57:27 +00:00
Nick Craig-Wood
f8c5695aed docs: add warp.dev as a sponsor 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
a5972fe0d1 docs: update website footer 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
184459ba8f vfs: fix stale data when using --vfs-cache-mode full
Before this change the VFS cache could get into a state where when an
object was updated remotely, the fingerprint of the item was correct
for the new object but the data in the VFS cache was for the old
object.

This fixes the problem by updating the fingerprint of the item at the
point we remove the stale data. The empty cache item now represents
the new item even though it has no data in.

This stops the fallback code for an empty fingerprint running (used
when we are writing items to the cache instead of reading them) which
was causing the problem.

Fixes #6053
See: https://forum.rclone.org/t/cached-webdav-mount-fingerprints-get-nuked-on-ls/43974/
2024-01-15 11:12:59 +00:00
Nick Craig-Wood
519fe98e6e azureblob: implement --azureblob-delete-snapshots
This flag controls what happens when we try to delete a blob with a
snapshot. The UI follows the azcopy tool.

See: https://forum.rclone.org/t/how-to-delete-undeleted-blobs-on-azure/43911/
2024-01-13 14:27:54 +00:00
Nick Craig-Wood
3df6518006 Add Nikhil Ahuja to contributors 2024-01-13 14:27:54 +00:00
Nikhil Ahuja
1045f54128 oracleobjectstorage: Support "backend restore" command - fixes #7371 2024-01-09 09:43:36 +00:00
dependabot[bot]
0563cc6314 build(deps): bump github.com/cloudflare/circl from 1.3.6 to 1.3.7
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.3.6 to 1.3.7.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.3.6...v1.3.7)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-08 17:38:09 +00:00
Nick Craig-Wood
e20f2eee59 Changelog updates from Version v1.65.1 2024-01-08 11:54:02 +00:00
Vincent Murphy
41b8935a6c docs: Fix broken test_proxy.py link again
The previous fix fixed the auto generated output - this fixes the source.
2024-01-08 11:54:02 +00:00
Nick Craig-Wood
fbdf71ab64 operations: fix files moved by rclone move not being counted as transfers
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
like it should do.

This change introduces a new primitive operations.MoveTransfers which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.

See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
d392f9fcd8 accounting: fix stats to show server side transfers
Before this fix we were not counting transferred files nor transferred
bytes for server side moves/copies.

If the server side move/copy has been marked as a transfer and not a
checker then this accounts transferred files and transferred bytes.

The transferred bytes are not accounted to the network though so this
should not affect the network stats.
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading
Before this change, sometimes when uploading files the onedrive
servers return 401 Unauthorized errors with the text "unauthenticated:
Unauthenticated".

This is because we are sending the Authorization header with the
request and it says in the docs that we shouldn't.

https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#remarks

> If you include the Authorization header when issuing the PUT call,
> it may result in an HTTP 401 Unauthorized response. Only send the
> Authorization header and bearer token when issuing the POST during
> the first step. Don't include it when you issue the PUT call.

This patch fixes the problem by doing the PUT request with an
unauthenticated client.

Fixes #7405
See: https://forum.rclone.org/t/onedrive-unauthenticated-when-trying-to-copy-sync-but-can-use-lsd/41149/
See: https://forum.rclone.org/t/onedrive-unauthenticated-issue/43792/
2024-01-07 11:14:08 +00:00
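Schematically, the fix amounts to issuing the PUT with a client that carries no OAuth transport. A hedged sketch (the URL and wiring are illustrative, not rclone's actual code):

```
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// uploadURL stands in for the pre-authenticated URL returned by the
	// POST that created the upload session (placeholder value).
	uploadURL := "https://example.com/upload-session"

	req, err := http.NewRequest(http.MethodPut, uploadURL, bytes.NewReader([]byte("data")))
	if err != nil {
		panic(err)
	}
	// Deliberately use a plain client (no OAuth transport) so no
	// Authorization header is attached, as the Graph docs require.
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```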
Nick Craig-Wood
1f6271fa15 s3: copy parts in parallel when doing chunked server side copy
Before this change rclone copied each chunk serially.

After this change it copies --s3-upload-concurrency parts at once.

See: https://forum.rclone.org/t/transfer-big-files-50gb-from-s3-bucket-to-another-s3-bucket-doesnt-starts/43209
2024-01-05 15:54:52 +00:00
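The pattern is a bounded worker fan-out. A rough sketch with a hypothetical copyPart standing in for the server side copy of one part (the real code uses rclone's multipart machinery):

```
package main

import (
	"fmt"
	"sync"
)

// copyPart is a hypothetical stand-in for copying one part server side.
func copyPart(n int) error {
	fmt.Println("copied part", n)
	return nil
}

func main() {
	const concurrency = 4 // stands in for --s3-upload-concurrency
	const parts = 10

	sem := make(chan struct{}, concurrency) // limits in-flight copies
	var wg sync.WaitGroup
	for n := 1; n <= parts; n++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(n int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			_ = copyPart(n)
		}(n)
	}
	wg.Wait()
}
```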
Nick Craig-Wood
c16c22d6e1 s3: fix crash if no UploadId in multipart upload
Before this change if the S3 API returned a multipart upload with no
UploadId then rclone would crash.

This detects the problem and attempts to retry the multipart upload
creation.

See: https://forum.rclone.org/t/panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/43425
2024-01-05 15:52:52 +00:00
Nick Craig-Wood
486a10bec5 serve s3: fix listing oddities
Before this change, listing a subdirectory gave errors like this:

    Entry doesn't belong in directory "" (contains subdir) - ignoring

It also did full recursive listings when it didn't need to.

This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.

Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.

Fixes #7500
2024-01-05 15:51:13 +00:00
Nick Craig-Wood
5fa13e3e31 protondrive: fix CVE-2023-45286 / GHSA-xwh9-gc39-5298
A race condition in go-resty can result in HTTP request body
disclosure across requests.

See: https://pkg.go.dev/vuln/GO-2023-2328
Fixes: #7491
2024-01-04 17:14:53 +00:00
Nick Craig-Wood
0e746f25a3 amazonclouddrive: remove Amazon Drive backend code and docs #7539
The Amazon Drive backend is closed from 2023-12-31.

See: https://www.amazon.com/b?ie=UTF8&node=23943055011
2024-01-04 17:05:54 +00:00
Nick Craig-Wood
578b9df6ea build: fix docker build on arm/v6
Unexpectedly the team which runs the Go docker images have removed the
arm/v6 image which means that the rclone docker images no longer
build.

One of the recommended fixes is what we've done here - switch to the
alpine builder. This has the advantage that it actually builds arm/v6
architecture, unlike the previous builder which built arm/v5.

See: https://github.com/docker-library/golang/issues/502
2024-01-03 17:43:23 +00:00
Nick Craig-Wood
208e49ce4b fs: update use of math/rand to modern practice 2024-01-03 16:14:40 +00:00
Nick Craig-Wood
7aa066cff8 Add Paul Stern to contributors 2024-01-03 16:14:40 +00:00
dependabot[bot]
64df4cf2db build(deps): bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795
Fixes SSH terrapin attack: see https://terrapin-attack.com.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 15:47:34 +00:00
rkonfj
451d7badf7 oauthutil: avoid panic when *token and *ts.token are the same
The field `raw` of `oauth2.Token` may be an uncomparable type (often map[string]interface{}), causing the `*token != *ts.token` expression to panic (comparing uncomparable type ...).

The semantics of comparing whether two tokens are the same can be achieved by comparing accessToken, refreshToken and expiry instead, which avoids the panic.
2024-01-03 15:15:14 +00:00
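A minimal sketch of the field-wise comparison (the helper name is illustrative; the field names are from golang.org/x/oauth2):

```
package main

import (
	"fmt"
	"time"

	"golang.org/x/oauth2"
)

// tokensEqual compares only the comparable fields, avoiding the panic
// that *a != *b can cause when a token's raw field holds a map.
func tokensEqual(a, b *oauth2.Token) bool {
	return a.AccessToken == b.AccessToken &&
		a.RefreshToken == b.RefreshToken &&
		a.Expiry.Equal(b.Expiry)
}

func main() {
	now := time.Now()
	a := &oauth2.Token{AccessToken: "x", RefreshToken: "y", Expiry: now}
	b := &oauth2.Token{AccessToken: "x", RefreshToken: "y", Expiry: now}
	fmt.Println(tokensEqual(a, b))
}
```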
WeidiDeng
d977fa25fa ftp: fix multi-thread copy
Before this change multi-thread copies using the FTP backend used to error with

    551 Error reading file

This was caused by a spurious error being reported which this code silences.

Fixes #7532
See #3942
2024-01-03 12:21:08 +00:00
Paul Stern
bb679a9def backend: add description field for all backends
Fixes #4391
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
a3d19942bd googlephotos: fix nil pointer exception when batch failed
This was a simple error check that was missing. Interestingly the
errcheck linter did not spot this.

See: https://forum.rclone.org/t/invalid-memory-address-or-nil-pointer-dereference-error-when-copy-to-google-photos/43634/
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
394195cfdf Add rarspace01 to contributors 2024-01-03 10:57:59 +00:00
nielash
3ca766b2f1 hasher: fix invalid memory address error when MaxAge == 0
When f.opt.MaxAge == 0, f.db is never set; however, several methods later assume
it is set and attempt to access it, causing an invalid memory address error.
This change fixes the issue in a few spots (there may still be others I haven't
yet encountered).
2024-01-02 18:14:01 +00:00
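The shape of the guard, as a hedged sketch (names hypothetical; the hasher backend's real types differ):

```
package main

import "fmt"

type db struct{}

func (d *db) lookup(key string) string { return "cached-" + key }

type fsLike struct {
	db *db // nil when MaxAge == 0, so every use must be guarded
}

func (f *fsLike) cachedHash(key string) (string, bool) {
	if f.db == nil { // the missing check that caused the crash
		return "", false
	}
	return f.db.lookup(key), true
}

func main() {
	f := &fsLike{} // simulates MaxAge == 0: no db set
	if _, ok := f.cachedHash("file.txt"); !ok {
		fmt.Println("no cache, falling back")
	}
}
```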
albertony
3bf8c877c3 docs/librclone: the newer and recommended ucrt64 subsystem of msys2 can now be used for building on windows 2024-01-01 21:56:45 +01:00
rarspace01
fba2d4c4a7 docs: fix broken link in serve webdav 2023-12-30 18:10:27 +01:00
Oksana
8503282a5a azure-files: fix storage base url
Documented in https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview
2023-12-18 14:15:13 +00:00
Manoj Ghosh
743ea6ac26 oracle object storage: fix object storage endpoint for custom endpoints 2023-12-15 10:13:35 +00:00
Nick Craig-Wood
c69eb84573 chunker,compress,crypt,hasher,union: fix rclone move a file over itself deleting the file
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.

Before this change it returned a root which included the file path.

Because Root() was wrong this caused the detection of the file being
moved over itself check to fail.

This adds an integration test to check it for all backends.

See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
2023-12-10 22:29:57 +00:00
Nick Craig-Wood
f98e672f37 selfupdate: fix crash in tests if beta not found 2023-12-10 22:29:57 +00:00
Nick Craig-Wood
242fe96b18 Add keongalvin to contributors 2023-12-10 22:29:57 +00:00
rkonfj
3f159bac16 backend: fs implements the Shutdowner interface
Since `tokenRenewer` adds a Shutdown method, we should call it to
clean up resources.

Changed backends: onedrive, box, pcloud, amazonclouddrive, hidrive,
jottacloud, sharefile, premiumizeme

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
rkonfj
6c58e9976c oauthutil: add Shutdown method
Before this change, calling the `oauthutil.NewRenew` func may
cause goroutine leaks.

This change adds a `Shutdown` method to allow the caller to exit
the goroutine to avoid leaks.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
keongalvin
110d07548f docs: fix broken link 2023-12-08 16:21:09 +00:00
Nick Craig-Wood
f45cee831f dropbox: fix used space on dropbox team accounts
Before this change we were not using the used space from the team
stats.

This patch uses that as the used space if available as it seems to
include the user stats in it.

See: https://forum.rclone.org/t/rclone-about-with-dropbox-reporte-size-incorrectly/43269/
2023-12-08 14:26:46 +00:00
Nick Craig-Wood
ef0f3020e4 vfs: note that --vfs-refresh runs in the background #6830 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
113b2b648c Add emyarod to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
57ab4d279e Add Anthony Metzidis to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
8e21c77ead Add Eli Orzitzer to contributors 2023-12-08 14:26:46 +00:00
emyarod
4751980659 docs: update contributor email 2023-12-08 11:21:26 +00:00
Anthony Metzidis
9fe343b725 s3: S3 IPv6 support with option "use_dual_stack" (bool)
Setting `use_dual_stack = true` enables IPv6 DNS lookup for S3 endpoints.
This adds Options.DualstackEndpoint in s3.go to support IPv6 on S3.
2023-12-08 11:11:47 +00:00
dependabot[bot]
2f5685b405 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-07 16:48:50 +00:00
Eli Orzitzer
c3117d9efb Doc change: Add the CreateBucket permission requirement for AWS S3 2023-12-07 16:46:04 +00:00
Nick Craig-Wood
1ebbc74f1d nfsmount: compile for all unix oses, add --sudo and fix error/option handling
- make it compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount
- improve error reporting
- fix option handling
2023-12-05 10:44:53 +00:00
Nick Craig-Wood
aee787d33e serve nfs: Mark as experimental 2023-12-05 10:44:53 +00:00
Anagh Kumar Baranwal
298c13e719 systemd: Fix detection and switch to the coreos package everywhere
rather than having 2 separate libraries

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-12-02 14:17:15 +00:00
Nick Craig-Wood
f0c774156e onedrive: fix error listing: unknown object type <nil>
This error was introduced in this commit when refactoring the list
routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2023-12-02 10:49:15 +00:00
Nick Craig-Wood
08c460dd1a Add ben-ba to contributors 2023-12-02 10:49:15 +00:00
ben-ba
e3d0bff9ca docs: fix typo in docs.md
- OpenChunkedWriter
+ OpenChunkWriter
2023-12-01 20:45:48 +01:00
Nick Craig-Wood
caf5dd9d5e mount: notice daemon dying much quicker
Before this change we waited until the timeout to check the
daemon was alive.

Now we check it every 100ms like we do the mount status.

This also fixes compiling on all platforms which was broken by the
previous change

9bfbf2a4a mount: fix macOS not noticing errors with --daemon

See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-12-01 09:36:05 +00:00
Nick Craig-Wood
97d7945cef Add halms to contributors 2023-12-01 09:36:05 +00:00
Manoj Ghosh
9061e81850 multipart copy: create bucket if it doesn't exist 2023-11-29 15:47:56 +00:00
halms
58339845f4 smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library now can pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2023-11-29 15:39:27 +00:00
Nick Craig-Wood
4d4f3de5a5 s3: add --s3-version-deleted to show delete markers in listings when using versions.
See: https://forum.rclone.org/t/s3-object-deletion-times/42781
2023-11-29 09:44:40 +00:00
Nick Craig-Wood
9bfbf2a4ae mount: fix macOS not noticing errors with --daemon
See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-11-28 19:42:00 +00:00
Nick Craig-Wood
96f8b7c827 install.sh: fix harmless error message on install
This was caused by trying to write to a non existent file, and
changing the order of the cleanup fixed it.

https://forum.rclone.org/t/rclone-v1-65-0-release/43100/18
2023-11-28 19:10:04 +00:00
Nick Craig-Wood
85f142a206 Start v1.66.0-DEV development 2023-11-26 17:14:38 +00:00
Nick Craig-Wood
82b963e372 Version v1.65.0 2023-11-26 16:07:39 +00:00
Nick Craig-Wood
74d5477fad onedrive: add --onedrive-delta flag to enable ListR
Before this change ListR was unconditionally enabled on onedrive.

This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.

Fixes #7362
2023-11-26 16:06:49 +00:00
Nick Craig-Wood
b5857f0bf8 smb: fix modtime of multithread uploads by setting PartialUploads
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.

Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.

This problem was detected by the new TestMultithreadCopy integration
tests

Fixes #7411
2023-11-25 18:46:48 +00:00
Nick Craig-Wood
edb5ccdd0b smb: fix about size wrong by switching to github.com/cloudsoda/go-smb2/ fork
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.

This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.

The new library passes the integration tests.

Fixes #6733
2023-11-25 18:45:41 +00:00
Nick Craig-Wood
0244caf13a serve s3: fix overwrite of files with 0 length file
Before this change overwriting an existing file with a 0 length file
didn't update the file size.

This change corrects the issue and makes sure the file is truncated
properly.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
aaa897337d serve s3: fix error handling for listing non-existent prefix - fixes #7455
Before this change serve s3 would return NoSuchKey errors when a non
existent prefix was listed.

This change fixes it to return an empty list like AWS does.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
e7c002adef test_all: make integration test for serve s3 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
9e62a74a23 Add Abhinav Dhiman to contributors 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
a10abf9934 Add 你知道未来吗 to contributors 2023-11-24 20:47:06 +00:00
Abhinav Dhiman
36eb3cd660 imagekit: Added ImageKit backend 2023-11-24 18:18:01 +00:00
你知道未来吗
fd2322cb41 fs/fshttp: fix --contimeout being ignored
The following command will block for 60s (default) when the network is slow or unavailable:

```
rclone  --contimeout 10s --low-level-retries 0 lsd dropbox:
```

This change will make it timeout after the expected 10s.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-11-24 17:53:33 +00:00
Nick Craig-Wood
4eed3ae99a s3: ensure we can set upload cutoff that we use for Rclone provider
This is a workaround to make the new multipart upload integration
tests pass.
2023-11-24 16:32:06 +00:00
Nick Craig-Wood
d8855b21eb serve s3: document multipart copy doesn't work #7454
This puts in a workaround for the tests also
2023-11-24 15:49:33 +00:00
Nick Craig-Wood
8f47b6746d b2: fix streaming chunked files an exact multiple of chunk size
Before this change, streaming files an exact multiple of the chunk
size would cause rclone to attempt to stream a 0 sized chunk which was
rejected by the b2 servers.

This bug was noticed by the new integration tests for chunked streaming.
2023-11-24 14:32:01 +00:00
Nick Craig-Wood
cc2a4c2e20 fstest: factor chunked streaming tests from b2 and use in all backends 2023-11-24 12:58:40 +00:00
Nick Craig-Wood
fabeb8e44e b2: fix server side chunked copy when file size was exactly --b2-copy-cutoff
Before this change the b2 servers would complain as this was only a
single part transfer.

This was noticed by the new integration tests for server side chunked copy.
2023-11-24 12:37:11 +00:00
Nick Craig-Wood
c27977d4d5 fstest: factor chunked copy tests from b2 and use them in s3 and oos 2023-11-24 12:37:11 +00:00
Nick Craig-Wood
d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.

This caused the partial file to be written which would overwrite any
existing files.

This was fixed by making sure we Abort the transfer before Close-ing
it.

This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.

This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).

Fixes #7071
2023-11-24 11:19:58 +00:00
Nick Craig-Wood
94ccc95515 random: stop using deprecated rand.Seed in go1.20 and later 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
5d5473c8a5 random: speed up String function for generating larger blocks 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
251a8e3c39 hash: allow runtime configuration of supported hashes for testing 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
a259226eb2 Add Alen Šiljak to contributors 2023-11-24 11:19:58 +00:00
Alen Šiljak
5fba502516 http: enable methods used with WebDAV - fixes #7444
Without this, requests like PROPFIND, issued from a browser, fail.
2023-11-23 16:49:03 +00:00
Nick Craig-Wood
ba11040d6b s3: detect looping when using gcs and versions
Apparently gcs doesn't return an S3 compatible result when using
versions.

In particular it doesn't return a NextKeyMarker - this means rclone
loops and fetches the same page over and over again.

This patch detects the problem and stops the infinite retries but it
doesn't fix the underlying problem.

See: https://forum.rclone.org/t/list-s3-versions-files-looping-bug/42974
See: https://issuetracker.google.com/u/0/issues/312292516
2023-11-23 09:50:28 +00:00
Nick Craig-Wood
668711e432 dropbox: fix missing encoding for rclone purge again
This commit fixed the problem but made the integration tests fail.

33376bf399 dropbox: fix missing encoding for rclone purge

This fixes the problem properly by making sure we send the encoded or
non encoded root to the right places.
2023-11-21 12:23:28 +00:00
Nick Craig-Wood
a71d181cb0 test_all: limit the Zoho tests to just the backend
The free account has a very ungenerous 1000 api calls per day limit
and the full integration test suite breaches that so limit the
integration tests to just the backend.
2023-11-21 12:06:31 +00:00
Nick Craig-Wood
cab42107f7 test_all: remove uptobox from integration tests
The uptobox service hasn't been running since 20 September 2023.

This removes it from the integration tests to save noise.
2023-11-21 11:49:39 +00:00
Nick Craig-Wood
1f9a79ef09 operations: use less memory when doing multithread uploads
For uploads which are coming from disk, going to disk, or going to a
backend which doesn't need to seek except for retries, this doesn't
buffer the input.

This dramatically reduces rclone's memory usage.

Fixes #7350
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
c0fb9ebfce operations: make Open() return an io.ReadSeekCloser #7350
As part of reducing memory usage in rclone, we need to have a raw
handle to an object we can seek with.
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
e8fcde8de1 fs: add ChunkWriterDoesntSeek feature flag and set it for b2 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
72dfdd97d8 mockobject: fix SetUnknownSize method to obey parameter passed in 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
bb88b8499b box: fix performance problem reading metadata for single files
Before this change the backend used to list the directory to find the
metadata for a single file. For lots of files in a directory this
caused a serious performance problem.

This change uses the preflight check to check for a files existence
and find its ID.

See: https://forum.rclone.org/t/psa-box-com-has-serious-performance-issues-in-directories-with-thousands-of-files/41128/10
See: https://forum.box.com/t/is-there-an-api-to-find-a-file-by-leaf-name-given-a-folder-id/997/
See: https://developer.box.com/guides/uploads/check/
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4ac5cb07ca gcs: fix 400 Bad request errors when using multi-thread copy
Before this change, on every Open, we added the userProject parameter
to the URL in the object.

This meant it grew and grew until Google returned Error 400 (Bad
Request) errors when the URL became too long.

This fixes the problem by adding the userProject parameter once.

See: https://forum.rclone.org/t/endlessly-repeating-userproject-parameter-in-get-to-google-storage-context-canceled-got-http-response-code-400/42652
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4a3e9bbabf http: implement set backend command to update running backend
See: https://forum.rclone.org/t/updating-the-url-of-http-remote-not-applied-on-mounts/42763
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
33376bf399 dropbox: fix missing encoding for rclone purge
This was causing directories with encodable characters in not to be
found on purge.

See: https://forum.rclone.org/t/purge-command-does-not-work-on-directories-with-files/42793
2023-11-20 18:07:05 +00:00
asdffdsazqqq
94b7c49196 Update docs to show SMB remote supports modtime 2023-11-20 17:50:28 +00:00
albertony
a7faf05393 docs: cleanup backend hashes sections 2023-11-20 17:43:57 +00:00
albertony
98a96596df docs: replace mod-time with modtime 2023-11-20 17:43:57 +00:00
Nick Craig-Wood
88bd80c1fa march: Fix excessive parallelism when using --no-traverse
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.

The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.

This became obvious when using the sftp backend which used too many
connections.

Fixes #5824
2023-11-20 17:36:31 +00:00
Nick Craig-Wood
c6755aa768 Add Mina Galić to contributors 2023-11-20 17:36:31 +00:00
Mina Galić
01be5c75be Makefile: use POSIX compatible install arguments
install -t doesn't exist on BSD.
Flip the arguments since we only have one.
2023-11-20 15:01:26 +00:00
Jacob Hands
20bd17f107 install.sh: Clean up temp files in install script 2023-11-20 15:00:08 +00:00
Nick Craig-Wood
64ec5709fe drive: fix integration tests by enabling metadata support from the context
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.

This patch changes it so the Metadata support is enabled dynamically
if it is set in the context.

This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
1ea8678be2 fstests: make sure Metadata is enabled in the context for metadata tests 2023-11-19 12:48:27 +00:00
Nick Craig-Wood
8341de05c6 Refresh CONTRIBUTING.md
- add dos and don'ts section to writing a new backend
- bring markdown up to modern style
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
47ca0c326e fs: implement --metadata-mapper to transform metadata with a user supplied program 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
54196f34e3 drive: fix error updating created time metadata on existing object
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.

This changes skips btime metadata if we are updating an existing
object but allows it otherwise.
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
9fdf3d548a drive: add read/write metadata support
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work controlled by flags
    - `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
10774d297a Add moongdal to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
bf9053705d Add viktor to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
0bd059ec55 Add karan to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
59d363b3c1 Add Oksana Zhykina to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
94a5de58c8 linkbox: pre-merge fixes
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism
2023-11-18 17:14:45 +00:00
viktor
a466ababd0 backend: add Linkbox backend
Add backend for linkbox.io with read and write capabilities

fixes #6960 #6629
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
168d577297 vfs: error out early if can't upload 0 length file
Before this change if a backend can't upload 0 length files and
`--vfs-cache-mode writes` was in use then the writeback logic would
try to upload the 0 length file forever.

This change causes it to exit on the first failure to upload.
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
ddaf01ece9 azurefiles: finish docs and implementation and add optional interfaces
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size`
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
    - About
    - Copy
    - DirMove
    - Move
    - OpenWriterAt
    - PutStream
- finish documentation
- disable build on plan9 and js

Fixes #365
Fixes #7378
2023-11-18 16:48:23 +00:00
karan
b5301e03a6 Implement Azure Files backend
Co-authored-by: moongdal <moongdal@tutanota.com>
2023-11-18 16:42:13 +00:00
Dimitri Papadopoulos
e9763552f7 fs: fix a typo in a comment 2023-11-16 17:15:00 +00:00
Oksana Zhykina
6b60e09ff2 quatrix: overwrite files on conflict during server-side move 2023-11-16 17:14:00 +00:00
Oksana Zhykina
41a52f50df quatrix: add partial upload support 2023-11-16 17:14:00 +00:00
Nick Craig-Wood
93f35c915a serve s3: pre-merge tweaks
- Changes
    - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
    - Enable `Content-MD5` integrity checks
    - Remove locking after code audit
- Documentation
    - Factor out documentation into separate file
    - Add Quickstart to docs
    - Add Bugs section to docs
    - Add experimental tag to docs
    - Add rclone provider to s3 backend docs
- Fixes
    - Correct quirks in s3 backend
    - Change fmt.Printlns into fs.Logs
    - Make metadata storage per backend not global
    - Log on startup if anonymous access is enabled
- Coding style fixes
    - rename fs to vfs to save confusion with the rest of rclone code
    - rename db to b for *s3Backend

Fixes #7062
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
a2c4f07a57 Add Saw-jan to contributors 2023-11-16 16:59:56 +00:00
Saw-jan
d3dcc61154 serve s3: fixes before merge
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
34ef5147aa Add Artur Neumann to contributors 2023-11-16 16:59:56 +00:00
Artur Neumann
aa29742be2 serve s3: fix file name encoding using s3 serve with mc client
Using the mc (minio) client, file name encodings were wrong.
See Mikubill/gofakes3#2 for details.
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
ef366b47f1 Add Mikubill to contributors 2023-11-16 16:59:55 +00:00
Mikubill
23abac2a59 serve s3: let rclone act as an S3 compatible server 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
d3ba32c43e s3: add --s3-disable-multipart-uploads flag 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
cdf5a97bb6 bin/update_authors.py: add authors from Co-authored-by: lines too 2023-11-16 16:59:55 +00:00
albertony
e1b0417c28 size: don't show duplicate object count when less than 1k 2023-11-14 16:44:12 +00:00
Nick Craig-Wood
acf1e2df84 lib/file: fix MkdirAll after go1.21.4 stdlib update
In this security-related issue the go1.21.4 stdlib changed the parsing
of volume names on Windows.

https://github.com/golang/go/issues/63713

This had the consequences of breaking the MkdirAll tests which were
looking for specific error messages which changed and using invalid
paths.

In particular under go1.21.3:

    filepath.VolumeName(`\\?\C:`) == `\\?\C:`

But under go1.21.4 it is:

    filepath.VolumeName(`\\?\C:`) == `\\?`

The path `\\?\C:` isn't actually a valid Windows path. I reported this
as a FYI bug upstream - I'm not expecting it to be fixed.

See: https://github.com/golang/go/issues/64101
2023-11-14 09:47:46 +00:00
Nick Craig-Wood
831d1df67f docs: factor large docs into separate .md files to make them easier to maintain.
We then use the go embed command to embed them back into the binary.
2023-11-13 16:27:09 +00:00
Nick Craig-Wood
e67157cf46 Add Tayo-pasedaRJ to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
ac012618db Add Adithya Kumar to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
7f09d9c2a0 Add wuxingzhong to contributors 2023-11-13 16:27:09 +00:00
Tayo-pasedaRJ
0548e61910 hdfs: added support for list of namenodes in hdfs remote config
Users can now input a comma separated list of namenodes when writing
config for hdfs remotes.

This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.

This was available before but wasn't documented and didn't use the
correct rclone interfaces.
2023-11-13 15:55:52 +00:00
Adithya Kumar
ad83ff769b webdav: added an rclone vendor to work with rclone serve webdav
Fixes #7160
2023-11-05 12:37:25 +00:00
albertony
ca14b00b34 docs: show hashsum arguments as optional in usage string 2023-11-03 23:31:00 +01:00
albertony
52d444f4a9 docs: document how to build with version info and icon resources on windows 2023-11-01 12:44:04 +01:00
albertony
4506f35f2e build: refactor version info and icon resource handling on windows
This makes it easier to add resources with any build method, and also when
building librclone.dll.

Goversioninfo is now used as a library, instead of running it as a tool.
2023-11-01 12:44:04 +01:00
wuxingzhong
4ab57eb90b serve dlna: fix crash on graceful exit
Before this change, closing an uninitialised chan would cause a crash.
2023-10-31 16:44:25 +00:00
Nick Craig-Wood
23ab6fa3a0 operations: fix server side copies on partial upload backends after refactor
After the copy refactor:

179f978f75 operations: refactor Copy into methods on a temporary object

There was some confusion in the code about server side copies - should
they or shouldn't they use partials?

This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.

This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
2023-10-30 16:50:19 +00:00
Nick Craig-Wood
af8ba18580 mount: disable mount for freebsd
The upstream library rclone uses for rclone mount no longer supports
freebsd. Not only is it broken, but it no longer compiles.

This patch disables rclone mount for freebsd.

However all is not lost for freebsd users - compiling rclone with the
`cmount` tag, so `go install -tags cmount` will install a working
`rclone mount` command which uses cgofuse and the libfuse C library
directly.

Note that the binaries from rclone.org will not have mount support as
we don't have a freebsd build machine in CI and it is very hard to
cross compile cmount.

See: https://github.com/bazil/fuse/issues/280
Fixes #5843
2023-10-29 15:46:41 +00:00
Nick Craig-Wood
0b90dd23c1 build: update all dependencies 2023-10-29 15:46:38 +00:00
Nick Craig-Wood
e64be7652a operations: fix invalid UTF-8 when truncating file names when not using --inplace
Before this change, when not using --inplace, rclone could generate
invalid file names when truncating file names to fit within the
character size limits.

This fixes it by taking care to truncate on UTF-8 character
boundaries.

See: https://forum.rclone.org/t/ssh-fx-failure-when-copying-file-with-nonstandard-characters-to-sftp-remote-with-ntfs-drive/42560/
2023-10-29 14:04:37 +00:00
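A small sketch of boundary-safe truncation using only the standard library (illustrative; not rclone's actual helper):

```
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateUTF8 shortens s to at most max bytes without splitting a rune.
func truncateUTF8(s string, max int) string {
	if len(s) <= max {
		return s
	}
	s = s[:max]
	// Back up until the remaining prefix is valid UTF-8.
	for len(s) > 0 && !utf8.ValidString(s) {
		s = s[:len(s)-1]
	}
	return s
}

func main() {
	fmt.Println(truncateUTF8("héllo wörld", 6)) // never emits a broken sequence
}
```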
Nick Craig-Wood
179f978f75 operations: refactor Copy into methods on a temporary object
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
2023-10-29 14:04:37 +00:00
Nick Craig-Wood
17b7ee1f3a operations: factor Copy into its own file 2023-10-29 14:04:37 +00:00
dependabot[bot]
5c73363b16 build(deps): bump google.golang.org/grpc from 1.56.2 to 1.56.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.56.2 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.56.2...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-28 18:31:35 +01:00
Nick Craig-Wood
bf21db0ac4 b2: fix multi-thread upload with copyto going to wrong name
See: https://forum.rclone.org/t/errors-and-failure-with-big-file-upload-to-b2/42522/
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
0180301b3f fstests: add integration test for OpenChunkWriter uploading to the wrong name 2023-10-28 15:18:00 +01:00
Nick Craig-Wood
adfb1f7c7d b2: fix error handler to remove confusing DEBUG messages
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:

    Couldn't decode error response: EOF

This is confusing as it is expected in normal operations and isn't an
error.

This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.

This also upgrades failure to read the body or failure to decode the
JSON to ERROR messages as now we are certain that we should have
something to read and decode.
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
6092fe2aaa s3: emit a debug message if anonymous credentials are in use
This can indicate the user is expecting `env_auth=true` to be the
default so we say that in the debug message.

See: https://forum.rclone.org/t/rclone-with-amazon-s3-access-point/42411
2023-10-27 16:00:47 +01:00
Nick Craig-Wood
53868ef4e1 ncdu: fix crash when re-entering changed directory after rescan
ncdu stores the position that it was in for each directory. However
doing a rescan can cause those positions to be out of range if the
number of files decreased in a directory. When re-entering the
directory, this causes an index out of range error.

This fixes the problem by detecting the index out of range and
flushing the saved directory position.

See: https://forum.rclone.org/t/slice-bounds-out-of-range-during-ncdu/42492/
2023-10-24 14:26:57 +01:00
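The shape of the fix, sketched with hypothetical names:

```
package main

import "fmt"

// restorePos returns a saved cursor position, flushing it if a rescan
// shrank the directory so the index would now be out of range.
func restorePos(saved, entries int) int {
	if saved >= entries {
		return 0 // stale position: fall back to the top
	}
	return saved
}

func main() {
	fmt.Println(restorePos(7, 3)) // directory shrank after rescan
	fmt.Println(restorePos(2, 3)) // still in range: keep it
}
```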
Nick Craig-Wood
e1ad467009 fs: fix docs for Bits 2023-10-23 15:43:55 +01:00
Nick Craig-Wood
12db7b6935 fs: add IsSet convenience method to Bits 2023-10-23 15:43:42 +01:00
Nick Craig-Wood
7434ad8618 docs: remove third party logos from source tree 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
e4ab59bcc7 docs: update Storj image and link 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
9119c6c76f Add alfish2000 to contributors 2023-10-23 15:35:25 +01:00
alfish2000
9d4d294793 union: fix documentation 2023-10-21 10:37:43 +01:00
Nick Craig-Wood
750ed556a5 build: fix new lint errors with golangci-lint v1.55.0 2023-10-20 18:53:30 +01:00
Nick Craig-Wood
5b0d3d060f selfupdate: make sure we don't run tests if selfupdate is set 2023-10-20 18:14:27 +01:00
Nick Craig-Wood
5b0f9dc4e3 local: fix copying from Windows Volume Shadows
For some files the Windows Volume Shadow Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.

Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.

This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we now can copy it
successfully.

See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
2023-10-19 16:38:10 +01:00
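A minimal sketch of stat-ing immediately before opening, so the size used for the copy matches what can actually be read (stdlib only; the path is a placeholder):

```
package main

import (
	"fmt"
	"os"
)

func main() {
	path := "example.txt" // placeholder path

	// Stat immediately before Open so the recorded size reflects the
	// bytes that will really be readable (the VSS case above).
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open failed:", err)
		return
	}
	defer f.Close()
	fmt.Printf("copying %d bytes from %s\n", fi.Size(), path)
}
```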
Nick Craig-Wood
b0a87d7cf1 Changelog updates from Version 1.64.2 2023-10-19 12:34:34 +01:00
Nick Craig-Wood
37d786c82a selfupdate: fix "invalid hashsum signature" error
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.

However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and then the
`change-usage` subcommand to remove, save, quit, then re-add, save and
quit the signing capabilities caused the key to work.

This also adds tests for the verification and adds the selfupdate
tests into the integration test harness as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.

Fixes #7373
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
56fe12c479 build: add the serve docker tests to the integration tester
These had been disabled on CI for being unreliable, so test them in
the integration tests framework which will retry them.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
9197180610 build: fix docker build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
f4a538371d Add Ivan Yanitra to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
f2ec08cba2 Add Keigo Imai to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
8f25531b7f Add Gabriel Espinoza to contributors 2023-10-18 17:55:10 +01:00
Ivan Yanitra
0ee6d0b4bf azureblob: add support for cold tier 2023-10-18 17:54:25 +01:00
Keigo Imai
4ac4597afb drive: add a note that --drive-scope accepts comma-separated list of scopes 2023-10-18 17:54:08 +01:00
Joda Stößer
143df6f6d2 docs: change authors email for SimJoSt 2023-10-18 16:31:15 +01:00
Nick Craig-Wood
8264ba987b Changelog updates from Version 1.64.1 2023-10-17 18:37:04 +01:00
Gabriel Espinoza
7a27d9a192 lib/http: export basic go strings functions
Makes the following Go strings functions available to be used in custom templates: contains, hasPrefix, hasSuffix.

Added documentation for the exported funcs.
2023-10-16 19:46:19 +01:00
albertony
195ad98311 docs: update documentation for --fast-list adding info about ListR 2023-10-16 18:11:22 +02:00
Nick Craig-Wood
29baa5888f mount: fix automount not detecting drive is ready
With automount the target mount drive appears twice in /proc/self/mountinfo.

    379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
    566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...

Before this fix we only looked for the mount once in
/proc/self/mountinfo. This finds the automount line, and since that
doesn't have fs type rclone it concludes the mount isn't ready yet.

This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.

See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
2023-10-16 12:13:20 +01:00
Nick Craig-Wood
c7a2719fac sftp: implement --sftp-copy-is-hardlink to server side copy as hardlink
If the server does not support hardlinks then it falls back to normal
copy.

See: https://forum.rclone.org/t/sftp-remote-server-side-copy/41867
2023-10-16 12:08:22 +01:00
Nick Craig-Wood
c190b9b14f serve sftp: return not supported error for not supported commands
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.

This changes any unknown operations (including hardlink) to return an
unsupported error.
2023-10-16 12:08:22 +01:00
Nick Craig-Wood
5fa68e9ca5 b2: fix chunked streaming uploads
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.

After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing the first and the second parts up which
was causing corrupted uploads.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

Fixes #7367
2023-10-13 15:46:36 +01:00
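The off-by-one, sketched: with 0-based internal counting, the wire part number must be index+1, and iterating by offset naturally avoids emitting a trailing 0-byte part when the size is an exact multiple of the chunk size (toy numbers, not the b2 code):

```
package main

import "fmt"

func min64(a, b int64) int64 {
	if a < b {
		return a
	}
	return b
}

func main() {
	size := int64(100)
	chunkSize := int64(25) // size is an exact multiple of chunkSize

	for off, idx := int64(0), 0; off < size; off, idx = off+chunkSize, idx+1 {
		apiPart := idx + 1 // 0-based index -> 1-based wire part number
		end := min64(off+chunkSize, size)
		fmt.Printf("part %d: bytes %d-%d\n", apiPart, off, end-1)
	}
}
```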
Nick Craig-Wood
b9727cc6ab build: upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset
Vulnerability1: GO-2023-2102

HTTP/2 rapid reset can cause excessive work in net/http

More info: https://pkg.go.dev/vuln/GO-2023-2102
2023-10-12 17:44:16 +01:00
Nick Craig-Wood
d8d76ff647 b2: fix server side copies greater than 4GB
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
2023-10-12 11:19:56 +01:00
Nick Craig-Wood
5afa838457 cmd: Make --progress output logs in the same format as without
See: https://forum.rclone.org/t/using-progress-change-dates-from-2023-10-05-to-2023-10-05/42173
2023-10-11 11:36:31 +01:00
Nick Craig-Wood
2de084944b operations: fix error message on delete to have file name - fixes #7355 2023-10-11 11:34:11 +01:00
Vitor Gomes
48a8bfa6b3 operations: fix OpenOptions ignored in copy if operation was a multiThreadCopy 2023-10-11 11:19:03 +01:00
Nick Craig-Wood
d3ce795c30 build: fix docker beta build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-10 15:59:07 +01:00
Nick Craig-Wood
c04657cd4c Add Volodymyr to contributors 2023-10-10 15:59:07 +01:00
Volodymyr
6255d9dfaa operations: implement --partial-suffix to control extension of temporary file names 2023-10-10 12:27:32 +01:00
Nick Craig-Wood
f56ea2bee2 s3: fix no error being returned when creating a bucket we don't own
Before this change if you tried to create a bucket that already
existed, but someone else owned then rclone did not return an error.

This now will return an error on providers that return the
AlreadyOwnedByYou error code or no error on bucket creation of an
existing bucket owned by you.

This introduces a new provider quirk and this has been set or cleared
for as many providers as can be tested. This can be overridden by the
--s3-use-already-exists flag.

Fixes #7351
2023-10-09 18:15:02 +01:00
Nick Craig-Wood
d6ba60c04d oracleobjectstorage: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:13:42 +01:00
Vitor Gomes
37eaa3682a s3: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:12:56 +01:00
Nick Craig-Wood
c5f6fc3283 drive: add --drive-show-all-gdocs to allow unexportable gdocs to be server side copied
Before this change, attempting to server side copy a google form would
give this error

    No export formats found for "application/vnd.google-apps.form"

Adding this flag allows the form to be server side copied but not
downloaded.

Fixes #6302
2023-10-09 16:53:03 +01:00
Nick Craig-Wood
4daf755da0 Add Saleh Dindar to contributors 2023-10-09 16:53:03 +01:00
Nick Craig-Wood
eee8ad5146 Add Beyond Meat to contributors 2023-10-09 16:53:03 +01:00
Saleh Dindar
bcb3289dad nfsmount: documentation for new NFS mount feature for macOS 2023-10-06 14:08:20 +01:00
Saleh Dindar
ef2ef8ef84 nfsmount: New mount command to provide mount mechanism on macOS without FUSE
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it to the target dir.

The NFS server is started on localhost on a random port so it is reasonably secure.

Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```

Added mount tests:
```
go test ./cmd/nfsmount
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
c69cf46f06 serve nfs: new serve nfs command
Summary:
Adding a new command to serve any remote over NFS. This is only useful for new macOS versions where FUSE mounts are not available.
 * Added willscott/go-nfs dependency and updated go.mod and go.sum

Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```

Test that it is serving correctly by mounting the NFS directory.

```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```

Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
25f59b2918 vfs: Add go-billy dependency and make sure vfs.Handle implements billy.File
billy defines a common file system interface that is used in multiple go packages.
vfs.Handle mostly implements billy.File; only two methods needed to be added to
make it compliant.

An interface check is added as well.

This is a preliminary work for adding serve nfs command.
2023-10-06 14:08:20 +01:00
Saleh Dindar
7801b160f2 vfs: [bugfix] Update dir modification time
This fixes a subtle bug where the dir modification time was not updated when the
dir already existed in the cache. It is only noticeable when some clients use dir
modification time to invalidate caches.
2023-10-06 14:08:20 +01:00
Saleh Dindar
23f8dea182 vfs: [bugfix] Implement Name() method in WriteFileHandle and ReadFileHandle
The Name() method was originally left out and defaulted to the base
class, which always returns empty. This triggered incorrect behavior
in serve nfs, which relied on the Name() of the interface to figure
out what file it was modifying.

This method is copied from RWFileHandle struct.

Added extra assert in the tests.
2023-10-06 14:08:20 +01:00
Beyond Meat
3337fe31c7 vfs: add --vfs-refresh flag to read all the directories on start
Refreshes the directory listing recursively at VFS start time.
2023-10-06 13:11:09 +01:00
Nick Craig-Wood
a752563842 operations: add operations/check to the rc API
Fixes #7015
2023-10-04 17:52:57 +01:00
Nick Craig-Wood
c12085b265 operations: close file in TestUploadFile test so it can be deleted on Windows 2023-10-04 17:52:57 +01:00
Nick Craig-Wood
3ab9077820 googlephotos: implement batcher for uploads - fixes #6920 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
b94806a143 dropbox: factor batcher into lib/batcher 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
55d10f4d25 fs: re-implement DumpMode with Bits
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output DumpMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
2023-10-03 15:24:09 +01:00
Nick Craig-Wood
75745fcb21 fs: create fs.Bits for easy creation of parameters from a bitset of choices 2023-10-03 15:24:09 +01:00
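A hedged sketch of what a Bits-style flag set can look like (illustrative only, not rclone's fs.Bits implementation):

```
package main

import (
	"fmt"
	"strings"
)

type bits uint32

const (
	dumpHeaders bits = 1 << iota
	dumpBodies
	dumpAuth
)

var bitNames = []struct {
	bit  bits
	name string
}{
	{dumpHeaders, "headers"},
	{dumpBodies, "bodies"},
	{dumpAuth, "auth"},
}

// String renders the set bits as a comma separated list - this is what
// makes string output (e.g. in options/get) human friendly.
func (b bits) String() string {
	var out []string
	for _, bn := range bitNames {
		if b&bn.bit != 0 {
			out = append(out, bn.name)
		}
	}
	return strings.Join(out, ",")
}

// IsSet reports whether all bits in mask are set.
func (b bits) IsSet(mask bits) bool { return b&mask == mask }

func main() {
	d := dumpHeaders | dumpAuth
	fmt.Println(d, d.IsSet(dumpBodies)) // headers,auth false
}
```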
Nick Craig-Wood
1cc22da87d vfs: re-implement CacheMode with fs.Enum
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CacheMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
2023-10-03 15:14:24 +01:00
Nick Craig-Wood
3092f82dcc fs: re-implement CutoffMode, LogLevel, TerminalColorMode with Enum
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CutoffMode, LogLevel, TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. They still accept integer inputs though so
the fallout from this should be minimal.
2023-10-03 15:14:24 +01:00
Nick Craig-Wood
60a6ef914c fs: create fs.Enum for easy creation of parameters from a list of choices 2023-10-03 15:14:24 +01:00
Nick Craig-Wood
3553cc4a5f fs: fix option types printing incorrectly for backend flags
Before this change backend types were printing incorrectly as the name
of the type, not what was defined by the Type() method.

This was not working due to not calling the Type() method. However
this needed to be defined on a non-pointer type due to the way the
options are handled.
2023-10-03 11:23:58 +01:00
Nick Craig-Wood
b8591b230d onedrive: implement ListR method which gives --fast-list support
This implements ListR for onedrive. The API only allows doing this at
the root, so it is inefficient to use it anywhere other than the root.

Fixes #7317
2023-10-02 11:12:08 +01:00
Nick Craig-Wood
ecb09badba onedrive: factor API types back into correct file 2023-10-02 10:48:06 +01:00
Nick Craig-Wood
cb43e86d16 b2: reduce default --b2-upload-concurrency to 4 to reduce memory usage
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`

However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.

The default value for this was accidentally set quite high at 16 which
means by default rclone could use up to 6.4GB of memory!

The new default sets a more reasonable (but still high) max memory of 1.6GB.
2023-10-01 12:30:26 +01:00
Nick Craig-Wood
5c48102ede b2: fix locking window when getting multipart upload URL
Before this change, the lock was held while the upload URL was being
fetched from the server.

This meant that any other threads were blocked from getting upload
URLs unnecessarily.

It also increased the potential for deadlock.
2023-10-01 12:30:26 +01:00
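The general shape of narrowing a lock window, sketched with a hypothetical fetchUploadURL standing in for the network round trip:

```
package main

import (
	"fmt"
	"sync"
)

var (
	mu   sync.Mutex
	pool []string // reusable upload URLs
)

// fetchUploadURL is a hypothetical network call; it must not run under mu.
func fetchUploadURL() string { return "https://example.com/upload" }

func getUploadURL() string {
	mu.Lock()
	if n := len(pool); n > 0 {
		url := pool[n-1]
		pool = pool[:n-1]
		mu.Unlock()
		return url
	}
	mu.Unlock()
	// The round trip happens outside the lock, so other goroutines are
	// not blocked while we wait on the server.
	return fetchUploadURL()
}

func main() {
	fmt.Println(getUploadURL())
}
```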
Nick Craig-Wood
96438ff259 pacer: fix b2 deadlock by defaulting max connections to unlimited
Before this change, the maximum number of connections was set to 10.

This means that b2 could deadlock while uploading multipart uploads
due to a lock being held longer than it should have been.
2023-10-01 12:30:26 +01:00
albertony
c1df3ce08c docs: add utime (time of file upload) to standard system metadata 2023-09-29 13:19:57 +02:00
albertony
19ad39fa1c jottacloud: add support for reading and writing metadata
Most useful is the addition of the file created timestamp, but also a timestamp for
when the file was uploaded.

Currently supporting a rather minimalistic set of metadata items, see PR #6359 for
some thoughts about possible extensions.
2023-09-29 13:19:57 +02:00
Nick Craig-Wood
b296f37801 s3: fix slice bounds out of range error when listing
In this commit:

5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers

We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:

    panic: runtime error: slice bounds out of range [56:55]

The fix is to do the modification of the remote after removing the
prefix.

See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
2023-09-25 11:52:23 +01:00
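Schematically, the fix is an ordering change: strip the prefix first, then modify the remote (a toy sketch, not the s3 listing code):

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	prefix := "dir/"
	remote := "dir/subdir/"

	if strings.HasPrefix(remote, prefix) {
		// Strip the prefix before any other modification. Modifying the
		// remote first can shrink it below len(prefix), which is what
		// produced the slice-bounds panic described above.
		remote = strings.TrimPrefix(remote, prefix)
		remote = strings.TrimSuffix(remote, "/") // example modification
	}
	fmt.Println(remote) // subdir
}
```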
Nick Craig-Wood
23e44c6065 Add rinsuki to contributors 2023-09-25 11:52:23 +01:00
rinsuki
8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum 2023-09-24 17:38:30 +01:00
Nick Craig-Wood
9e80d48b03 s3: add docs on how to add a new provider 2023-09-23 14:36:48 +01:00
Nick Craig-Wood
eb3082a1eb s3: add Linode provider 2023-09-23 14:34:00 +01:00
Nick Craig-Wood
77ea22ac5b s3: Factor providers list out and auto generate textual version 2023-09-23 14:34:00 +01:00
Nick Craig-Wood
9959712a06 docs: fix backend doc generator to not output duplicate config names
This was always the intention, it was just implemented wrong.

This shortens the s3 docs by 1369, bringing them down to just about
half the size.

Fixes #7325
2023-09-23 12:54:08 +01:00
Nick Craig-Wood
7586fecbca Add Nikita Shoshin to contributors 2023-09-23 12:54:08 +01:00
Nikita Shoshin
94cdb00eb6 rcserver: set Last-Modified header for files served by --rc-serve 2023-09-23 12:20:29 +01:00
Dimitri Papadopoulos Orfanos
3d473eb54e docs: fix typos found by codespell in docs and code comments 2023-09-23 12:20:01 +01:00
Nick Craig-Wood
50b4a2398e onedrive: fix the configurator to allow /teams/ID in the config
See: https://forum.rclone.org/t/sharepoint-to-google/41548/
2023-09-22 15:54:22 +01:00
Nick Craig-Wood
e6b718c938 build: add btesth target to output beta log in HTML for email pasting 2023-09-21 16:15:48 +01:00
Nick Craig-Wood
9370dbcc47 lsjson: make sure we set the global metadata flag too 2023-09-21 16:15:38 +01:00
Nick Craig-Wood
8c1e9a2905 rc: always report an error as JSON
Before this change, the rclone rc command wouldn't actually report the
error as a JSON blob, which is inconsistent with what the HTTP API does.

This change makes sure we always report a JSON error, making a
synthetic one if necessary.

See: https://forum.rclone.org/t/when-using-rclone-rc-commands-somehow-return-errors-as-parsable-json/41855
Co-authored-by: Fawzib Rojas
2023-09-20 21:57:40 +01:00
Nick Craig-Wood
6072d314e1 b2: fix multipart upload: corrupted on transfer: sizes differ XXX vs 0
Before this change the b2 backend wasn't writing the metadata to the
object properly after a multipart upload.

The symptom of this was that sometimes it would give the error:

    corrupted on transfer: sizes differ XXX vs 0

This was fixed by returning the metadata in the chunk writer and setting it in Update.

See: https://forum.rclone.org/t/multipart-upload-to-b2-sometimes-failing-with-corrupted-on-transfer-sizes-differ/41829
2023-09-18 20:41:31 +01:00
Nick Craig-Wood
9277ca1e54 b2: implement --b2-lifecycle to control lifecycle when creating buckets 2023-09-16 17:01:43 +01:00
Nick Craig-Wood
d6722607cb b2: implement "rclone backend lifecycle" to read and set bucket lifecycles 2023-09-16 16:44:28 +01:00
Nick Craig-Wood
4ef30db209 b2: fix listing all buckets when not needed
Before this change the b2 backend listed all the buckets to turn a
single bucket name into an ID.

However on July 26, 2018 a parameter was added to the list buckets API
to make listing all the buckets unnecessary.

This code sets the bucketName parameter so that only the results for
the desired bucket are returned.
2023-09-16 16:04:50 +01:00
Nick Craig-Wood
55c12c9a2d azureblob: fix "fatal error: concurrent map writes"
Before this change, the metadata map could be accessed from multiple
goroutines at once, sometimes causing this error.

This fix adds a global mutex for adjusting the metadata map to make
all accesses safe.

See: https://forum.rclone.org/t/azure-blob-storage-with-vfs-cache-concurrent-map-writes-exception/41686
2023-09-16 11:33:03 +01:00
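A minimal sketch of the approach, assuming a hypothetical object type; the point is that every write to the shared metadata map goes through one mutex, which is what prevents the concurrent-map-writes crash.

```go
package main

import (
	"fmt"
	"sync"
)

type object struct {
	mu   sync.Mutex
	meta map[string]string
}

// setMeta serialises all writes to the metadata map.
func (o *object) setMeta(k, v string) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.meta == nil {
		o.meta = make(map[string]string)
	}
	o.meta[k] = v
}

func main() {
	o := &object{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Without the mutex this would eventually crash with
			// "fatal error: concurrent map writes".
			o.setMeta(fmt.Sprint(i), "v")
		}(i)
	}
	wg.Wait()
	fmt.Println(len(o.meta)) // 10
}
```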
dependabot[bot]
4349dae784 build(deps): bump docker/setup-qemu-action from 2 to 3
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-15 10:25:14 +01:00
David Sze
3e63f2c249 box: add more logging for polling 2023-09-15 10:24:43 +01:00
David Sze
5118ab9609 box: filter more EventIDs when polling 2023-09-15 10:24:43 +01:00
dependabot[bot]
7ea118aeae build(deps): bump docker/setup-buildx-action from 2 to 3
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-15 09:15:26 +01:00
Kaloyan Raev
af260921c0 storj: update storj.io/uplink to v1.12.0
The improved upload logic is active by default in uplink v1.12.0, so the
`testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)` is not
required anymore.

See https://github.com/rclone/rclone/pull/7198
2023-09-14 14:01:35 +01:00
Nick Craig-Wood
62db2bb329 docs: add notes on how to update the website between releases 2023-09-13 17:02:29 +01:00
Nick Craig-Wood
8d58adbd54 docs: remove minio sponsor box for the moment 2023-09-13 17:02:29 +01:00
Nick Craig-Wood
b4d251081f docs: update Storj partner link 2023-09-13 17:02:29 +01:00
Nick Craig-Wood
a382bf8f03 Add Herby Gillot to contributors 2023-09-13 17:02:29 +01:00
Nick Craig-Wood
28fc43fb11 Add Pat Patterson to contributors 2023-09-13 17:02:29 +01:00
Herby Gillot
83be1501db docs: add MacPorts install info
https://ports.macports.org/port/rclone/
2023-09-13 16:06:15 +01:00
dependabot[bot]
be156133c5 build(deps): bump docker/metadata-action from 4 to 5
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 4 to 5.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-13 15:56:38 +01:00
dependabot[bot]
d494db78d9 build(deps): bump docker/login-action from 2 to 3
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-13 15:55:58 +01:00
dependabot[bot]
1b9eb74204 build(deps): bump docker/build-push-action from 4 to 5
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 4 to 5.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-13 15:55:23 +01:00
Manoj Ghosh
cda09704a8 fix overview of oracle object storage as it supports multithreaded uploads 2023-09-13 12:14:07 +01:00
Pat Patterson
71f2883562 operations: ensure concurrency is no greater than the number of chunks - fixes #7299 2023-09-13 12:12:27 +01:00
Nick Craig-Wood
d29d263329 docs: fix minimum Go version and update to 1.18
Fixes #7288
2023-09-12 11:24:17 +01:00
Nick Craig-Wood
20126de1aa Start v1.65.0-DEV development 2023-09-12 11:24:17 +01:00
Nick Craig-Wood
77f7bb08af Version v1.64.0 2023-09-11 15:59:44 +01:00
Nick Craig-Wood
a5a61f4874 protondrive: make cached keys rclone style and not show with rclone config redacted 2023-09-11 15:57:08 +01:00
Nick Craig-Wood
e8879f3e77 docs: document release signing and verification
Fixes #7257
2023-09-11 12:28:23 +01:00
Nick Craig-Wood
2a6675cffd docs: fix typo in rc docs - fixes #7287 2023-09-11 09:34:22 +01:00
Nick Craig-Wood
fa4d171f62 protondrive: complete docs with all references to Proton Drive 2023-09-10 11:57:39 +01:00
Nick Craig-Wood
d4d530bd8e drive: add --drive-fast-list-bug-fix to control ListR bug workaround
See: https://forum.rclone.org/t/how-to-list-empty-directories-recursively/40995/12
2023-09-09 17:46:03 +01:00
Nick Craig-Wood
f4b011e4e4 s3: add rclone backend restore-status command
This command shows the restore status of objects being retrieved from GLACIER.

See: https://forum.rclone.org/t/aws-s3-glacier-monitor-restore-status-command-for-glacier-restoring-process/41373/7
2023-09-09 17:44:36 +01:00
Nick Craig-Wood
d80890bf32 Add Drew Stinnett to contributors 2023-09-09 17:44:36 +01:00
Nick Craig-Wood
39392d70dd Add David Pedersen to contributors 2023-09-09 17:44:36 +01:00
Drew Stinnett
643386f026 rc: Add operations/settier to API 2023-09-09 17:41:02 +01:00
Chun-Hung Tseng
ed755bf04f protondrive: implement two-password mode (#7279) 2023-09-08 22:54:46 +02:00
David Pedersen
071c3f28e5 vfs: Update parent directory modtimes on vfs actions
This isn't written back to the storage, so it might change on remount,
but it makes the VFS just a little more POSIX compatible.
2023-09-08 17:19:52 +01:00
Nick Craig-Wood
7453b7d5f3 hdfs: fix retry "replication in progress" errors when uploading
Before this change uploaded files could return the error "replication
in progress".

This error is harmless though, and means the Close should be retried,
which is what this patch does.
2023-09-08 15:35:50 +01:00
Nick Craig-Wood
c9350149d8 hdfs: fix uploading to the wrong object on Update with overridden remote name
In this commit we discovered a problem with objects being uploaded
under the incorrect object name, and added an integration test for
it:

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the hdfs backend and this patch fixes the
problem.
2023-09-08 15:35:50 +01:00
Nick Craig-Wood
08789a5815 test_all: remove filefabric from integration tests
The filefabric test server doesn't seem to be working at all so remove
it from the integration tests pending revitalization.
2023-09-08 15:35:50 +01:00
Nick Craig-Wood
4037af9c1a Add Oksana and Volodymyr Kit to contributors 2023-09-08 15:35:50 +01:00
Oksana
628ff8e524 quatrix: add backend to support Quatrix
Co-authored-by: Volodymyr Kit <v.kit@maytech.net>
2023-09-08 14:31:29 +01:00
Chun-Hung Tseng
578c75cb1e protondrive: fix signature verification logic by accounting for legacy signing scheme (#7278) 2023-09-08 16:00:34 +08:00
Nick Craig-Wood
63ab250817 vfs: add --vfs-cache-min-free-space to control minimum free space on the disk containing the cache
See: https://forum.rclone.org/t/rclone-fails-to-control-disk-usage-and-its-filling-the-disk-to-100/41494/
2023-09-07 15:57:45 +01:00
Nick Craig-Wood
39f910a65d rc: add core/du to measure local disk usage 2023-09-07 15:57:45 +01:00
Nick Craig-Wood
0fb36562dd Add lib/diskusage to measure used/free on disks 2023-09-07 15:57:45 +01:00
Nick Craig-Wood
8c25a15a40 Add zjx20 to contributors 2023-09-07 15:57:45 +01:00
zjx20
f5ee16e201 local: rmdir: return an error if the path is not a dir 2023-09-07 14:30:08 +01:00
Nick Craig-Wood
2bcbed30bd s3: implement backend set command to update running config 2023-09-07 12:26:48 +01:00
Chun-Hung Tseng
5026a9171d protondrive: improves 2fa and draft error messages (#7280) 2023-09-07 01:50:28 +08:00
Nick Craig-Wood
b750c50bfd zoho: remove Range requests workarounds to fix integration tests
Zoho are now responding to Range requests properly. The remnants of
our old workaround were breaking the integration tests, so this
removes them.
2023-09-05 18:21:15 +01:00
Nick Craig-Wood
535acd0483 fstests: fix PublicLink failing on storj
Storj requires a minimum duration of 1 minute for the link expiry so
increase what we are asking for from 1 minute to 2 minutes.
2023-09-05 18:01:37 +01:00
Nick Craig-Wood
db37b3ef9e opendrive: fix List on a just deleted and remade directory
Sometimes opendrive reports "403 Folder is already deleted" on
directories which should exist.

This might be a bug in opendrive or in rclone; however, the
work-around here is sufficient to get the tests passing.
2023-09-05 17:59:03 +01:00
Nick Craig-Wood
257607ab3d operations: fix TestCopyFileMaxTransfer test to not be quite so fussy 2023-09-05 17:42:17 +01:00
Nick Craig-Wood
3ea1c5c4d2 compress: fix ChangeNotify
ChangeNotify has been broken on the compress backend for a long time!

Before this change it was wrapping the file names received rather than
unwrapping them to discover the original names.

It is likely that ChangeNotify was working adequately for users
though, as the VFS just uses the directories rather than the file
names.
2023-09-05 17:22:36 +01:00
Nick Craig-Wood
bd23ea028e azureblob: fix purging with directory markers 2023-09-05 17:07:44 +01:00
Nick Craig-Wood
c58d4fe939 test_all: ignore Rmdirs test failure on b2 as it fails because of versions 2023-09-05 15:47:14 +01:00
nielash
ddc7059a73 Add @nielash as bisync maintainer 2023-09-05 04:05:39 -04:00
dependabot[bot]
2677c43f26 build(deps): bump actions/checkout from 3 to 4
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-09-05 08:55:32 +01:00
nielash
48ab67f090 bisync: fix dryRun rc parameter being ignored
Before this change, bisync ignored the dryRun parameter (but only when
specified via the rc).

This change fixes the issue, so that the dryRun rc parameter is equivalent to
the --dry-run flag.
2023-09-05 08:53:58 +01:00
nielash
089df7d977 bisync: add rc parameters for new flags
Added rc support for the flags recently introduced in #6971.

createEmptySrcDirs
ignoreListingChecksum
resilient
2023-09-05 08:53:58 +01:00
Nick Craig-Wood
4fbe0652c9 compress: fix integration tests by adding missing OpenChunkWriter exclude 2023-09-04 19:26:14 +01:00
Nick Craig-Wood
47665dad07 cache: fix integration tests by adding missing OpenChunkWriter exclude 2023-09-04 19:26:14 +01:00
eNV25
ad724463a5 cmd: refactor and use sysdnotify in more commands
Fixes #5117
2023-09-04 16:32:04 +01:00
Nick Craig-Wood
6afd7088d3 box: add --box-impersonate to impersonate a user ID - fixes #7267 2023-09-04 12:09:54 +01:00
Nick Craig-Wood
b33140ddeb union: add :writeback to act as a simple cache
This adds a :writeback tag to upstreams. If set on a single upstream
then it writes back objects not found into that upstream.

Fixes #6934
2023-09-04 12:03:26 +01:00
Nick Craig-Wood
b1c0ae5e7d azureblob: fix creation of directory markers
This also fixes the integration tests which is why we didn't notice this before!
2023-09-03 18:09:31 +01:00
Nick Craig-Wood
40bcc7a90b fstest: fix sftp ssh integration tests
This adds a private and public key to the SFTP SSH test so that it
works when it doesn't have access to my ssh agent!
2023-09-03 15:23:12 +01:00
Nick Craig-Wood
be17f1523a b2: fix ChunkWriter size return 2023-09-03 13:53:11 +01:00
Nick Craig-Wood
bb58040d9c s3: fix multpart streaming uploads of 0 length files 2023-09-03 12:37:20 +01:00
Nick Craig-Wood
2db0e23584 backends: change OpenChunkWriter interface to allow backend concurrency override
Before this change the concurrency used for an upload was rather
inconsistent.

- if size below `--backend-upload-cutoff` (default 200M) do single part upload.

- if size below `--multi-thread-cutoff` (default 256M) or using streaming
  uploads (eg `rclone rcat`) do multipart upload using
  `--backend-upload-concurrency` to set the concurrency used by the uploader.

- otherwise do multipart upload using `--multi-thread-streams` to set the
  concurrency.

This change makes the default for the concurrency used be the
`--backend-upload-concurrency`. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.

This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.

See: #7056
2023-09-03 11:47:05 +01:00
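A tiny illustrative helper (not rclone's actual code) capturing the rule described above: the backend's upload concurrency is the default, and a larger --multi-thread-streams wins.

```go
package main

import "fmt"

// chunkConcurrency sketches the selection rule: default to the
// backend's upload concurrency, but let a larger --multi-thread-streams
// value override it.
func chunkConcurrency(backendConcurrency, multiThreadStreams int) int {
	if multiThreadStreams > backendConcurrency {
		return multiThreadStreams
	}
	return backendConcurrency
}

func main() {
	fmt.Println(chunkConcurrency(4, 0))  // 4 - backend default wins
	fmt.Println(chunkConcurrency(4, 16)) // 16 - explicit override
}
```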
Nick Craig-Wood
a7337b0a95 Add Alishan Ladhani to contributors 2023-09-03 11:47:05 +01:00
Alishan Ladhani
7821cb884d b2: fix rclone link when object path contains special characters
Before this change, b2 would return an error when opening a link
generated by `rclone link`. The following error occurs when the object
path contains an ampersand that is not percent encoded:

{
  "code": "bad_request",
  "message": "Bad character in percent-encoded string: 38 (0x26)",
  "status": 400
}
2023-09-02 18:31:14 +01:00
Nick Craig-Wood
85c29e3629 serve dlna: fix MIME type if backend can't identify it
If the server returns the MIME type as application/octet-stream we
assume it doesn't really know what the MIME type is. This patch tries
matching the MIME type from the file extension instead in this case.

This enables the use of servers (like OneDrive for Business) which
don't allow the setting of MIME types on upload and have a poor
selection of mime types.

Fixes #7259
2023-09-01 18:09:44 +01:00
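A sketch of the fallback, assuming a hypothetical helper name; mime.TypeByExtension is the standard library call that maps an extension to a MIME type.

```go
package main

import (
	"fmt"
	"mime"
	"path"
)

// mimeTypeFor is an illustrative helper: if the backend only reports
// the generic application/octet-stream, guess from the file extension
// instead, and fall back to octet-stream if that fails too.
func mimeTypeFor(reported, filename string) string {
	if reported != "" && reported != "application/octet-stream" {
		return reported
	}
	if byExt := mime.TypeByExtension(path.Ext(filename)); byExt != "" {
		return byExt
	}
	return "application/octet-stream"
}

func main() {
	fmt.Println(mimeTypeFor("application/octet-stream", "page.html"))
	// text/html; charset=utf-8
}
```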
Nick Craig-Wood
b7ec75aab6 docs: add Storj as a sponsor 2023-09-01 18:09:44 +01:00
Nick Craig-Wood
38309f2df2 Add Bjørn Smith to contributors 2023-09-01 18:09:44 +01:00
NoLooseEnds
7487d34c33 jotta: added Telia Sky whitelabel (Norway)
Duplicated Telia Cloud (Sweden), changed the URLs, and added teliase and teliano
(instead of just telia) to differentiate.

See: #5153 #5016
2023-09-01 14:55:32 +01:00
kapitainsky
e45cb4fc75 docs: single character remote names in Windows
Clarify how single character remote names are interpreted in Windows (as drive letters)

See: https://forum.rclone.org/t/issue-with-single-character-configuration-on-windows-with-rclone/
2023-09-01 11:00:14 +01:00
Bjørn Smith
21008b4cd5 docs: sftp: add note regarding format of server_command
Elaborate exactly how server_command should be used in the configuration file
2023-09-01 10:57:42 +01:00
Nick Craig-Wood
cffe85e6c5 fshttp: fix --bind 0.0.0.0 allowing IPv6 and --bind ::0 allowing IPv4
Due to a bug/misfeature in the go standard library as described here:
https://github.com/golang/go/issues/48723 the go standard library
binds to both IPv4 and IPv6 when passed 0.0.0.0 or ::0.

This patch detects the bind address and forces the correct IP
protocol.

Fixes #6124
Fixes #6244
See: https://forum.rclone.org/t/issues-with-bind-0-0-0-0-and-onedrive-getting-etag-mismatch-when-using-ipv6/41379/
2023-09-01 10:47:39 +01:00
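An illustrative helper (hypothetical, not rclone's code) showing how a bind address can be mapped to a protocol-specific network for net.Listen; "tcp4" and "tcp6" force a single protocol, unlike the default "tcp".

```go
package main

import (
	"fmt"
	"net"
)

// networkFor maps a bind address to a Go network name: 0.0.0.0 forces
// "tcp4" (IPv4 only) and ::0 forces "tcp6" (IPv6 only), sidestepping
// the dual-bind behaviour of plain "tcp".
func networkFor(bindAddr string) string {
	ip := net.ParseIP(bindAddr)
	switch {
	case ip == nil:
		return "tcp"
	case ip.To4() != nil:
		return "tcp4"
	default:
		return "tcp6"
	}
}

func main() {
	fmt.Println(networkFor("0.0.0.0")) // tcp4
	fmt.Println(networkFor("::0"))     // tcp6
}
```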
Nick Craig-Wood
d12a92eac9 box: fix unhelpful decoding of error messages into decimal numbers
Before this change the box backend could make errors like

    Error "not_found" (404): On-Behalf-Of User not found ([123 34 105 110
    118 97 108 105 100 95 117 115 101 114 95 105 100 34 58 123 34 105 100
    34 58 34 48 48 48 48 48 48 48 48 48 48 48 34 125 125])

This fixes it to produce this instead

    Error "not_found" (404): On-Behalf-Of User not found ({"invalid_user_id":{"id":"00000000000"}})
2023-08-31 23:03:27 +01:00
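A runnable demonstration of the underlying Go formatting behaviour (illustrative only): printing a []byte with %v yields the decimal byte values seen in the bad error message, while %s yields the text.

```go
package main

import "fmt"

func main() {
	body := []byte(`{"invalid_user_id":{"id":"00000000000"}}`)
	// The bug's symptom: %v on a []byte prints raw decimal byte values.
	fmt.Printf("bad:  %v\n", body)
	// The readable form: %s (or string(body)) prints the JSON itself.
	fmt.Printf("good: %s\n", body)
}
```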
eNV25
11eeaaf792 cmd/ncdu: fix add keybinding to rescan filesystem 2023-08-30 14:29:46 +01:00
David Sze
a603efeaf4 box: add polling support 2023-08-30 09:25:00 +01:00
eNV25
0bd0a992a4 cmd/ncdu: add keybinding to rescan filesystem
Fixes #7255
2023-08-30 09:05:58 +01:00
Justin Hellings
82c8d78a44 docs: may not -> might not, to remove ambiguity
"may not" can be interpreted as "is not allowed".
Replaced with "might not" in both cases to remove ambiguity.
2023-08-29 15:10:50 +01:00
Nick Craig-Wood
a83fec756b build: fix lint errors when re-enabling revive var-naming 2023-08-29 13:03:49 +01:00
Nick Craig-Wood
e953598987 build: fix lint errors when re-enabling revive exported & package-comments 2023-08-29 13:03:13 +01:00
Nick Craig-Wood
feaa20d885 build: re-enable revive linters
In this commit:

75dfdbf211 ci: revert revive settings back to fix lint

We accidentally disabled all the revive linters. Unfortunately setting
the rules clears the default set of rules so it is necessary to
mention all rules that we need.
2023-08-29 13:01:15 +01:00
Nick Craig-Wood
967fc6d7f4 lib/multipart: fix accounting for multipart transfers
This change makes sure the accounting is done when bytes are taken out
of the buffer rather than put in.

See: https://forum.rclone.org/t/improve-transfer-stats-calculation-for-multipart-uploads/41172
2023-08-27 23:10:58 +01:00
Nick Craig-Wood
b95bda1e92 s3: fix purging of root directory with --s3-directory-markers - fixes #7247 2023-08-25 17:39:16 +01:00
Nick Craig-Wood
9c14562850 fstests: add backend integration test for purging root directory #7247 2023-08-25 17:39:07 +01:00
Nick Craig-Wood
f992742404 s3: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
f2467d07aa oracleobjectstorage: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
d69cdb79f7 b2: fix accounting for multipart uploads 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
df5d92d709 operations: fix terminology in multi-thread copy 2023-08-25 16:31:31 +01:00
Nick Craig-Wood
1b5b36523b operations: fix accounting for multi-thread transfers
This stops the accounting moving in large chunks and gets it to move
as the data is read out of the buffer.
2023-08-24 17:51:36 +01:00
Nick Craig-Wood
2f424ceecf operations: don't buffer when a backend implements OpenWriterAt
In this case we don't seek on errors, so there is no need for buffering.
2023-08-24 17:51:36 +01:00
Nick Craig-Wood
bc986b44b2 lib/pool: add DelayAccounting() to fix accounting when reading hashes 2023-08-24 16:42:09 +01:00
Nick Craig-Wood
f4b1a51af6 lib/pool: add SetAccounting to RW 2023-08-24 15:28:40 +01:00
Manoj Ghosh
25703ad20e oracleobjectstorage: implement OpenChunkWriter and multi-thread uploads #7056 2023-08-24 12:39:28 +01:00
Nick Craig-Wood
ab803d1278 b2: implement OpenChunkWriter and multi-thread uploads #7056
This implements the OpenChunkWriter interface for b2 which
enables multi-thread uploads.

This makes the memory controls of the b2 backend inoperative; they are
replaced with the global ones.

    --b2-memory-pool-flush-time
    --b2-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056
This implements the OpenChunkWriter interface for azureblob which
enables multi-thread uploads.

This makes the memory controls of the azureblob backend inoperative;
they are replaced with the global ones.

    --azureblob-memory-pool-flush-time
    --azureblob-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
3dfcfc2caa operations: document multi-thread copy and tweak defaults 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
d4cff1ae19 operations: add abort on exit to multithread copy 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
f5753369e4 operations: multipart: don't buffer transfers to local disk #7056 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
4c76fac594 s3: factor generic multipart upload into lib/multipart #7056
This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones.

    --s3-memory-pool-flush-time
    --s3-memory-pool-use-mmap

By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.

Fixes #7141
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
0d0bcdac31 fs: add context.Context to ChunkWriter methods
WriteChunk in particular needs a different context from that which
OpenChunkWriter was used with, so add it to all the methods.
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
f3bd02f0ef operations: fix and tidy multithread code
- fix docs and error messages for multithread
- use sync/errgroup built in concurrency limiting
- re-arrange multithread code
- don't continue multi-thread uploads if one part fails
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
e6fde67491 s3: fix retry logic, logging and error reporting for chunk upload
- move retries into correct place into lowest level functions
- fix logging and error reporting
2023-08-24 12:39:27 +01:00
Nick Craig-Wood
b4e3332e02 fs: introduces aliases for OpenWriterAtFn and OpenChunkWriterFn 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
0dea83a4aa pool: add page backed reader/writer for multi thread uploads 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
e8f3f98aa0 lib/readers: add NoSeeker to adapt io.Reader to io.ReadSeeker 2023-08-24 12:39:27 +01:00
Nick Craig-Wood
d61328e459 serve ftp: fix race condition when using the auth proxy
In this commit we introduced a race condition when using the auth
proxy.

94a320f23c serve ftp: update to goftp.io/server v2.0.1

This was due to the re-organisation of the upstream library which made
the driver a singleton rather than per-session.

This means that when using the auth proxy we need to keep track of
which VFS to use based on which FTP user is connected.

This also adjusts the locking so that the methods will run
concurrently.
2023-08-23 15:11:47 +01:00
r-ricci
9844704567 docs: remove contributor's old email 2023-08-23 12:31:48 +01:00
Nick Craig-Wood
94a320f23c serve ftp: update to goftp.io/server v2.0.1 - fixes #7237 2023-08-22 17:24:05 +01:00
Nick Craig-Wood
7fc573db27 serve sftp: fix hash calculations with --vfs-cache-mode full
Before this change uploading files with rclone to:

    rclone serve sftp --vfs-cache-mode full

Would return the error:

    command "md5sum XXX" failed with error: unexpected non file

This patch detects that the file is still in the VFS cache and reads
the MD5SUM from there rather from the remote.

Fixes #7241
2023-08-22 13:18:36 +01:00
Nick Craig-Wood
af95616122 Add Roberto Ricci to contributors 2023-08-22 13:18:29 +01:00
Roberto Ricci
72f9f1e9c0 vfs: make sure struct field is aligned for atomic access 2023-08-22 12:52:13 +01:00
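A short sketch of the typed atomics (Go 1.19+) that this series of commits migrates to; the typed forms also carry the 64-bit alignment guarantee that the commit above is concerned with, so the field works on 32-bit platforms without manual struct-layout care.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// counter uses atomic.Int64 instead of a plain int64 manipulated via
// atomic.AddInt64. The typed form guarantees its own 64-bit alignment,
// so it is safe in any struct position, even on 32-bit platforms.
type counter struct {
	n atomic.Int64
}

func main() {
	var c counter
	c.n.Add(5)
	fmt.Println(c.n.Load()) // 5
}
```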
Roberto Ricci
91b8152321 vfs: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
552b6c47ff lib: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
01a155fb00 fs: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
50d0597d56 cmount: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
123a030441 smb: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
28ceb323ee sftp: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
c624dd5c3a seafile: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
a56c11753a local: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
4341d472aa filefabric: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
b6e7148daf box: use atomic types 2023-08-22 12:52:13 +01:00
Roberto Ricci
45458f2cdb union: use atomic types 2023-08-22 12:52:13 +01:00
Nick Craig-Wood
de147b6e54 sftp: fix --sftp-ssh looking for ssh agent - fixes #7235
Before this change, if pass was empty, rclone would attempt to connect to
the ssh agent even when using --sftp-ssh.

This patch prevents that.
2023-08-21 17:43:02 +01:00
Nick Craig-Wood
11de137660 sftp: fix spurious warning when using --sftp-ssh
When using --sftp-ssh we were warning about user/host/port even when
they were at their defaults.

See: #7235
2023-08-21 17:43:02 +01:00
Nick Craig-Wood
156c372cd7 sync: fix lockup with --cutoff-mode=soft and --max-duration
Before this change, when using --cutoff-mode=soft and --max-duration
rclone deadlocked when the cutoff limit was reached.

This was because the sync object's Pipe became full and nothing was
emptying it because the cutoff was reached.

This changes the context for putting items into the pipe to be the one
that gets cancelled when the cutoff is reached.

See: https://forum.rclone.org/t/sync-command-hanging-using-cutoff-mode-soft-with-max-duration-time-flags/40866
2023-08-18 17:33:54 +01:00
Nick Craig-Wood
c979cde002 ftp: fix 425 "TLS session of data connection not resumed" errors
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-use the same TLS connection as the control
connection. This is a good thing for security.

The message "TLS session of data connection not resumed" means that it
was not done.

The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections, so the resumed TLS data
connection could come from any of the control connections.

This patch makes each TLS connection have its own session cache which
should fix the problem.

This also reverts the ftp library to the upstream version which now
contains all of our patches.

Fixes #7234
2023-08-18 14:44:13 +01:00
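A sketch of the shape of the fix, with hypothetical names; the real change lives in the ftp library rclone uses. Each connection gets its own session cache, so a resumed data connection can only reuse its own control connection's session.

```go
package main

import (
	"crypto/tls"
)

// newConnTLSConfig clones a base config and gives the connection a
// private session cache instead of a shared global one, so TLS session
// resumption is scoped to this connection.
func newConnTLSConfig(base *tls.Config) *tls.Config {
	cfg := base.Clone()
	cfg.ClientSessionCache = tls.NewLRUClientSessionCache(0) // default capacity
	return cfg
}

func main() {
	cfg := newConnTLSConfig(&tls.Config{ServerName: "ftp.example.com"})
	_ = cfg
}
```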
Nick Craig-Wood
03aab1a123 rmdirs: remove directories concurrently controlled by --checkers
See: https://forum.rclone.org/t/how-to-list-empty-directories-recursively/40995
2023-08-18 12:05:15 +01:00
Nick Craig-Wood
dc803b572c Add hideo aoyama to contributors 2023-08-18 12:05:15 +01:00
Nick Craig-Wood
4d19042a61 Add Jacob Hands to contributors 2023-08-18 12:05:15 +01:00
hideo aoyama
923989d1d7 build: add snap installation
I (@boukendesho) have volunteered to maintain the snap package so
this adds it back into the installation instructions.

It will set a `snap` tag visible in `rclone version` so we know where
it came from for support queries.
2023-08-18 11:57:25 +01:00
sitiom
cf65e36cf3 ci: change Winget Releaser job to ubuntu-latest 2023-08-17 11:36:28 +01:00
Jacob Hands
cf5457c2cd fs: Fix transferTime not being set in JSON logs
This was unintentionally broken in 04aa696
2023-08-17 11:19:35 +01:00
Jacob Hands
ea4aa696a5 fs: Don't stop calculating average transfer speed until the operation is complete
Currently, the average transfer speed will stop calculating 1 minute
after the last queued transfer completes. This causes the average to
stop calculating when checking is slow and the transfer queue becomes
empty.

This change will require all checks to complete before stopping the
average speed calculation.
2023-08-16 21:43:24 +01:00
Nick Craig-Wood
34195fd3e8 sync: fix erroneous test in TestSyncOverlapWithFilter
In this commit:

432d5d1e20 operations: fix overlapping check on case insensitive file systems

We introduced a test that makes no sense. This happens to pass without --fast-list and fail with it.

This removes the test.
2023-08-13 11:42:31 +01:00
Nick Craig-Wood
40b8167ab4 Add Vitor Gomes to contributors 2023-08-13 11:42:22 +01:00
Nick Craig-Wood
e365f237f5 Add nielash to contributors 2023-08-13 11:41:24 +01:00
Nick Craig-Wood
7d449572bd Add alexia to contributors 2023-08-13 11:41:24 +01:00
Vitor Gomes
181fecaec3 multithread: refactor multithread operation to use OpenChunkWriter if available #7056
If the feature OpenChunkWriter is not available, multithread tries to create an adapter from OpenWriterAt to OpenChunkWriter.
2023-08-12 17:55:01 +01:00
Vitor Gomes
7701d1d33d config: add "multi-thread-chunk-size" flag #7056 2023-08-12 17:55:01 +01:00
Vitor Gomes
6dd736fbdc s3: refactor MultipartUpload to use OpenChunkWriter and ChunkWriter #7056 2023-08-12 17:55:01 +01:00
Vitor Gomes
f36ca0cd25 features: add new interfaces OpenChunkWriter and ChunkWriter #7056 2023-08-12 17:55:01 +01:00
nielash
9b3b1c7067 bisync: typo corrections & other doc improvements 2023-08-12 17:24:21 +01:00
nielash
0dd0d6a13e bisync: Add support for --create-empty-src-dirs - Fixes #6109
Sync creation and deletion of empty directories.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=3.%20Bisync%20should%20create/delete%20empty%20directories%20as%20sync%20does%2C%20when%20%2D%2Dcreate%2Dempty%2Dsrc%2Ddirs%20is%20passed

Also fixed an issue causing --resync to erroneously delete empty folders and duplicate files unique to Path2
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20%2D%2Dresync%20deletes%20data%2C%20contrary%20to%20docs
2023-08-12 17:24:21 +01:00
nielash
e5bde42303 bisync: Add experimental --resilient mode to allow recovery from self-correctable errors
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=2.%20Bisync%20should%20be%20more%20resilient%20to%20self%2Dcorrectable%20errors
2023-08-12 17:24:21 +01:00
nielash
f01a50eb47 bisync: Add new --ignore-listing-checksum flag to distinguish from --ignore-checksum
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=6.%20%2D%2Dignore%2Dchecksum%20should%20be%20split%20into%20two%20flags%20for%20separate%20purposes
2023-08-12 17:24:21 +01:00
nielash
5ca61ab705 bisync: equality check before renaming (leave identical files alone)
Improved detection of false positive change conflicts (identical files are now left alone instead of renamed)
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Identical%20files%20should%20be%20left%20alone%2C%20even%20if%20new/newer/changed%20on%20both%20sides
2023-08-12 17:24:21 +01:00
nielash
4ac4ce6afd bisync: apply filters correctly during deletes
Fixed an issue causing bisync to consider more files than necessary due to overbroad filters during delete operations
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Bisync%20reads%20files%20in%20excluded%20directories%20during%20delete%20operations
2023-08-12 17:24:21 +01:00
nielash
40a874a0d8 bisync: enforce --check-access during --resync
--check-access is now enforced during --resync, preventing data loss in certain user error scenarios
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=%2D%2Dcheck%2Daccess%20doesn%27t%20always%20fail%20when%20it%20should
2023-08-12 17:24:21 +01:00
nielash
f4dd86238d bisync: dry runs no longer commit filter changes
Fixed an issue causing dry runs to inadvertently commit filter changes
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=1.%20Dry%20runs%20are%20not%20completely%20dry
2023-08-12 17:24:21 +01:00
nielash
4f1eafb044 gitignore: add .DS_Store and remove *.log
.DS_Store is an irrelevant file created automatically by macOS.

*.log can't be ignored because bisync uses .log files in its test suite.
2023-08-12 17:24:21 +01:00
alexia
20c9e0cab6 fichier: fix error code parsing
This fixes the following error I encountered:

```
2023/08/09 16:18:49 DEBUG : failed parsing fichier error: strconv.Atoi: parsing "#374": invalid syntax
2023/08/09 16:18:49 DEBUG : pacer: low level retry 1/10 (error HTTP error 403 (403 Forbidden) returned body: "{\"status\":\"KO\",\"message\":\"Flood detected: IP Locked #374\"}")
```
2023-08-11 00:47:01 +09:00
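An illustrative helper (hypothetical name, not the backend's actual code) for the parsing fix: 1Fichier sometimes prefixes the numeric error code with '#', which strconv.Atoi rejects, so trim it first.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCode strips an optional leading '#' before converting the
// error code to an integer.
func parseCode(s string) (int, error) {
	return strconv.Atoi(strings.TrimPrefix(s, "#"))
}

func main() {
	n, err := parseCode("#374")
	fmt.Println(n, err) // 374 <nil>
}
```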
Nick Craig-Wood
9c09cf9cf6 build: update to released go1.21 2023-08-09 22:41:19 +01:00
Nick Craig-Wood
3a3af00180 Add antoinetran to contributors 2023-08-09 22:41:19 +01:00
Nick Craig-Wood
281e0c2d62 Add James Braza to contributors 2023-08-09 22:41:19 +01:00
Nick Craig-Wood
25b81b8789 Add Masamune3210 to contributors 2023-08-09 22:41:19 +01:00
Nick Craig-Wood
90fdd97a7b Add Nihaal Sangha to contributors 2023-08-09 22:41:19 +01:00
Chun-Hung Tseng
3c58e0efe0 protondrive: update the information regarding the advance setting enable_caching (#7202) 2023-08-09 16:01:19 +02:00
antoinetran
db744f64f6 docs: clarify --checksum documentation - Fixes #7145 2023-08-09 20:00:46 +09:00
Nick Craig-Wood
480220a84a docs: add some more docs on making your own backend 2023-08-09 20:00:46 +09:00
James Braza
d0362171cf docs: environment variable remote name only supports letters, digits, or underscores 2023-08-09 11:42:04 +01:00
Masamune3210
45887d11f6 docs: local: fix typo 2023-08-09 11:32:45 +01:00
Eng Zer Jun
c4bad5c1bc lib/rest: remove unnecessary nil check
From the Go docs:

  "A `nil` map is equivalent to an empty map. [1]

Therefore, an additional nil check for `opts.ExtraHeaders` before the loop is
unnecessary because `opts.ExtraHeaders` is a `map`.

[1]: https://go.dev/ref/spec#Map_types

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2023-08-09 19:17:42 +09:00
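A tiny runnable demonstration of the Go behaviour the commit relies on: ranging over (and reading from) a nil map is safe, so the guard was redundant.

```go
package main

import "fmt"

func main() {
	// A nil map behaves like an empty map for reads and range,
	// so guarding a range loop with `if m != nil` is redundant.
	var m map[string]string // nil map
	for k, v := range m {   // loops zero times, no panic
		fmt.Println(k, v)
	}
	fmt.Println(len(m)) // 0
}
```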
Nihaal Sangha
40de89df73 drive: fix typo in docs 2023-08-05 12:51:51 +01:00
Manoj Ghosh
27f5297e8d oracleobjectstorage: Use rclone's rate limiter in multipart transfers 2023-08-05 12:09:17 +09:00
Nick Craig-Wood
de185de215 accounting: show server side stats in own lines and not as bytes transferred
Before this change we showed both server side moves and server side
copies as bytes transferred.

This made a nice, easy to use stats display, but also caused confusion
for users who saw unrealistic transfer times. It also caused a problem
with --max-transfer and chunker, which renames each chunk after
uploading, with each rename counted as transferred bytes.

This patch instead accounts the server side move and copy statistics
as separate lines in the stats display, which will only appear if
there are any server side moves / copies. This is also output in the
rc.

This gives users something to look at when transfers are running,
which was the point of the original change, but it now means that
transfer bytes represent data transfers through this rclone instance
only.

Fixes #7183
2023-08-05 03:54:01 +01:00
Nick Craig-Wood
d362db2e08 rclone test info: add --check-base32768 flag to check can store all base32768 characters
Fixes #7208
2023-08-05 02:59:58 +01:00
Nick Craig-Wood
db2a49e384 Add Raymond Berger to contributors 2023-08-05 02:59:58 +01:00
Kaloyan Raev
d63fcc6e44 storj: performance improvement for large file uploads
storj.io/uplink v1.11.0 comes with improved logic for uploading large
files where file segments are uploaded concurrently instead of serially.
This allows rclone to fully utilize the network connection during the
entire upload process.

This change enables the new upload logic.
2023-08-04 17:40:03 +09:00
kapitainsky
4444037f5c docs: box client_id creation
See: https://forum.rclone.org/t/box-getting-403-errors-when-using-chunker-working-fine-without-it/40388/22
2023-08-03 14:51:49 +01:00
Raymond Berger
a555513c26 docs: add missing comma to overview webdav footnote 2023-08-03 14:50:04 +01:00
Nick Craig-Wood
039c260216 build: update to go1.21rc4 2023-08-03 13:53:43 +01:00
Nick Craig-Wood
4577c08e05 Add Julian Lepinski to contributors 2023-08-03 13:53:43 +01:00
Nick Craig-Wood
c9ed691919 docs: add minio as a sponsor 2023-08-03 08:39:15 +01:00
Julian Lepinski
9f96c0d4ea swift: fix HEADing 0-length objects when --swift-no-large-objects set
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.

Swift returns zero length for dynamic large object files when they're
included in a files lookup, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.

When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.

NB: It is worth noting that this will cause rclone to incorrectly
report zero length for any dynamic large objects encountered with the
--swift-no-large-objects flag set.
2023-08-03 08:38:39 +01:00
Nick Craig-Wood
91d095f468 docs: update command docs to new style 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
bff702a6f1 docs: group the global flags and make them appear on command and flags pages
This adds an additional parameter to the creation of each flag. This
specifies one or more flag groups. This **must** be set for global
flags and **must not** be set for local flags.

This causes flags.md to be built with sections to aid comprehension
and it causes the documentation pages for each command (and the
`--help`) to be built showing the flags groups as specified in the
`groups` annotation on the command.

See: https://forum.rclone.org/t/make-docs-for-mortals-not-only-rclone-gurus/39476/
2023-08-02 12:53:09 +01:00
Nick Craig-Wood
a1d6bbd31f Add rclone completion powershell - basic implementation only 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
fb6a9dfbf3 docs: fix rclone config edit docs 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
3f3c5f3ff4 build: remove unused package cmd/serve/http/data
This was superseded by lib/http/template.go
2023-08-02 12:53:09 +01:00
Nick Craig-Wood
89196cb353 Add nielash to contributors 2023-08-02 12:53:09 +01:00
Nick Craig-Wood
9284506b86 Add Zach to contributors 2023-08-02 12:53:09 +01:00
yuudi
88c72d1f4d http: fix webdav OPTIONS response (#6433) 2023-08-01 11:48:41 +09:00
Paul
5e3bf50b2e webdav: nextcloud: fix segmentation violation in low-level retry
Fix https://github.com/rclone/rclone/issues/7168

Co-authored-by: ncw <nick@craig-wood.com>
Co-authored-by: Paul <devnoname120@gmail.com>
2023-08-01 11:15:33 +09:00
nielash
982f76b4df sftp: support dynamic --sftp-path-override
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath, which effectively made it
unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path.)

After this change, the old behavior remains the default, but dynamic
paths are now also supported, if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).

In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.

See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025
2023-07-30 03:12:07 +01:00
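A hypothetical helper sketching the behaviour described above (strings.CutPrefix needs Go 1.20+): a leading '@' marks the override as the SFTP root, to which the relative remote path is appended; without it the override is used verbatim.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// resolveOverride applies the '@' convention: '@' means "treat the
// rest as the remote's root and join the relative path onto it".
func resolveOverride(override, relative string) string {
	if root, ok := strings.CutPrefix(override, "@"); ok {
		return path.Join(root, relative)
	}
	return override // old behaviour: absolute path used as-is
}

func main() {
	fmt.Println(resolveOverride("@/home/user", "docs/a.txt")) // /home/user/docs/a.txt
	fmt.Println(resolveOverride("/abs/path/a.txt", "unused")) // /abs/path/a.txt
}
```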
Zach
347812d1d3 ftp,sftp: add socks_proxy support for SOCKS5 proxies
Fixes #3558
2023-07-30 03:02:08 +01:00
yuudi
f4449440f8 http: CORS should not be sent if not set (#6433) 2023-07-29 16:12:31 +09:00
kapitainsky
e66675d346 docs: rclone backend restore 2023-07-29 11:31:16 +09:00
Nick Craig-Wood
45228e2f18 build: update dependencies
This does not update bazil/fuse because it does not build on freebsd

https://github.com/bazil/fuse/issues/295

This partially updates the prometheus library as the latest no longer compiles with plan9

https://github.com/prometheus/procfs/issues/554
2023-07-29 01:57:23 +01:00
Nick Craig-Wood
b866850fdd Add yuudi to contributors 2023-07-29 01:57:23 +01:00
yuudi
5b63b9534f rc: add execute-id for job-id 2023-07-28 18:35:14 +09:00
Nick Craig-Wood
10449c86a4 sftp: add --sftp-ssh to specify an external ssh binary to use
This allows using an external ssh binary instead of the built in ssh
library for making SFTP connections.

This makes another integration test target, TestSFTPRcloneSSH.

Fixes #7012
2023-07-28 10:29:02 +01:00
Nick Craig-Wood
26a9a9fed2 Add Niklas Hambüchen to contributors 2023-07-28 10:29:02 +01:00
Chun-Hung Tseng
602e42d334 protondrive: fix a bug in parsing User metadata (#7174) 2023-07-28 11:03:23 +02:00
Niklas Hambüchen
4c5a21703e docs: dropbox: Explain that Teams needs "Full Dropbox" 2023-07-28 17:52:29 +09:00
Nick Craig-Wood
f2ee949eff fichier: implement DirMove
See: https://forum.rclone.org/t/1fichier-rclone-does-not-allow-to-rename-files-and-folders-when-you-mount-a-1fichier-disk-drive/24726/
2023-07-28 01:25:42 +01:00
kapitainsky
3ad255172c docs: b2 versions names caveat 2023-07-28 09:23:34 +09:00
Nick Craig-Wood
29b1751d0e serve webdav: fix error: Expecting fs.Object or fs.Directory, got <nil>
Before this change rclone serve webdav would sometimes give this error

    Expecting fs.Object or fs.Directory, got <nil>

It turns out that when a file is being updated it doesn't have a
DirEntry, and it is allowed to be <nil>, so in this case we create the
mime type from the extension.

See: https://forum.rclone.org/t/webdav-union-of-onedrive-expecting-fs-object-or-fs-directory-got-nil/40298
2023-07-28 00:54:45 +01:00
kapitainsky
363da9aa82 docs: s3 versions names caveat 2023-07-27 12:36:50 +09:00
yuudi
6c8148ef39 http servers: allow CORS to be set with --allow-origin flag - fixes #5078
Some changes to the test cases:
Because MiddlewareCORS returns early on OPTIONS requests,
this middleware should only be used once, in the NewServer function.
Test cases should pass the AllowOrigin config instead of adding
this middleware again.

A new test case was added to test a CORS preflight request with
an authenticator. Preflight requests should always return 200 OK
regardless of authentication.

Co-authored-by: yuudi <yuudi@users.noreply.github.com>
2023-07-26 10:15:54 +01:00
Nick Craig-Wood
3ed4a2e963 sftp: stop uploads re-using the same ssh connection to improve performance
Before this change we released the ssh connection back to the pool
before the upload was finished.

This meant that uploads were re-using the same ssh connection, which
reduced throughput.

This releases the ssh connection back to the pool only after the
upload has finished, or on error state.

See: https://forum.rclone.org/t/sftp-backend-opens-less-connection-than-expected/40245
2023-07-25 13:05:37 +01:00
Anagh Kumar Baranwal
aaadb48d48 vfs: keep virtual directory status accurate and reduce deadlock potential
This changes hasVirtual to an atomic struct variable that's updated on
add or delete from the virtual map.

This keeps it up to date and avoids deadlocks.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-25 09:08:16 +01:00
Anagh Kumar Baranwal
52e25c43b9 vfs: Added cache cleaner for directories to reduce memory usage
This empties the directory cache after twice the directory cache
period to release memory.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-25 09:08:16 +01:00
Nick Craig-Wood
9a66563fc6 Add Edwin Mackenzie-Owen to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
6ca670d66a Add Tiago Boeing to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
809653055d Add gabriel-suela to contributors 2023-07-25 09:08:16 +01:00
Nick Craig-Wood
61325ce507 Add Ricardo D'O. Albanus to contributors 2023-07-25 09:08:16 +01:00
Edwin Mackenzie-Owen
c3989d1906 smb: implement multi-threaded writes for copies to smb
smb2.File implements the WriterAtCloser interface defined in
fs/types.go. Expose it via an OpenWriterAt method on
the fs struct to support multi-threaded writes.
2023-07-25 08:31:36 +01:00
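A minimal sketch of why io.WriterAt enables this, using a local file instead of smb2.File: several writers can place chunks at independent offsets without coordinating a shared file position.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// writeChunk writes one chunk at its own offset. Any io.WriterAt
// (like *os.File here, or smb2.File in the smb backend) will do.
func writeChunk(w io.WriterAt, off int64, data []byte) error {
	_, err := w.WriteAt(data, off)
	return err
}

func main() {
	f, err := os.CreateTemp("", "chunks")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	// Chunks can land in any order because each carries its offset.
	_ = writeChunk(f, 5, []byte("world"))
	_ = writeChunk(f, 0, []byte("hello"))
	b, _ := os.ReadFile(f.Name())
	fmt.Printf("%q\n", b) // "helloworld"
}
```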
Tiago Boeing
a79887171c docs: mega: add a solution for when the rclone process receives 'Killed' 2023-07-25 04:21:37 +01:00
Chun-Hung Tseng
f29e284c90 protondrive: fix download signature verification bug (#7169) 2023-07-24 14:54:39 +02:00
Chun-Hung Tseng
9a66086fa0 protondrive: fix bug in digests parsing (#7164) 2023-07-24 09:00:18 +02:00
Chun-Hung Tseng
1845c261c6 protondrive: fix missing file sha1 and appstring issues (#7163) 2023-07-24 08:56:21 +02:00
Chun-Hung Tseng
70cbcef624 Add Chun-Hung Tseng to Maintainer (#7162) 2023-07-23 16:29:24 +02:00
gabriel-suela
9169b2b5ab cmd: fix log message typo 2023-07-23 08:43:03 +09:00
Ricardo D'O. Albanus
0957c8fb74 chunker: Update documentation to mention issue with small files
See: https://forum.rclone.org/t/chunker-not-deactivating-for-small-files-and-wasting-api-calls/40122
2023-07-23 00:40:50 +01:00
Anagh Kumar Baranwal
bb0cd76a5f fix: mount parsing for linux 2023-07-22 17:29:20 +05:30
Nick Craig-Wood
08240c8cf5 Add Chun-Hung Tseng to contributors 2023-07-22 10:54:21 +01:00
Chun-Hung Tseng
014acc902d protondrive: add protondrive backend - fixes #6072 2023-07-22 10:46:21 +01:00
Benjamin
33fec9c835 doc: Fix Leviia block 2023-07-18 19:58:19 +01:00
kapitainsky
3a5ffc7839 docs: mention Box as base32768 compatible
As many people suddenly move to Box (another "unlimited" cloud storage migration saga) there are frequent questions about which crypt filename encoding to use.

Box is base32768 friendly.

It has been tested with:

https://pub.rclone.org/base32768.zip

and:

rclone test info --check-length boxremote:

maxFileLength = 255 // for 1 byte unicode characters
maxFileLength = 255 // for 2 byte unicode characters
maxFileLength = 255 // for 3 byte unicode characters
maxFileLength = -1 // for 4 byte unicode characters
2023-07-18 19:55:54 +01:00
Benjamin
8a6bf35481 Add Leviia Object Storage on index.md 2023-07-18 09:52:05 +01:00
Benjamin
f7d27f4bf2 Add Object storage to Leviia on README.md 2023-07-18 09:52:05 +01:00
kapitainsky
378a2d21ee --max-duration: add new exit code (10)
It adds a dedicated exit code (10) for the --max-duration flag.

Rclone will exit with exit code 10 if the duration limit is reached.

It behaves in a similar fashion to --max-transfer and exit code 8.

discussed on the forum:

https://forum.rclone.org/t/max-duration-option-is-triggering-exit-with-error/39917/6
2023-07-18 09:51:31 +01:00
Nick Craig-Wood
3404eb0444 Changelog updates from Version v1.63.1 2023-07-17 15:15:16 +01:00
Nick Craig-Wood
13e5701f2a build: add new sponsors page to docs 2023-07-17 14:28:40 +01:00
Nick Craig-Wood
432d5d1e20 operations: fix overlapping check on case insensitive file systems
Before this change, the overlapping check could erroneously give this
error on case insensitive file systems:

    Failed to sync: destination and parameter to --backup-dir mustn't overlap

The code was fixed and re-worked to be simpler and more reliable.

See: https://forum.rclone.org/t/backup-dir-cannot-be-in-root-even-when-excluded/39844/
2023-07-17 14:00:04 +01:00
Nick Craig-Wood
cc05159518 Add Benjamin to contributors 2023-07-17 14:00:04 +01:00
Benjamin
119ccb2b95 s3: add Leviia S3 Object Storage as provider 2023-07-16 18:08:47 +01:00
Anagh Kumar Baranwal
0ef0e908ca build: update to go1.21rc3 and make go1.19 the minimum required version
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-16 10:09:25 +01:00
Nick Craig-Wood
0063d14dbb Add darix to contributors 2023-07-14 10:27:20 +01:00
albertony
0d34efb10f box: fix reconnect failing with HTTP 400 Bad Request
The error is:

  Error: failed to configure token with jwt authentication: jwtutil: failed making auth request: 400 Bad Request

With the following additional debug information:

  jwtutil: Response Body: {"error":"invalid_grant","error_description":"Please check the 'aud' claim. Should be a string"}

Problem is that in jwt-go the RegisteredClaims type has an Audience field (aud claim) that
is a list, while box apparently expects it to be a singular string. In jwt-go v4, which we
currently use, there is an alternative type StandardClaims which matches what box wants.
Unfortunately StandardClaims is marked as deprecated, and is removed in the
newer v5 version, so this is a short-term fix only.

Fixes #7114
2023-07-14 10:24:33 +01:00
darix
415f4b2b93 webdav: nextcloud chunking: add more guidance for the user to check the config 2023-07-10 14:37:09 +01:00
Nick Craig-Wood
07cf5f1d25 operations: fix .rclonelink files not being converted back to symlinks
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.

This was because the partial isn't named .rclonelink so the local
backend saves it as a normal file and renaming it to .rclonelink
doesn't cause it to become a symlink.

This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.

This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.

This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.

Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
2023-07-10 14:30:59 +01:00
Nick Craig-Wood
7d31956169 local: fix partial directory read for corrupted filesystem
Before this change if a directory entry could be listed but not
lstat-ed then rclone would give an error and abort the directory
listing with the error

    failed to read directory entry: failed to read directory "XXX": lstat XXX

This change makes sure that the directory listing carries on even
after this kind of error.

The sync will be failed but it will carry on.

This problem was caused by a programming error setting the err
variable in an outer scope when it should have been using a local err
variable.

See: https://forum.rclone.org/t/sync-aborts-if-even-one-single-unreadable-folder-is-encountered/39653
2023-07-09 17:58:03 +01:00
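A small runnable sketch of the pitfall and the fix, with hypothetical names: a local error variable (scoped to the if) lets the listing log a bad entry and carry on, instead of poisoning an outer err and aborting the whole directory.

```go
package main

import (
	"errors"
	"fmt"
)

var errLstat = errors.New("lstat: permission denied")

func lstat(name string) error {
	if name == "bad" {
		return errLstat
	}
	return nil
}

// listDir uses a fresh, locally scoped error inside the loop, so one
// unreadable entry is skipped rather than failing the whole listing.
func listDir(names []string) (listed []string) {
	for _, name := range names {
		if err := lstat(name); err != nil { // local err, not an outer one
			fmt.Println("skipping entry:", name, err)
			continue
		}
		listed = append(listed, name)
	}
	return listed
}

func main() {
	fmt.Println(listDir([]string{"a", "bad", "c"})) // [a c]
}
```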
Nick Craig-Wood
473d443874 smb: fix "Statfs failed: bucket or container name is needed" when mounting
Before this change, if you mounted the root of the smb remote then it
would give an error on rclone mount and periodically in the mount
logs:

    Statfs failed: bucket or container name is needed in remote

This fix makes the smb backend return empty usage in this case which
will stop the errors and show the default 1P of free space.

See: https://forum.rclone.org/t/error-statfs-failed-bucket-or-container-name-is-needed-in-remote/39631
2023-07-08 12:24:46 +01:00
Nick Craig-Wood
e294b76121 Add Vladislav Vorobev to contributors 2023-07-08 12:24:46 +01:00
Vladislav Vorobev
8f3c583870 docs: no need to disable 2FA for Mail.ru Cloud anymore
This sentence was written at the time when the backend used an access token; nowadays users need to generate and use an application password instead, see #6398.
2023-07-08 10:27:40 +02:00
Nick Craig-Wood
d0d41fe847 rclone config redacted: implement support mechanism for showing redacted config
This introduces a new fs.Option flag, Sensitive, and uses this along
with IsPassword to redact the info in the config file for support
purposes.

It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.

Fixes #5209
2023-07-07 16:25:14 +01:00
Nick Craig-Wood
297f15a3e3 docs: update the number of providers supported 2023-07-07 16:25:14 +01:00
Nick Craig-Wood
d5f0affd4b Add Mahad to contributors 2023-07-07 16:25:14 +01:00
Nick Craig-Wood
0598aafbfd Add BakaWang to contributors 2023-07-07 16:25:14 +01:00
Mahad
528e22f139 docs: drive: Fix step 4 in "Making your own client_id" 2023-07-06 21:24:17 +01:00
BakaWang
f1a8420814 s3: add synology to s3 provider list 2023-07-06 10:54:07 +01:00
Nick Craig-Wood
e250f1afcd docs: remove old donate page 2023-07-06 10:13:42 +01:00
Nick Craig-Wood
ebf24c9872 docs: update contact page on website 2023-07-05 16:57:07 +01:00
Paul
b4c7b240d8 webdav: nextcloud: fix must use /dav/files/USER endpoint not /webdav error
Fix https://github.com/rclone/rclone/issues/7103

Before this change the RegExp validating the endpoint URL was a bit
too strict allowing only /dav/files/USER due to chunking limitations.

This patch adds back support for /dav/files/USER/dir/subdir etc.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2023-07-05 16:56:01 +01:00
Nick Craig-Wood
22a14a8c98 operations: fix deadlock when using lsd/ls with --progress - Fixes #7102
The --progress flag overrides operations.SyncPrintf in order to do its
magic on stdout without interfering with other output.

Before this change the syncFprintf routine in operations (which is
used to print all output to stdout) was taking the
operations.StdoutMutex and the printProgress function in the
--progress routine was also attempting to take the same mutex causing
a deadlock.

This patch fixes the problem by moving the locking from the
syncFprintf function to SyncPrintf. It is then up to the function
overriding this to lock the StdoutMutex. This ensures the StdoutMutex
can never cause a deadlock.
2023-07-03 15:07:00 +01:00
Nick Craig-Wood
07133b892d dirtree: fix performance with large directories of directories and --fast-list
Before this change, if using --fast-list on a directory with more than
a few thousand directories in it, DirTree.CheckParents became very
slow, taking up to 24 hours for a directory with 1,000,000 directories
in it.

This is because it becomes an O(N²) operation as DirTree.Find has to
search each directory in a linear fashion as it is stored as a slice.

This patch fixes the problem by scanning the DirTree for directories
before starting the CheckParents process so it never has to call
DirTree.Find.

After the fix calling DirTree.CheckParents on a directory with
1,000,000 directories in it will take about 1 second.

Anything which calls DirTree.Find can potentially have bad performance
so in the future we should redesign the DirTree to use a different
underlying datastructure or have an index.

https://forum.rclone.org/t/almost-24-hours-cpu-compute-time-during-sync-between-two-large-s3-buckets/39375/
2023-07-03 14:09:21 +01:00
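A sketch of the O(N²) to O(N) idea with hypothetical types (DirTree's real structure differs): one linear pass builds a set, after which every parent lookup is O(1) instead of a linear Find.

```go
package main

import "fmt"

// checkParents builds an index once, then checks each directory's
// parent with a constant-time map lookup.
func checkParents(dirs []string) {
	seen := make(map[string]struct{}, len(dirs))
	for _, d := range dirs {
		seen[d] = struct{}{} // one linear pass builds the index
	}
	for _, d := range dirs {
		parent := parentOf(d)
		if _, ok := seen[parent]; !ok && parent != "" {
			fmt.Println("missing parent:", parent)
		}
	}
}

func parentOf(d string) string {
	for i := len(d) - 1; i >= 0; i-- {
		if d[i] == '/' {
			return d[:i]
		}
	}
	return "" // top-level directory has no parent
}

func main() {
	checkParents([]string{"a", "a/b", "x/y"}) // missing parent: x
}
```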
Nick Craig-Wood
a8ca18165e Add Fjodor42 to contributors 2023-07-03 14:09:21 +01:00
Nick Craig-Wood
8c4e71fc84 Add Dean Attali to contributors 2023-07-03 14:09:21 +01:00
Nick Craig-Wood
351e2db2ef Add Sawada Tsunayoshi to contributors 2023-07-03 14:09:21 +01:00
Fjodor42
2234feb23d jottacloud: add Onlime provider 2023-07-02 11:16:07 +01:00
Anagh Kumar Baranwal
fb5125ecee build: fix macos builds for versions < 12
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-07-01 18:03:50 +01:00
Dean Attali
e8cbc54a06 docs: dropbox get client id, clarify you need to click a button 2023-07-01 17:50:40 +01:00
Nick Craig-Wood
00512e1303 Start v1.64.0-DEV development 2023-06-30 15:39:03 +01:00
Nick Craig-Wood
fcfbd3153b docs: website: replace google analytics with plausible analytics 2023-06-30 14:32:53 +01:00
Nick Craig-Wood
9a8075b682 docs: rename donate page to sponsor page and rework 2023-06-30 14:32:53 +01:00
Sawada Tsunayoshi
996037bee9 docs: fixed typo in exclude example in filtering docs (#7097)
The exclude flag instructions had "without" written as "with" which changes the whole meaning of how the exclude flag works.
2023-06-30 15:28:38 +02:00
Nick Craig-Wood
e90537b2e9 Version v1.63.0 2023-06-30 14:11:17 +01:00
Nick Craig-Wood
42c211c6b2 Revert sponsors back to organization 2023-06-30 10:10:05 +01:00
Nick Craig-Wood
3d4f127b33 Revert "union: disable PartialUploads on integration tests failures"
This reverts commit 9065e921c1.

It turns out the problem for the failing fs/sync tests was the
policies being different for search and create which meant that the
file was being created in one union branch but a different one was
found in another branch.
2023-06-29 21:11:04 +01:00
Misty
ff966b37af dropbox: fix result chans not taken care of by defer func 2023-06-28 19:49:38 +01:00
Nick Craig-Wood
3b6effa81a uptobox: fix rmdir declaring that directories weren't empty
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory, so it is useless for seeing
if the directory is empty.

This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.

This problem was detected by the integration tests.
2023-06-28 17:27:43 +01:00
Nick Craig-Wood
8308d5d640 putio: fix server side copy failures (400 errors)
For some unknown reason the API sometimes returns a "name already
exists" error on a server side copy.

    {
      "error_id": null,
      "error_message": "Name already exist",
      "error_type": "NAME_ALREADY_EXIST",
      "error_uri": "http://api.put.io/v2/docs",
      "extra": {},
      "status": "ERROR",
      "status_code": 400
    }

This patch uploads to a temporary name then renames it which works
around the problem.

This was spotted by the integration tests.
2023-06-28 16:45:35 +01:00
Nick Craig-Wood
14024936a8 putio: fix modification times not being preserved for server side copy and move
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.

This patch explicitly sets the modtime after the server side move or
copy.
2023-06-28 11:03:19 +01:00
Nick Craig-Wood
9065e921c1 union: disable PartialUploads on integration tests failures
In this commit we enabled PartialUploads for the union backend.

3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads

This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
2023-06-27 17:31:01 +01:00
Nick Craig-Wood
99788b605e sharefile: disable streamed transfers as they no longer work
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:

    upload failed: file size does not match (-2)

This was discovered by the integration tests.
2023-06-27 17:08:37 +01:00
Nick Craig-Wood
d4cc3760e6 putio: fix uploading to the wrong object on Update with overridden remote name
In the commit below we discovered a problem with objects being
uploaded to the incorrect object name, and added an integration test
for the problem.

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the putio backend and this patch fixes the
problem.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
a6acbd1844 uptobox: fix Update returning the wrong object
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.

This was discovered in the integration tests.

This patch fixes the problem by deleting the duplicate object before
we look for the new object.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
389565f5e2 storj: fix uploading to the wrong object on Update with overridden remote name
In the commit below we discovered a problem with objects being
uploaded to the incorrect object name, and added an integration test
for the problem.

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the Storj backend and this patch fixes the
problem.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
4b4198522d storj: fix "uplink: too many requests" errors when uploading to the same file
Storj has a rate limit of 1 per second when uploading to the same
file.

This was being tripped by the integration tests.

This patch fixes it by detecting the error and sleeping for 1 second
before retrying.

See: https://github.com/storj/uplink/issues/149
2023-06-27 16:02:33 +01:00
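The detect-and-sleep retry shape described above, as an illustrative Go sketch (the uploadWithRetry helper is an assumption, not the backend source; the matched error text follows the commit subject):

    package main

    import (
        "errors"
        "fmt"
        "strings"
        "time"
    )

    // uploadWithRetry retries an upload once per second while the
    // server keeps reporting a rate limit.
    func uploadWithRetry(upload func() error, maxTries int) error {
        var err error
        for try := 0; try < maxTries; try++ {
            err = upload()
            if err == nil || !strings.Contains(err.Error(), "too many requests") {
                return err
            }
            time.Sleep(time.Second) // back off to respect the 1/s limit
        }
        return err
    }

    func main() {
        calls := 0
        err := uploadWithRetry(func() error {
            calls++
            if calls < 3 {
                return errors.New("uplink: too many requests")
            }
            return nil
        }, 5)
        fmt.Println(calls, err) // 3 <nil>
    }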
Nick Craig-Wood
f7665300c0 fstests: allow ObjectUpdate test to retry upload 2023-06-27 16:02:33 +01:00
Nick Craig-Wood
73beae147f webdav: Fix modtime on server side copy for owncloud and nextcloud
Before this change a server side copy did not preserve the modtime.

This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.

This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` wasn't accepted.

This problem was discovered in the integration tests.
2023-06-26 20:23:28 +01:00
Nick Craig-Wood
92f8e476b7 Add mac-15 to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
5849148d51 Add zzq to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
37853ec412 Add Peter Fern to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
ae7ff28714 Add danielkrajnik to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
9873f4bc74 Add Mariusz Suchodolski to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
1b200bf69a Add Paulo Schreiner to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
e3fa6fe3cc swift: fix code formatting 2023-06-26 20:23:28 +01:00
mac-15
9e1b3861e7 docs: add blomp cloud storage guide 2023-06-26 17:49:27 +01:00
zzq
e9a753f678 s3: add Qiniu KODO quirks virtualHostStyle is false 2023-06-26 17:47:27 +01:00
Dimitri Papadopoulos
708391a5bf backend: fix misspellings found by codespell 2023-06-26 14:34:52 +01:00
Peter Fern
1cfed18aa7 http: add client certificate user auth middleware
This populates the authenticated user from the client certificate
common name.

Also added tests for the existing client certificate functionality.
2023-06-26 14:33:53 +01:00
kapitainsky
7751d5a00b rc: config/listremotes include from env vars
Fixes: #6540

Discussed:
https://forum.rclone.org/t/environment-variable-config-not-used-for-remote-control/39014
2023-06-26 12:30:44 +01:00
danielkrajnik
8274712c2c docs: s3: fix example for restoring single objects
See: https://forum.rclone.org/t/cant-restore-files-from-aws-glacier-deep-only-directories/39258/3
2023-06-26 11:41:15 +01:00
Mariusz Suchodolski
625a564ba3 docs: faq: add solution for port opening issues on Windows 2023-06-25 11:20:54 +01:00
Ehsan Tadayon
2dd2072cdb s3: Fix Arvancloud Domain and region changes and alphabetise the provider 2023-06-25 11:01:41 +01:00
kapitainsky
998d1d1727 docs: listremotes also includes remotes from env vars 2023-06-24 15:46:23 +01:00
Paulo Schreiner
fcb912a664 fs: allow setting a write buffer for multithread
When multi-thread downloading is enabled, rclone used to send a write
to disk after every read, resulting in a lot of small writes to
different locations of the file.

Depending on the underlying filesystem or device, it can be more
efficient to send bigger writes.
2023-06-23 18:44:43 +01:00
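For a single stream the standard way to coalesce small writes is a bufio.Writer; a minimal sketch of the buffering idea only (rclone's multi-thread code writes at different offsets, which this does not show):

    package main

    import (
        "bufio"
        "io"
        "os"
    )

    func main() {
        out, err := os.Create("out.bin")
        if err != nil {
            panic(err)
        }
        defer out.Close()

        // Coalesce many small reads into 1 MiB writes to the disk.
        buf := bufio.NewWriterSize(out, 1<<20)
        defer buf.Flush() // runs before out.Close (deferred LIFO)

        if _, err := io.Copy(buf, os.Stdin); err != nil {
            panic(err)
        }
    }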
Nick Craig-Wood
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

See #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
72b79504ea azureblob: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

See #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
3e2a606adb gcs: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

Fixes #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
95a6e3e338 Add Stanislav Gromov to contributors 2023-06-23 18:01:11 +01:00
Anagh Kumar Baranwal
d06bb55f3f mount: Added _netdev to the example mount so it gets treated as a remote-fs rather than local-fs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-06-23 17:37:00 +01:00
Stanislav Gromov
9f3694cea3 docs: drive: fix typo 2023-06-23 14:40:47 +01:00
Nick Craig-Wood
2c50f26c36 mount: fix mount failure on macOS with on the fly remote
This commit

3567a47258 fs: make ConfigString properly reverse suffixed file systems

made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names with illegal characters in them which couldn't be mounted by
macOS.

This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.

Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
2023-06-23 14:12:03 +01:00
Nick Craig-Wood
22d6c8d30d Add URenko to contributors 2023-06-23 14:12:03 +01:00
Nick Craig-Wood
96fb75c5a7 Add Sam Lai to contributors 2023-06-23 14:12:03 +01:00
URenko
acd67edf9a docs: remove "After" in systemd mount example again 2023-06-22 18:03:04 +01:00
Sam Lai
b26db8e640 accounting: bwlimit signal handler should always start
The SIGUSR2 signal handler for bandwidth limits currently only starts
if rclone is started at a time when a bandwidth limit applies. This
means that if rclone starts _outside_ such a time, i.e. with no
bandwidth limits, then enters a time where bandwidth limits do apply,
it will not be possible to use SIGUSR2 to toggle it.

This fixes that by always starting the signal handler, but only
toggling the limiter if there is a bandwidth limit configured.
2023-06-22 17:59:24 +01:00
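A sketch of the always-installed handler (Unix only; haveLimit and toggle are hypothetical stand-ins for the bandwidth limiter hooks, not the accounting package API):

    //go:build unix

    package main

    import (
        "fmt"
        "os"
        "os/signal"
        "syscall"
    )

    func startSignalHandler(haveLimit func() bool, toggle func()) {
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, syscall.SIGUSR2) // handler is always installed
        go func() {
            for range ch {
                if haveLimit() { // only toggle when a limit is configured
                    toggle()
                }
            }
        }()
    }

    func main() {
        on := true
        startSignalHandler(
            func() bool { return true },
            func() { on = !on; fmt.Println("limiter on:", on) },
        )
        select {} // block waiting for SIGUSR2
    }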
Nick Craig-Wood
da955e5d4f operations: remove partials when the copy fails
Before this change we were only removing partials when they were
corrupted rather than whenever the copy failed.
2023-06-21 22:56:05 +01:00
Nick Craig-Wood
4f8dab8bce zoho: fix downloads with Range: header returning the wrong data
Zoho has started returning the results from Range: requests with a 200
response code rather than the technically correct 206 status code.

Before this change this triggered workaround code to deal with Zoho
not obeying Range: requests properly.

This fix checks the response for a Content-Range: header, and if it
exists assumes it is a valid reply to the Range: request despite the
status being 200.

This problem was spotted by the integration tests.
2023-06-14 17:43:26 +01:00
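The check can be as small as this sketch (isRangedReply is an illustrative helper, not the backend source):

    package main

    import (
        "fmt"
        "net/http"
    )

    // isRangedReply reports whether resp satisfies a Range: request,
    // accepting a 200 with a Content-Range header as Zoho returns.
    func isRangedReply(resp *http.Response) bool {
        if resp.StatusCode == http.StatusPartialContent {
            return true
        }
        return resp.StatusCode == http.StatusOK &&
            resp.Header.Get("Content-Range") != ""
    }

    func main() {
        resp := &http.Response{
            StatusCode: http.StatusOK,
            Header:     http.Header{"Content-Range": []string{"bytes 0-9/100"}},
        }
        fmt.Println(isRangedReply(resp)) // true
    }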
Nick Craig-Wood
000ddc4951 s3: fix versions tests when running on minio 2023-06-14 17:30:36 +01:00
Nick Craig-Wood
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This means that, for example, wrapping a sftp backend with crypt will
upload to a temporary name and then rename unless disabled with
--inplace.

See: https://forum.rclone.org/t/backup-versioning/38978/7
2023-06-14 10:52:03 +01:00
kapitainsky
e1162ec440 docs: clarify --server-side-across-configs 2023-06-13 17:58:27 +01:00
Nick Craig-Wood
30cccc7101 cache: fix backends shutting down when in use when used via the rc
Before this fix, if a long running task (eg a copy) was started by the
rc then the backend could expire before the copy had finished.

The typical symptom was with the dropbox backend giving "batcher is
shutting down" errors.

This patch fixes the problem by pinning the backend until the job has
finished.

See: https://forum.rclone.org/t/uploads-start-repeatedly-failing-after-a-while-using-rc-sync-copy-vs-rclone-copy-for-dropbox/38873/
2023-06-13 15:48:20 +01:00
Nick Craig-Wood
1f5a29209e rc: add Job to ctx so it can be used elsewhere
See: https://forum.rclone.org/t/uploads-start-repeatedly-failing-after-a-while-using-rc-sync-copy-vs-rclone-copy-for-dropbox/38873/
2023-06-13 15:48:20 +01:00
Nick Craig-Wood
45255bccb3 accounting: fix Prometheus metrics to be the same as core/stats
In 04aa6969a4 we updated the displayed speed to be a rolling
average in core/stats and the progress output but we didn't update the
Prometheus metrics.

This patch updates the Prometheus metrics too.

Fixes #7053
2023-06-12 17:42:29 +01:00
Nick Craig-Wood
055206c4ee yandex: fix 400 Bad Request on transfer failure
Before this fix, if the upload failed for some reason the yandex
backend would attempt to retry it itself, which would fail immediately
with 400 Bad Request.

Normally we retry uploads at a higher level so they can be done with
new data and this patch does that.

See #7044
2023-06-11 11:11:43 +01:00
Nick Craig-Wood
f3070b82bc Add douchen to contributors 2023-06-11 11:11:43 +01:00
douchen
7e2deffc62 filter: fix deadlock with errors on --files-from
Before this change if doing a recursive directory listing with
`--files-from` if more than `--checkers` files errored (other than
file not found) then rclone would deadlock.

This fixes the problem by exiting on the first error.
2023-06-10 15:53:08 +01:00
Nick Craig-Wood
ae3ff50580 dropbox: implement --dropbox-pacer-min-sleep flag
See: https://forum.rclone.org/t/combine-mount-options-query/38080
2023-06-10 14:57:26 +01:00
Nick Craig-Wood
6486ba6344 operations: remove partially uploaded files on exit when not using --inplace
Before this change partially uploaded files (when --inplace is not in
effect) would be left lying around in the file system if rclone was
killed in the middle of a transfer.

This adds an exit handler to remove the partial file, and removes the
handler when the transfer is complete.
2023-06-10 14:55:05 +01:00
Nick Craig-Wood
7842000f8a backend: for command not found errors, hint to look in the underlying remote
See: https://forum.rclone.org/t/rclone-cleanup-no-way-to-delete-pending-uploads-newer-than-24-hours/38416/6
2023-06-10 14:44:01 +01:00
Nick Craig-Wood
1f9c962183 operations: reopen downloads on error when using check --download and cat
Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.

This adds a new function operations.Open which should be called
instead of fs.Object.Open to open a reliable stream of data and
changes all call sites to use that.

This means `rclone check --download` and `rclone cat` will re-open
files on failures.

See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
2023-06-10 14:42:29 +01:00
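A toy version of a self-reopening reader to illustrate the idea (the open callback stands in for an Open call at an offset; this is a sketch, not the operations.Open source):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "strings"
    )

    type reOpenReader struct {
        open   func(offset int64) (io.ReadCloser, error)
        r      io.ReadCloser
        offset int64
    }

    func (rr *reOpenReader) Read(p []byte) (int, error) {
        for tries := 0; tries < 3; tries++ {
            if rr.r == nil {
                r, err := rr.open(rr.offset)
                if err != nil {
                    return 0, err
                }
                rr.r = r
            }
            n, err := rr.r.Read(p)
            rr.offset += int64(n)
            if err == nil || err == io.EOF {
                return n, err
            }
            // Read failed: close and reopen at the current offset.
            rr.r.Close()
            rr.r = nil
            if n > 0 {
                return n, nil // hand back partial data before retrying
            }
        }
        return 0, errors.New("giving up after retries")
    }

    func main() {
        rr := &reOpenReader{open: func(off int64) (io.ReadCloser, error) {
            return io.NopCloser(strings.NewReader("hello"[off:])), nil
        }}
        b, _ := io.ReadAll(rr)
        fmt.Println(string(b)) // hello
    }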
Nick Craig-Wood
279d9ecc56 operations: fix pcloud can't set modified time
Before this change we tested special errors for straight equality.

This works for all normal backends, but the union backend may return
wrapped errors which contain the special error types.

In particular if a pcloud backend was part of a union when attempting
to set modification times the fs.ErrorCantSetModTime return wasn't
understood because it was wrapped in a union.Error.

This fixes the problem by using errors.Is instead in all the
comparisons in operations.

See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
2023-06-10 14:39:41 +01:00
Nick Craig-Wood
31773ecfbf union: allow errors to be unwrapped for inspection
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.

This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.

It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.

See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
2023-06-10 14:39:41 +01:00
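The two commits above amount to the standard Go wrapping pattern: the multi-error type exposes its contents via Unwrap and callers test with errors.Is instead of plain equality. A minimal sketch (the Errors type and sentinel error here are illustrative, not the union backend source):

    package main

    import (
        "errors"
        "fmt"
    )

    var errCantSetModTime = errors.New("can't set modified time")

    // Errors is a sketch of a union-style multi-error container.
    type Errors []error

    func (e Errors) Error() string { return fmt.Sprintf("%d errors", len(e)) }

    // Unwrap (the go1.20 multi-error form) exposes the wrapped errors
    // so errors.Is and errors.As can inspect them.
    func (e Errors) Unwrap() []error { return e }

    func main() {
        var err error = Errors{errors.New("other branch ok"), errCantSetModTime}
        fmt.Println(errors.Is(err, errCantSetModTime)) // true
    }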
kapitainsky
666e34cf69 s3: docs: old broken link updated 2023-06-09 18:15:54 +01:00
Nick Craig-Wood
5a84a08b3f build: fix build failure installing nfpm
Before this fix we used the bin/get-github-release.go script to
install nfpm.

However this script fails scraping the downloads page when the target
has more than a few download options. The alternative would be using
the GitHub API but this needs authentication so as not to be rate
limited on GitHub actions.

This patch switches over to go install which is less efficient but
should work in all circumstances.
2023-06-07 15:41:52 +01:00
Nick Craig-Wood
51a468b2ba genautocomplete: rename to completion with alias to old name
This brings it into line with cobra's naming scheme and stops cobra
writing another "completion" command which doesn't work as well which
confuses users.

See: https://forum.rclone.org/t/rclone-genautocomplete-bash-vs-rclone-completion-bash-neither-works-fully/38431
2023-05-25 14:32:40 +01:00
Nick Craig-Wood
fc798d800c vfs: fix backends being Shutdown too early when startup takes a long time
Before this change if the VFS took more than 5 minutes to initialise
(which can happen if there are a lot of files or a lot of files which
need uploading) the backend was dropped out of the cache before the
VFS was fully created.

This was noticeable in the dropbox backend where the batcher was shut
down too soon which prevented further uploads.

This fixes the problem by Pinning backends before the VFS cache is
created.

https://forum.rclone.org/t/if-more-than-251-elements-in-the-que-to-upload-fails-with-batcher-is-shutting-down/38076/2
2023-05-18 16:16:12 +01:00
Nick Craig-Wood
3115ede1d8 Add kapitainsky to contributors 2023-05-18 16:16:12 +01:00
kapitainsky
7a5491ba7b docs: chunker: fix typo 2023-05-17 17:10:53 +01:00
Nick Craig-Wood
a6cf4989b6 local: fix crash with --metadata on Android
Before this change we called statx which causes a

    SIGSYS: bad system call

fault.

After this we force Android to use fstatat

Fixes #7006
2023-05-17 17:03:26 +01:00
Nick Craig-Wood
f489b54fa0 operations: ignore partial tests on backends which don't support them 2023-05-17 17:03:26 +01:00
Nick Craig-Wood
6244d1729b Add Tareq Sharafy to contributors 2023-05-17 17:03:19 +01:00
Nick Craig-Wood
e97c2a2832 Add cc to contributors 2023-05-17 17:03:19 +01:00
albertony
56bf9b4a10 Add albertony to maintainers 2023-05-17 15:31:07 +02:00
WeidiDeng
ceb9406c2f serve webdav: implement owncloud checksum and modtime extensions
* implement owncloud checksum and modtime extensions for webdav server
* test rclone webdav server as owncloud webdav
2023-05-15 15:38:00 +01:00
Tareq Sharafy
1f887f7ba0 azblob: doc
Signed-off-by: Tareq Sharafy <tareq.sha@gmail.com>
2023-05-14 12:12:24 +01:00
Tareq Sharafy
7db26b6b34 azblob: support azure workload identities 2023-05-14 12:12:24 +01:00
cc
37a3309438 s3: v3sign: add missing subresource delete
The delete query string parameter must be included when you create the
CanonicalizedResource for a multi-object Delete request.
2023-05-14 11:25:52 +01:00
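A sketch of building a CanonicalizedResource which keeps signed subresources such as ?delete (the subresource list here is truncated and illustrative, not the full set the signer uses):

    package main

    import (
        "fmt"
        "net/url"
        "sort"
        "strings"
    )

    // Subresources which must appear in the CanonicalizedResource.
    // "delete" is the one that was missing for multi-object deletes.
    var subresources = map[string]bool{"acl": true, "delete": true, "lifecycle": true}

    func canonicalizedResource(u *url.URL) string {
        res := u.EscapedPath()
        var keep []string
        for k := range u.Query() {
            if subresources[k] {
                keep = append(keep, k)
            }
        }
        sort.Strings(keep)
        if len(keep) > 0 {
            res += "?" + strings.Join(keep, "&")
        }
        return res
    }

    func main() {
        u, _ := url.Parse("https://s3.example.com/bucket?delete=")
        fmt.Println(canonicalizedResource(u)) // /bucket?delete
    }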
Nick Craig-Wood
97be9015a4 union: implement missing methods
Implement these missing methods:

- CleanUp

And declare these ones unimplementable:

- UnWrap
- WrapFs
- SetWrapper
- UserInfo
- Disconnect
- PublicLink
- PutUnchecked
- MergeDirs
- OpenWriterAt
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
487e4f09b3 combine: implement missing methods
Implement these missing methods:

- PublicLink
- PutUnchecked
- MergeDirs
- CleanUp
- OpenWriterAt

And declare these ones unimplementable:

- UnWrap
- WrapFs
- SetWrapper
- UserInfo
- Disconnect

Fixes #6999
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
09a408664d fs: create Overlay feature flag to indicate backend wraps others
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
43fa256d56 fs: add OverrideDirectory for overriding path of directory 2023-05-14 11:22:57 +01:00
wiserain
6859c04772 pikpak: add validity check when using a media link
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.

Now we check the validity of a media link using the `fid`
parameter in the link URL.

Fixes #6992
2023-05-13 03:41:59 +09:00
dependabot[bot]
38a0539096 build(deps): bump github.com/cloudflare/circl from 1.1.0 to 1.3.3
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.1.0 to 1.3.3.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.1.0...v1.3.3)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-12 14:56:45 +01:00
Nick Craig-Wood
2cd85813b4 sftp: don't check remote points to a file if it ends with /
This avoids calling stat on the root directory, which saves a call
and which some servers don't like.

See: https://forum.rclone.org/t/stat-failed-error-on-sftp/38045
2023-05-11 07:58:20 +01:00
Nick Craig-Wood
e6e6069ecf sftp: don't stat directories before listing them
Before this change we ran stat on the directory to see if it existed.

Not only is this inefficient it isn't allowed by some SFTP servers.

See: https://forum.rclone.org/t/stat-failed-error-on-sftp/38045
2023-05-10 15:07:21 +01:00
Nick Craig-Wood
fcf47a8393 pikpak: set the NoMultiThreading feature flag to disable multi-thread copy
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.

Now the machinery is in place to use the NoMultiThreading feature flag
instead.

Fixes #6915
2023-05-09 17:46:19 +01:00
Nick Craig-Wood
46a323ae14 operations: Don't use multi-thread copy if the backend doesn't support it #6915 2023-05-09 17:40:58 +01:00
Nick Craig-Wood
72be80ddca fs: add new backend feature NoMultiThreading
This should be set for backends which can't support simultaneous reads
from different offsets in a single file.
2023-05-09 17:40:11 +01:00
Nick Craig-Wood
a9e7e7bcc2 ftp: Fix "501 Not a valid pathname." errors when creating directories
Some servers return a 501 error when using MLST on a non-existing
directory. This patch allows it.

I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
2023-05-09 17:27:35 +01:00
Nick Craig-Wood
925c4382e2 ftp: fix "unsupported LIST line" errors on startup
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.

This fixes the problem in the upstream fork.

Fixes #6879
2023-05-09 17:27:35 +01:00
Nick Craig-Wood
08c60c3091 Add Janne Hellsten to contributors 2023-05-09 17:27:35 +01:00
Janne Hellsten
5c594fea90 operations: implement uploads to temp name with --inplace to disable
When copying to a backend which has the PartialUploads feature flag
set and can Move files the file is copied into a temporary name first.
Once the copy is complete, the file is renamed to the real
destination.

This prevents other processes from seeing partially transferred
copies of files while they are being transferred, and prevents
overwriting the old file until the new one is complete.

This also adds --inplace flag that can be used to disable the partial
file copy/rename feature.

See #3770

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2023-05-09 16:28:10 +01:00
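The copy-to-temporary-then-rename shape, as a self-contained local sketch (the .partial suffix and copyViaPartial helper are illustrative, not the rclone implementation):

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
        "strings"
    )

    // copyViaPartial writes to a temporary ".partial" name and renames
    // into place once complete, so readers never see a half-written
    // file. Pass inplace=true to write directly (the --inplace case).
    func copyViaPartial(src io.Reader, dst string, inplace bool) error {
        tmp := dst
        if !inplace {
            tmp = dst + ".partial"
        }
        out, err := os.Create(tmp)
        if err != nil {
            return err
        }
        if _, err := io.Copy(out, src); err != nil {
            out.Close()
            os.Remove(tmp) // don't leave a partial file behind
            return err
        }
        if err := out.Close(); err != nil {
            return err
        }
        if inplace {
            return nil
        }
        return os.Rename(tmp, dst) // atomic on most local filesystems
    }

    func main() {
        dst := filepath.Join(os.TempDir(), "demo.txt")
        if err := copyViaPartial(strings.NewReader("data"), dst, false); err != nil {
            fmt.Println(err)
        }
    }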
Janne Hellsten
cc01223535 fs: Implement PartialUploads feature flag
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.

This is set for the following backends

- local
- ftp
- sftp

See #3770
2023-05-09 16:28:10 +01:00
Nick Craig-Wood
aaacfa51a0 sftp: fix move to allow overwriting existing files
Before this change rclone used a normal SFTP rename if present to
implement Move.

However the normal SFTP rename won't overwrite existing files.

This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the source first before
renaming using the normal SFTP rename.

This isn't normally a problem as rclone always removes any existing
objects first, however to implement non --inplace operations we do
require overwriting an existing file.
2023-05-09 16:28:10 +01:00
Nick Craig-Wood
c18c66f167 fs: when creating new fs.OverrideRemotes don't layer overrides if not needed 2023-05-09 16:28:10 +01:00
Nick Craig-Wood
d6667d34e7 fs: fix String() method on fs.OverrideRemote
Before this fix it was returning the base objects string rather than
the overridden remote.
2023-05-09 16:28:10 +01:00
Nick Craig-Wood
e649cf4d50 uptobox: add --uptobox-private flag to make all uploaded files private
See: #6946
2023-05-08 17:50:50 +01:00
Nick Craig-Wood
f080ec437c azureblob: empty directory markers #3453 2023-05-07 12:47:09 +01:00
Nick Craig-Wood
4023eaebe0 gcs: fix directory marker code #3453
Use Update to upload the directory markers
2023-05-07 12:47:09 +01:00
Nick Craig-Wood
baf16a65f0 s3: fix directory marker code #3453
Use Update to upload the directory markers
2023-05-07 12:47:09 +01:00
Nick Craig-Wood
70fe2ac852 azureblob: fix azure blob uploads with multiple bits of metadata 2023-05-07 12:47:09 +01:00
Nick Craig-Wood
41cf7faea4 Add Andrei Smirnov to contributors 2023-05-07 12:47:09 +01:00
Andrei Smirnov
f226f2dfb1 s3: add petabox.io to s3 providers 2023-05-05 09:44:25 +01:00
Nick Craig-Wood
31caa019fa rc: fix output of Time values in options/get
Before this change these were output as `{}`; after this change they
are output as time strings like `"2022-03-26T17:48:19Z"` in standard
JavaScript format.
2023-05-04 15:04:11 +01:00
Nick Craig-Wood
0468375054 uptobox: ensure files and folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
6001f05a12 union: the root folder shows the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
f7b87a8049 koofr: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
d379641021 http: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
84281c9089 dropbox: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
8e2dc069d2 fs: Add --default-time flag to control unknown modtime of files/dirs
Before this patch, files or directories with unknown modtime would
appear as the current date.

When mounted, some systems look at the modification dates of
directories to see if they have changed, and having them change
whenever they drop out of the directory cache is not optimal.

See #6986
2023-05-04 15:03:11 +01:00
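The fallback itself is tiny; a sketch assuming a zero time.Time marks an unknown modtime (modTimeOrDefault is an illustrative helper):

    package main

    import (
        "fmt"
        "time"
    )

    // modTimeOrDefault returns the object's modtime, falling back to a
    // fixed configured time (rather than "now") when it is unknown, so
    // directories don't appear to change on every cache refresh.
    func modTimeOrDefault(modTime, defaultTime time.Time) time.Time {
        if modTime.IsZero() {
            return defaultTime
        }
        return modTime
    }

    func main() {
        def := time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC)
        fmt.Println(modTimeOrDefault(time.Time{}, def)) // 2000-01-01 ...
    }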
Nick Craig-Wood
61d6f538b3 onedrive: add --onedrive-av-override flag to download files flagged as virus
This also produces a warning when rclone detects files have been
blocked because of virus content

    server reports this file is infected with a virus - use --onedrive-av-override to download anyway

Fixes #557
2023-05-03 15:21:30 +01:00
Nick Craig-Wood
65b2e378e0 drive: fix incorrect remote after Update on object
Before this change, when Object.Update was called in the drive
backend, it overwrote the remote with that of the object info.

This is incorrect - the remote doesn't change on Update and this patch
fixes that and introduces a new test to make sure it is correct for
all backends.

This was noticed when doing Update of objects in a nested combine
backend.

See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
2023-05-03 13:51:27 +01:00
Nick Craig-Wood
dea6bdf3df combine: fix goroutine stack overflow on bad object
If the Remote() call failed to do its path adjustment, then it would
recursively call Remote() as part of logging the failure and cause a
stack overflow.

This fixes it by logging the underlying object instead.

See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
2023-05-03 13:51:27 +01:00
Nick Craig-Wood
27eb8c7f45 config: stop config create making invalid config files
If config create was passed a parameter with an embedded \n it wrote
it straight to the config file which made it invalid and caused a
fatal error reloading it.

This stops keys and values with \r and \n being added to the config
file.

See: https://forum.rclone.org/t/how-to-control-bad-remote-creation-which-takes-rclone-down/37856
2023-05-03 11:40:30 +01:00
Nick Craig-Wood
1607344613 Add Adam K to contributors 2023-05-03 11:40:30 +01:00
Adam K
5f138dd822 dropbox: syncing documentation with source for dropbox default batch_timeout - fixes #6984 2023-05-02 17:04:32 +01:00
Anagh Kumar Baranwal
2520c05c4b mount2: disable xattrs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
f7f5e87632 mount2: fixed statfs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
a7e6806f26 mount2: updated go-fuse version
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
d0eb884262 mount: removed unnecessary byte slice allocation for reads
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:54:30 +01:00
WeidiDeng
ae6874170f webdav: set modtime using propset for owncloud and nextcloud 2023-04-28 17:38:49 +01:00
Nick Craig-Wood
f5bab284c3 s3: fix missing "tier" metadata
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.

This made it impossible to filter on tier using the metadata filters.

This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.

See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
2023-04-28 14:33:01 +01:00
Nick Craig-Wood
c75dfa6436 Add Jānis Bebrītis to contributors 2023-04-28 14:33:01 +01:00
Nick Craig-Wood
56eb82bdfc Add Tobias Gion to contributors 2023-04-28 14:33:01 +01:00
Nick Craig-Wood
066e00b470 gcs: empty directory markers #3453
- Report correct feature flag
- Fix test failures due to that
- don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
2023-04-28 14:31:05 +01:00
Jānis Bebrītis
e0c445d36e gcs: empty directory markers - #3453 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
74652bf318 s3: empty directory markers further work #3453
- Report correct feature flag
- Fix test failures due to that
- don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
2023-04-28 14:31:05 +01:00
Jānis Bebrītis
b6a95c70e9 s3: empty directory markers - #3453 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
aca7d0fd22 s3: fix potential crash in integration tests 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
12761b3058 fstests: make integration tests work with connection strings in remotes 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
3567a47258 fs: make ConfigString properly reverse suffixed file systems
Before this change we renamed file systems with overridden config with
{suffix}.

However this meant that ConfigString produced a value which wouldn't
re-create the file system.

This uses an internal hash to keep note of what config goes which
which {suffix} in order to remake the config properly.
2023-04-28 14:31:05 +01:00
Nick Craig-Wood
6b670bd439 mockfs: make it so it can be registered as an Fs 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
335ca6d572 lsjson: make --stat more efficient
Don't look for a file if the remote ends with /

This also makes it less likely to find a directory marker in bucket
based file systems.
2023-04-28 14:31:05 +01:00
Tobias Gion
c4a9e480c9 ftp: lower log message priority when SetModTime is not supported to debug
See: https://forum.rclone.org/t/ftp-fritz-box-setmodtime-is-not-supported/37781
2023-04-25 16:31:42 +02:00
Nick Craig-Wood
232d304c13 drive: fix trailing slash mis-identification of folder as file
Before this change, drive would mistakenly identify a folder with a
trailing slash as a file when passed to NewObject.

This was picked up by the integration tests.
2023-04-25 12:10:15 +01:00
Nick Craig-Wood
44ac79e357 Add dlitster to contributors 2023-04-25 12:10:15 +01:00
dlitster
0487e465ee docs: s3: clarify that X-Amz-Meta-Md5chksum is really a base64-encoded hex 2023-04-25 11:39:36 +01:00
Nick Craig-Wood
bb6cfe109d crypt: fix reading 0 length files
In an earlier patch

d5afcf9e34 crypt: try not to return "unexpected EOF" error

This introduced a bug for 0 length files, which this patch fixes. The
bug only manifests if the io.Reader returns data and EOF together,
which not all readers do.

This was failing in the integration tests.
2023-04-24 16:54:40 +01:00
WeidiDeng
864eb89a67 webdav: fix server side copy/move not overwriting - fixes #6964 2023-04-24 14:35:42 +01:00
Nick Craig-Wood
4471e6f258 selfupdate: obey --no-check-certificate flag
This patch makes sure we use our own HTTP transport when fetching the
current rclone version.

This allows it to use --no-check-certificate (and any other features
of our own transport).

See: https://forum.rclone.org/t/rclone-selfupdate-no-check-certificate-flag-not-work/37501
2023-04-24 12:26:01 +01:00
Nick Craig-Wood
e82db0b7d5 vfs: fix potential data race - Fixes #6962
This fixes a data race that was found by static analysis.
2023-04-24 12:17:03 +01:00
Nick Craig-Wood
72e624c5e4 serve dlna: fix potential data race #6962
This fixes a data race that was found by static analysis.
2023-04-24 12:17:03 +01:00
Nick Craig-Wood
6092fa57c3 Add Loren Gordon to contributors 2023-04-24 12:17:03 +01:00
Loren Gordon
3e15a594b7 cat: adds --separator option to cat command
When using `rclone cat` to print the contents of several files, the
user may want to inject some separator between the files, such as a
comma or a newline. This patch adds a `--separator` option to the `cat`
command to make that possible. The default value remains an empty
string, `""`, maintaining the prior behavior of `rclone cat`.

Closes #6968
2023-04-24 12:01:53 +01:00
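A minimal sketch of writing a separator between concatenated files (catWithSeparator is an illustrative helper, not the cat command source):

    package main

    import (
        "io"
        "os"
        "strings"
    )

    // catWithSeparator writes each file's contents with the separator
    // injected between files, mirroring cat --separator.
    func catWithSeparator(w io.Writer, files []io.Reader, sep string) error {
        for i, f := range files {
            if i > 0 {
                if _, err := io.WriteString(w, sep); err != nil {
                    return err
                }
            }
            if _, err := io.Copy(w, f); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        files := []io.Reader{strings.NewReader("a"), strings.NewReader("b")}
        catWithSeparator(os.Stdout, files, "\n") // prints "a\nb"
    }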
Nick Craig-Wood
db8c007983 swift: ignore 404 error when deleting an object
See: https://forum.rclone.org/t/rclone-should-optionally-ignore-404-for-delete/37592
2023-04-22 10:49:10 +01:00
dependabot[bot]
5836da14c2 build(deps): bump github.com/aws/aws-sdk-go from 1.44.236 to 1.44.246
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.236 to 1.44.246.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.236...v1.44.246)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:03:27 +01:00
dependabot[bot]
8ed07d11a0 build(deps): bump github.com/klauspost/compress from 1.16.3 to 1.16.5
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.16.3 to 1.16.5.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.16.3...v1.16.5)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:03:18 +01:00
dependabot[bot]
1f2ee44c20 build(deps): bump golang.org/x/term from 0.6.0 to 0.7.0
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/golang/term/releases)
- [Commits](https://github.com/golang/term/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:02:43 +01:00
Nick Craig-Wood
32798dca25 build: remove Go updates from dependabot as it is too noisy 2023-04-20 17:58:10 +01:00
Nick Craig-Wood
075f98551f Add jladbrook to contributors 2023-04-20 17:58:10 +01:00
Nick Craig-Wood
963ab220f6 Add Brian Starkey to contributors 2023-04-20 17:58:10 +01:00
jladbrook
281a007b1a crypt: add suffix option to set a custom suffix for encrypted files - fixes #6392 2023-04-20 17:28:13 +01:00
Brian Starkey
589b7b4873 s3: update Scaleway storage classes
There are now 3 classes:
 * "STANDARD" - Multi-AZ, all regions
 * "ONEZONE_IA" - Single-AZ, FR-PAR only
 * "GLACIER" - Archive, FR-PAR and NL-AMS only
2023-04-19 17:20:30 +01:00
Nick Craig-Wood
04d2781fda fichier: add cdn option to use CDN for download - Fixes #6943 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
5b95fd9588 Add WeidiDeng to contributors 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
a42643101e Add Damo to contributors 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
bcca67efd5 Add Rintze Zelle to contributors 2023-04-18 17:35:21 +01:00
WeidiDeng
7771aaacf6 vfs: fix writing to a read only directory creating spurious directory entries
Before this fix, when a write to a read only directory failed, rclone
would leave spurious directory entries in the directory.

This confuses `rclone serve webdav` into giving this error

    http: superfluous response.WriteHeader

This fixes the VFS layer to remove any directory entries where the
file creation did not succeed.

Fixes #5702
2023-04-18 17:33:04 +01:00
Damo
fda06fc17d docs: mount: add guidance for macFUSE installed via macports 2023-04-18 15:28:20 +01:00
Rintze Zelle
2faa4758e4 docs: azureblob: typo fix in "azureblob-account" command 2023-04-18 12:48:55 +01:00
Nick Craig-Wood
9a9ef040e3 vfs: fix reload: failed to add virtual dir entry: file does not exist
This error happened on a restart of the VFS with files to upload into
a new directory on a bucket based backend. Rclone was assuming that
directories created before the restart would still exist, but this is
a bad assumption for bucket based backends which don't really have
directories.

This change creates the pretend directory and thus the directory cache
if the parent directory does not exist when adding a virtual on a
backend which can't have empty directories.

See: https://forum.rclone.org/t/that-pesky-failed-to-reload-error-message/34527
2023-04-13 18:00:26 +01:00
Nick Craig-Wood
ca403dc90e vfs: add MkdirAll function to make a directory and all beneath 2023-04-13 18:00:22 +01:00
Nick Craig-Wood
451f4c2a8f onedrive: fix quickxorhash on 32 bit architectures
Before this fix quickxorhash would sometimes crash with an error like
this:

    panic: runtime error: slice bounds out of range [-1248:]

This was caused by an incorrect cast of a 64 bit number to a 32 bit
one on 32 bit platforms.

See: https://forum.rclone.org/t/panic-runtime-error-slice-bounds-out-of-range/37548
2023-04-13 15:14:46 +01:00
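A small demonstration of the overflow class behind this crash: casting a 64 bit size to a 32 bit integer silently wraps (values here are illustrative):

    package main

    import "fmt"

    func main() {
        var size int64 = 5_000_000_000 // bigger than 2^31-1

        // On a 32 bit platform int is 32 bits, so int(size) wraps
        // around and can even go negative: the class of bug behind
        // the "slice bounds out of range [-1248:]" panic.
        n32 := int32(size) // stands in for int(size) on a 32 bit build
        fmt.Println(n32)   // 705032704, not 5000000000

        // Keep the arithmetic in int64 and convert only when it fits.
        if size <= int64(^uint(0)>>1) {
            fmt.Println(int(size))
        }
    }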
Nick Craig-Wood
5f6b105c3e Add Shyim to contributors 2023-04-13 15:14:46 +01:00
Nick Craig-Wood
d98837b7e6 Add Roel Arents to contributors 2023-04-13 15:14:46 +01:00
Shyim
99dd748fec serve restic: trigger systemd notify
Allow to use Type=notify together with serving restic api
2023-04-10 15:22:54 +01:00
albertony
bdfe213c47 version: fix reported os/kernel version for windows 2023-04-10 12:02:26 +02:00
albertony
52fbb10b47 config: add more unit tests of save 2023-04-08 21:48:21 +02:00
albertony
6cb584f455 config: do not overwrite config file symbolic link - fixes #6754 2023-04-08 21:48:21 +02:00
albertony
ec8bbb8d30 config: do not remove/overwrite other files during config file save - fixes #3759 2023-04-08 21:48:21 +02:00
wiserain
fcdffab480 Add @wiserain as the pikpak backend maintainer 2023-04-06 17:45:54 +09:00
dependabot[bot]
aeb568c494 build(deps): bump github.com/aws/aws-sdk-go from 1.44.228 to 1.44.236
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.228 to 1.44.236.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.228...v1.44.236)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-05 15:12:35 +01:00
dependabot[bot]
b07f575d07 build(deps): bump github.com/oracle/oci-go-sdk/v65
Bumps [github.com/oracle/oci-go-sdk/v65](https://github.com/oracle/oci-go-sdk) from 65.33.0 to 65.34.0.
- [Release notes](https://github.com/oracle/oci-go-sdk/releases)
- [Changelog](https://github.com/oracle/oci-go-sdk/blob/master/CHANGELOG.md)
- [Commits](https://github.com/oracle/oci-go-sdk/compare/v65.33.0...v65.34.0)

---
updated-dependencies:
- dependency-name: github.com/oracle/oci-go-sdk/v65
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-05 15:12:00 +01:00
dependabot[bot]
ebae647dfa build(deps): bump google.golang.org/api from 0.114.0 to 0.115.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.114.0 to 0.115.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.114.0...v0.115.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-05 15:11:18 +01:00
dependabot[bot]
6fd5b469bc build(deps): bump github.com/spf13/cobra from 1.6.1 to 1.7.0
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.6.1 to 1.7.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.6.1...v1.7.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-05 15:10:31 +01:00
dependabot[bot]
78e822dd79 build(deps): bump github.com/shirou/gopsutil/v3 from 3.23.2 to 3.23.3
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.23.2 to 3.23.3.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.23.2...v3.23.3)

---
updated-dependencies:
- dependency-name: github.com/shirou/gopsutil/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-05 15:09:55 +01:00
Roel Arents
a79db20bcd azureblob: send nil tier if empty string 2023-04-05 15:08:32 +01:00
Nick Craig-Wood
d67ef19f6e bisync: fix maxDelete parameter being ignored via the rc
See: https://forum.rclone.org/t/bisync-maxdelete-api/37215
2023-04-05 14:51:46 +01:00
Nick Craig-Wood
037a6bd1b0 crypt: recommend Dropbox for base32768 encoding
See: https://forum.rclone.org/t/base32768-filename-encoding-with-crypt-dropbox-remote/37375
2023-04-05 14:51:21 +01:00
Nick Craig-Wood
09b884aade Add wiserain to contributors 2023-04-05 14:51:21 +01:00
wiserain
243bcc9d07 pikpak: new backend
Fixes #6429
2023-04-04 16:33:48 +01:00
Nick Craig-Wood
64cf9ac911 local: fix /path/to/file.rclonelink when -l/--links is in use
Before this change using /path/to/file.rclonelink would not find the
file when using -l/--links.

This fixes the problem by doing another stat call if the file wasn't
found without the suffix if -l/--links is in use.

It will also give an error if you refer to a symlink without its
suffix, which will not work because the single file filter will be
using the file name without the .rclonelink suffix.

    need ".rclonelink" suffix to refer to symlink when using -l/--links

Before this change it would use the symlink as a directory which then
would fail when listed.

See: #6855
2023-04-04 10:22:00 +01:00
Nick Craig-Wood
15a3ec8fa1 local: fix filtering of symlinks with -l/--links flag
Before this fix, with the -l flag, the `.rclonelink` suffix wasn't
being added to the file names before filtering by name.

See #6855
2023-04-04 10:22:00 +01:00
Nick Craig-Wood
2b8af4d23f sync,copy,move: make sure we output a debug log on start of transfer
Before this change we weren't outputting a debug log on the start of a
transfer for files which existed on the source but not in the
destination.

This was different to the single file copy routine.
2023-04-04 09:41:36 +01:00
Nick Craig-Wood
5755e31ef0 Add Joel to contributors 2023-04-04 09:41:36 +01:00
Joel
f4c787ab74 sftp: add --sftp-host-key-algorithms to allow specifying SSH host key algorithms 2023-03-30 18:00:54 +01:00
Nick Craig-Wood
4d7b6e14b8 mount: clarify rclone mount error when installed via homebrew
See: https://forum.rclone.org/t/suggestion-for-error-message/37145
2023-03-29 13:59:27 +01:00
Nick Craig-Wood
9ea7d143dd Add Drew Parsons to contributors 2023-03-29 13:59:27 +01:00
Drew Parsons
927e721a25 docs: faq: clarify name resolver control
On Linux systems rclone builds with cgo but uses the internal Go
resolver for DNS by default.

This updates the FAQ to suggest use of GODEBUG=netdns=cgo if there are
name resolution problems on Linux/BSD (with CGO_ENABLED rebuild from
source if necessary), or try GODEBUG=netdns=go on Windows/MacOS.

See: #683
2023-03-28 15:24:37 +01:00
Nick Craig-Wood
bd46f01eb4 cmount: add --mount-case-insensitive to force the mount to be case insensitive 2023-03-27 16:17:49 +01:00
Nick Craig-Wood
5f4d7154c0 fs: fix tristate conversion to JSON 2023-03-27 16:17:49 +01:00
Nick Craig-Wood
bad8a01850 fs: allow boolean features to be enabled with --disable !Feature 2023-03-27 16:17:49 +01:00
Nick Craig-Wood
d808c3848a Add ed to contributors 2023-03-27 16:17:49 +01:00
ed
3f0bec2ee9 webdav: make pacer minSleep configurable
This adds the config argument --webdav-pacer-min-sleep which specifies
the http-request rate limit. Lowering this from the default 10ms can
greatly improve performance when synchronizing small files.

See: https://forum.rclone.org/t/webdav-with-persistent-connections/37024/10
2023-03-27 15:30:02 +02:00
Nick Craig-Wood
8fb9eb2fee sync: make --suffix-keep-extension preserve 2 part extensions like .tar.gz
If a file has two (or more) extensions and the second (or subsequent)
extension is recognised as a valid mime type, then the suffix will go
before that extension. So `file.tar.gz` would be backed up to
`file-2019-01-01.tar.gz` whereas `file.badextension.gz` would be
backed up to `file.badextension-2019-01-01.gz`

Fixes #6892
2023-03-27 14:24:21 +01:00
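A sketch of the two part extension handling (rclone consults mime types to recognise the second extension; this sketch uses a small fixed set of known extensions to stay deterministic):

    package main

    import (
        "fmt"
        "path"
    )

    // A stand-in for the mime type lookup rclone actually performs.
    var known = map[string]bool{".tar": true, ".zip": true}

    // suffixName inserts suffix before the extension, extending to two
    // part extensions like .tar.gz when the inner part is recognised.
    func suffixName(remote, suffix string) string {
        ext := path.Ext(remote)
        base := remote[:len(remote)-len(ext)]
        if ext2 := path.Ext(base); known[ext2] {
            base = base[:len(base)-len(ext2)]
            ext = ext2 + ext
        }
        return base + suffix + ext
    }

    func main() {
        fmt.Println(suffixName("file.tar.gz", "-2019-01-01"))
        // file-2019-01-01.tar.gz
        fmt.Println(suffixName("file.badextension.gz", "-2019-01-01"))
        // file.badextension-2019-01-01.gz
    }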
Nick Craig-Wood
01fa15a7d9 Add Aditya Basu to contributors 2023-03-27 14:24:21 +01:00
Nick Craig-Wood
6aaa5d7a75 Add jumbi77 to contributors 2023-03-27 14:24:21 +01:00
Nick Craig-Wood
b4d3411637 Add Juang, Yi-Lin to contributors 2023-03-27 14:24:21 +01:00
Nick Craig-Wood
01ddc8ca6c Add NickIAm to contributors 2023-03-27 14:24:21 +01:00
Nick Craig-Wood
16c1e7149e Add yuudi to contributors 2023-03-27 14:24:21 +01:00
albertony
0374ea2c79 Use jwt-go (golang-jwt) instead of deprecated jws (x/oauth2/jws)
golang.org/x/oauth2/jws is deprecated: this package is not intended for public use and
might be removed in the future. It exists for internal use only. Please switch to another
JWS package or copy this package into your own source tree.

github.com/golang-jwt/jwt/v4 seems to be a good alternative, and was already
an implicit dependency.
2023-03-26 19:20:50 +02:00
Nick Craig-Wood
2e2451f8ec lib/rest: fix problems re-using HTTP connections
Before this fix, it was noticed that the rclone webdav client did not
re-use HTTP connections when it should have been.

This turned out to be because rclone was not draining the HTTP bodies
when it was not expecting a response.

From the Go docs:

> If the returned error is nil, the Response will contain a non-nil
> Body which the user is expected to close. If the Body is not both
> read to EOF and closed, the Client's underlying RoundTripper
> (typically Transport) may not be able to re-use a persistent TCP
> connection to the server for a subsequent "keep-alive" request.

This fixes the problem by draining up to 10MB of data from an HTTP
response if the NoResponse flag is set, or at the end of a JSON or XML
response (which could have some whitespace on the end).

See: https://forum.rclone.org/t/webdav-with-persistent-connections/37024/
2023-03-26 17:19:48 +01:00
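The draining pattern from the Go docs quoted above, as a sketch (drainAndClose is an illustrative helper; the 10 MiB cap matches the limit mentioned in the message):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    // drainAndClose reads up to 10 MiB of the remaining body before
    // closing, so the underlying TCP connection can be kept alive and
    // re-used for the next request.
    func drainAndClose(body io.ReadCloser) error {
        const maxDrain = 10 << 20
        _, copyErr := io.CopyN(io.Discard, body, maxDrain)
        closeErr := body.Close()
        if copyErr != nil && copyErr != io.EOF {
            return copyErr
        }
        return closeErr
    }

    func main() {
        resp, err := http.Get("https://example.com/")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Suppose we don't need the response body:
        if err := drainAndClose(resp.Body); err != nil {
            fmt.Println(err)
        }
    }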
albertony
bd1e3448b3 build: add exclude for misspell linter 2023-03-26 17:05:24 +02:00
albertony
20909fa294 build: enable misspell linter 2023-03-26 17:05:24 +02:00
albertony
c502e00c87 fs: fix infinite recursive call in pacer ModifyCalculator (fixes issue reported by the staticcheck linter) 2023-03-26 14:28:15 +02:00
albertony
9172c9b3dd crypt: reduce allocations
This changes crypt's use of sync.Pool: Instead of storing slices
it now stores pointers to fixed size arrays.

This issue was reported by staticcheck:

SA6002 - Storing non-pointer values in sync.Pool allocates memory

A sync.Pool is used to avoid unnecessary allocations and reduce
the amount of work the garbage collector has to do.

When passing a value that is not a pointer to a function that accepts
an interface, the value needs to be placed on the heap, which means
an additional allocation. Slices are a common thing to put in sync.Pools,
and they're structs with 3 fields (length, capacity, and a pointer to
an array). In order to avoid the extra allocation, one should store
a pointer to the slice instead.

See: https://staticcheck.io/docs/checks#SA6002
2023-03-26 14:28:15 +02:00
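A minimal example of the pointer-to-array pattern staticcheck recommends (sizes and names are illustrative, not the crypt source):

    package main

    import (
        "fmt"
        "sync"
    )

    const blockSize = 64 * 1024

    // Putting a plain []byte into a sync.Pool copies the slice header
    // into an interface, which allocates (staticcheck SA6002). A
    // pointer to a fixed size array avoids the allocation.
    var blockPool = sync.Pool{
        New: func() any { return new([blockSize]byte) },
    }

    func main() {
        buf := blockPool.Get().(*[blockSize]byte)
        copy(buf[:], "some data") // use buf[:] wherever a slice is needed
        fmt.Println(len(buf))     // 65536
        blockPool.Put(buf)        // pointer sized: no boxing allocation
    }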
albertony
78deab05f9 netstorage: ignore false positive from the staticcheck linter regarding header name not being canonical 2023-03-26 14:28:15 +02:00
albertony
6c9d377bbb vfs: ignore false positive from the unused linter 2023-03-26 14:28:15 +02:00
albertony
62ddc9b7f9 vfscache: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
448ae49fa4 webgui: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
5f3c276d0a zoho: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
9cea493f58 union: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
400d1a4468 swift: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
851ce0f4fe seafile: remove unused code for legacy API v2 (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
cc885bd39a hidrive: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
a1a8c21c70 dropbox: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
6ef4bd8c45 cache: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
fb316123ec azureblob: remove unused code (fixes issue reported by the unused linter) 2023-03-26 14:28:15 +02:00
albertony
270af61665 smb: code cleanup to avoid overwriting ctx before first use (fixes issue reported by the staticcheck linter) 2023-03-26 14:28:15 +02:00
albertony
155f4f2e21 mount: replace deprecated bazil/fuse specific constants with syscall constants 2023-03-26 14:28:15 +02:00
albertony
eaf593884b serve/ftp: use io.SeekEnd instead of os.SEEK_END (deprecated since Go 1.7) 2023-03-26 14:28:15 +02:00
albertony
930574c6e9 oracleobjectstorage: remove empty branch (fixes issue reported by the staticcheck linter) 2023-03-26 14:28:15 +02:00
albertony
c1586a9866 onedrive: report any list errors during cleanup 2023-03-26 14:28:15 +02:00
albertony
432eb74814 lib: avoid unnecessary use of fmt.Sprintf for string constant 2023-03-26 14:28:15 +02:00
albertony
92fb644fb6 test: use decompressed.String() instead of string(decompressed.Bytes()) 2023-03-26 14:28:15 +02:00
albertony
bb92af693a test: do not test deprecated and unused Dial and DialTLS functions on http Transport type 2023-03-26 14:28:15 +02:00
albertony
eb5fd07131 mount: error strings should not be capitalized 2023-03-26 14:28:15 +02:00
albertony
b2ce7c9aa6 hidrive: error strings should not be capitalized 2023-03-26 14:28:15 +02:00
albertony
d6b46e41dd build: replace deprecated linters deadcode, structcheck and varcheck with unused
The three linters deadcode, structcheck and varcheck we have been using are as of
golangci-lint version 1.49.0 (24 Aug 2022) marked as deprecated, and replaced by unused.

The linters staticcheck, gosimple, stylecheck and unused should combined correspond to
the checks performed by the stand-alone staticcheck tool, which is by default used for
linting in Visual Studio Code with the Go extension. We previously enabled the first
three, but skipped unused due to many reported issues.

See #6387 for more information.
2023-03-26 14:28:15 +02:00
albertony
254c6ef1dd build: add lint ignore comment required for golangci-staticcheck in addition to stand-alone staticcheck 2023-03-26 14:28:15 +02:00
albertony
547f943851 build: exclude known issues from the staticcheck linting in ci 2023-03-26 14:28:15 +02:00
albertony
8611c9f6f7 build: add staticcheck, gosimple and stylecheck linting to the build pipeline - fixes #6273
These combined should correspond to the checks performed by the stand-alone
staticcheck tool, which is by default used for linting in Visual Studio Code
with the Go extension. One exception is the unused checks, which the
staticcheck tool performs, but which we chose not to enable here in
rclone due to many reported occurrences.

See #6387 for more information.
2023-03-26 14:28:15 +02:00
Dimitri Papadopoulos
f6576237a4 fs: fix typos found by codespell 2023-03-25 12:51:04 +01:00
Dimitri Papadopoulos
207b64865e fstest: fix typo found by codespell 2023-03-25 09:34:10 +01:00
Dimitri Papadopoulos
9ee1b21ec2 vfs: fix typos found by codespell 2023-03-25 09:33:34 +01:00
Dimitri Papadopoulos
55a12bd639 backend: fix repeated words typos 2023-03-25 09:31:36 +01:00
dependabot[bot]
3b4a57dab9 build(deps): bump github.com/aws/aws-sdk-go from 1.44.227 to 1.44.228
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.227 to 1.44.228.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.227...v1.44.228)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-24 20:49:50 +00:00
Dimitri Papadopoulos
afe158f878 docs: fix typos found by codespell 2023-03-24 20:49:00 +00:00
Dimitri Papadopoulos
722a3f32cc cmdtest: fix typos found by codespell 2023-03-24 20:44:25 +00:00
Dimitri Papadopoulos
9183618082 backend: fix typos found by codespell 2023-03-24 20:42:45 +00:00
Dimitri Papadopoulos
18ebca3979 lib: fix typos found by codespell 2023-03-24 20:40:52 +00:00
Nick Craig-Wood
e84d2c9e5f docs: add info about # of parallel checks for rclone check/cryptcheck
The original commit 7dbf1ab66f put the changes in the auto
generated docs - this fixes that.
2023-03-24 12:43:45 +00:00
Aditya Basu
e98b61ceeb docs: update install with docker interactive use
* Install with docker: interactive use
* remove extra mount from command
* update listremotes
2023-03-24 11:42:58 +00:00
albertony
19f9fca2f6 docs: document how the configuration file is written, and that an .old file will be deleted 2023-03-24 11:40:34 +00:00
jumbi77
7dbf1ab66f docs: add info about # of parallel checks for rclone check/cryptcheck 2023-03-24 11:35:58 +00:00
Dimitri Papadopoulos
bfe272bf67 backend: fix typos found by codespell 2023-03-24 11:34:14 +00:00
Dimitri Papadopoulos
cce8936802 cmd: fix typos found by codespell 2023-03-24 11:32:59 +00:00
Juang, Yi-Lin
043bf3567d drive: update drive service account guide 2023-03-24 11:31:46 +00:00
NickIAm
1b2f2c0d69 docs: add section about --vfs-cache-max-age
This change adds a section to clarify how exactly the --vfs-cache-max-age flag affects caching
2023-03-24 11:28:34 +00:00
yuudi
4b376514a6 doc: Clarify the srcFs and dstFs when using local filesystem
Co-authored-by: yuudi <yuudi@users.noreply.github.com>
2023-03-24 11:25:39 +00:00
Peter Brunner
c27e6a89b0 drive: add env_auth to drive provider
This change provides the ability to pass `env_auth` as a parameter to
the drive provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
2023-03-24 11:11:21 +00:00
albertony
76c6e3b15c build: set Rclone as file description since it is shown as process name in task manager 2023-03-23 18:55:57 +01:00
Nick Craig-Wood
48ec00cc1a rc: fix missing --rc flags
In this commit we accidentally removed the global --rc flags.

0df7466d2b cmd/rcd: Fix command docs to include command specific prefix (#6675)

This re-instates them.
2023-03-23 12:05:31 +00:00
dependabot[bot]
866600a73b build(deps): bump github.com/aws/aws-sdk-go from 1.44.226 to 1.44.227
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.226 to 1.44.227.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.226...v1.44.227)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-23 11:27:14 +00:00
Nick Craig-Wood
d8f4cd4d5f drive: fix change notify picking up files outside the root
Before this change, change notify would pick up files which were
shared with us as well as file within the drive.

When using an encrypted mount this caused errors like:

    ChangeNotify was unable to decrypt "Plain file name": illegal base32 data at input byte 5

The fix tells drive to restrict changes to the drive in use.

Fixes #6771
2023-03-22 16:24:07 +00:00
Nick Craig-Wood
d0810b602a crypt: add --crypt-pass-bad-blocks to allow corrupted file output 2023-03-22 16:23:37 +00:00
Nick Craig-Wood
d5afcf9e34 crypt: try not to return "unexpected EOF" error
Before this change the code wasn't taking into account the error
io.ErrUnexpectedEOF that io.ReadFull can return properly. Sometimes
that error was being returned instead of a more specific and useful
error.

To fix this, io.ReadFull was replaced with the simpler
readers.ReadFill which is much easier to use correctly.
2023-03-22 16:23:37 +00:00
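
For illustration, a minimal Go sketch of a ReadFill-style helper as described
above (signature assumed, not rclone's exact lib/readers code):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // readFill reads until buf is full or the reader errors. Unlike
    // io.ReadFull it returns the underlying error (e.g. io.EOF)
    // rather than converting a short read into io.ErrUnexpectedEOF.
    func readFill(r io.Reader, buf []byte) (n int, err error) {
        for n < len(buf) && err == nil {
            var nn int
            nn, err = r.Read(buf[n:])
            n += nn
        }
        return n, err
    }

    func main() {
        buf := make([]byte, 8)
        n, err := readFill(strings.NewReader("abc"), buf)
        fmt.Println(n, err) // 3 EOF
    }
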
Nick Craig-Wood
07c4d95f38 crypt: fix tests assert.Error which should have been assert.EqualError 2023-03-22 16:23:37 +00:00
Nick Craig-Wood
fd83071b6b rc: fix operations/stat with trailing /
Before this change using operations/stat with a remote pointing to a
dir with a trailing / would return a null output rather than the
correct info.

This was because the directory was not found with a trailing slash in
the directory listing.

Fixes #6817
2023-03-22 16:22:45 +00:00
Nick Craig-Wood
e042d9089f fs: Fix interaction between --progress and --interactive
Before this change if both --progress and --interactive were set then
the screen display could become muddled.

This change makes --progress and --interactive use the same lock so
while rclone is asking for interactive questions, the progress will be
paused.

Fixes #6755
2023-03-22 16:18:41 +00:00
Nick Craig-Wood
cdfa0beafb lib/atexit: ensure OnError only calls cancel function once
Before this change the cancelFunc could be called twice, once while
handling the interrupt (CTRL-C) and once while unwinding the stack if
the function happened to finish.

This change ensure the cancelFunc is only called once by wrapping it
in a sync.Once
2023-03-22 12:50:58 +00:00
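
For illustration, a minimal Go sketch of the sync.Once pattern described above
(not rclone's actual lib/atexit code):

    package main

    import (
        "context"
        "sync"
    )

    // onceCancel wraps cancel so that the interrupt handler and the
    // normal stack unwinding path can both call it safely: only the
    // first call has any effect.
    func onceCancel(cancel context.CancelFunc) context.CancelFunc {
        var once sync.Once
        return func() { once.Do(cancel) }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel = onceCancel(cancel)
        defer cancel() // unwinding path
        cancel()       // interrupt path; the second call is a no-op
        <-ctx.Done()
    }
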
Nick Craig-Wood
ddb3b17e96 s3: fix hang on aborting multipart upload with iDrive e2
Apparently the abort multipart upload call doesn't return while
multipart uploads are in progress on iDrive e2.

This means that if we CTRL-C a multipart upload rclone hangs until all
the uploading parts have completed. However since rclone is uploading
multiple parts at once this doesn't happen until after the entire file
is uploaded.

This was fixed by cancelling the upload context which causes all the
uploads to stop instantly.
2023-03-22 12:50:58 +00:00
Nick Craig-Wood
32f71c97ea Add Zach Kipp to contributors 2023-03-22 12:50:58 +00:00
dependabot[bot]
53853116fb build(deps): bump github.com/aws/aws-sdk-go from 1.44.223 to 1.44.226
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.223 to 1.44.226.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.223...v1.44.226)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-22 11:51:03 +00:00
dependabot[bot]
a887856998 build(deps): bump github.com/oracle/oci-go-sdk/v65
Bumps [github.com/oracle/oci-go-sdk/v65](https://github.com/oracle/oci-go-sdk) from 65.32.1 to 65.33.0.
- [Release notes](https://github.com/oracle/oci-go-sdk/releases)
- [Changelog](https://github.com/oracle/oci-go-sdk/blob/master/CHANGELOG.md)
- [Commits](https://github.com/oracle/oci-go-sdk/compare/v65.32.1...v65.33.0)

---
updated-dependencies:
- dependency-name: github.com/oracle/oci-go-sdk/v65
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-22 11:50:24 +00:00
Zach Kipp
0df7466d2b cmd/rcd: Fix command docs to include command specific prefix (#6675)
This change addresses two issues with commands that re-used
flags from common packages:

1) cobra.Command definitions did not include the command specific
   prefix in doc strings.
2) Command specific flag prefixes were added after generating
   command doc strings.
2023-03-22 11:47:35 +00:00
eNV25
23579e3b99 cmd/ncdu: refactor redraw handling 2023-03-21 16:41:22 +00:00
Nick Craig-Wood
3affba6fa6 build: remove duplicate linux/arm64 build 2023-03-21 16:25:46 +00:00
Nick Craig-Wood
542677d807 s3: fix --s3-versions on individual objects
Before this fix attempting to access an s3 versioned object by name in
a subdirectory of root would not find the object.

This fixes the problem and introduces an integration test.

See: https://forum.rclone.org/t/s3-versions-cant-retrieve-old-version/36900
2023-03-21 12:44:45 +00:00
Nick Craig-Wood
d481aa8613 Revert "s3: fix InvalidRequest copying to a locked bucket from a source with no MD5SUM"
This reverts commit e5a1bcb1ce.

This causes a lot of integration test failures so may need to be optional.
2023-03-21 11:43:43 +00:00
Nick Craig-Wood
15e633fa8b build: disable provenance in docker build
To attempt to fix this error:

buildx failed with: ERROR: failed to solve: missing provenance for owlcc15myb2dpmxrz6dl5bzqc
2023-03-20 18:09:54 +00:00
dependabot[bot]
732c24c624 build(deps): bump docker/build-push-action from 3 to 4
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3 to 4.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-20 16:21:22 +00:00
Nick Craig-Wood
75dfdbf211 ci: revert revive settings back to fix lint
The upstream revive repo changed the default settings for this linter.
We use this through golangci-lint.

This change meant lots of errors appearing all at once. We should
probably fix these in due course, but for the time being this disables
those settings.

See: https://github.com/mgechev/revive/pull/799
2023-03-20 15:26:21 +00:00
asdffdsazqqq
5f07113a4b docs: install: how to uninstall rclone via winget 2023-03-20 14:51:42 +00:00
Richard Tweed
6a380bcc67 build: fix dockerfile reference in beta image pipeline 2023-03-20 11:54:31 +00:00
dependabot[bot]
97276ce765 build(deps): bump google.golang.org/api from 0.112.0 to 0.114.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.112.0 to 0.114.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.112.0...v0.114.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 16:25:06 +00:00
dependabot[bot]
a23a7a807f build(deps): bump github.com/klauspost/compress from 1.16.0 to 1.16.3
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.16.0 to 1.16.3.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.16.0...v1.16.3)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 16:24:20 +00:00
dependabot[bot]
c6a4caaf7e build(deps): bump goftp.io/server
Bumps goftp.io/server from 0.4.2-0.20210615155358-d07a820aac35 to 1.0.0-rc1.

---
updated-dependencies:
- dependency-name: goftp.io/server
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 16:23:37 +00:00
dependabot[bot]
5574733dcb build(deps): bump github.com/oracle/oci-go-sdk/v65
Bumps [github.com/oracle/oci-go-sdk/v65](https://github.com/oracle/oci-go-sdk) from 65.32.0 to 65.32.1.
- [Release notes](https://github.com/oracle/oci-go-sdk/releases)
- [Changelog](https://github.com/oracle/oci-go-sdk/blob/master/CHANGELOG.md)
- [Commits](https://github.com/oracle/oci-go-sdk/compare/v65.32.0...v65.32.1)

---
updated-dependencies:
- dependency-name: github.com/oracle/oci-go-sdk/v65
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 16:22:54 +00:00
dependabot[bot]
49c21d0b6e build(deps): bump github.com/aws/aws-sdk-go from 1.44.218 to 1.44.223
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.218 to 1.44.223.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.218...v1.44.223)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 16:22:08 +00:00
eNV25
0ea2ce3674 cmd/ncdu: fix screen corruption when logging
Before this change if logs were not redirected, logging would
corrupt the terminal screen.

This commit stores the logs (max ~100 lines) in an array and
prints them when the program exits.
2023-03-17 14:52:34 +00:00
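
For illustration, a minimal Go sketch of this approach (names assumed, not the
actual ncdu code): collect log writes in a bounded buffer while the TUI owns
the screen, then flush on exit:

    package main

    import "fmt"

    const maxLines = 100

    // logBuffer is an io.Writer that keeps at most maxLines entries.
    type logBuffer struct{ lines []string }

    func (b *logBuffer) Write(p []byte) (int, error) {
        if len(b.lines) < maxLines {
            b.lines = append(b.lines, string(p))
        }
        return len(p), nil
    }

    func main() {
        var buf logBuffer
        fmt.Fprintf(&buf, "scanned %d entries\n", 42)
        // ... run the TUI; nothing is written to the screen ...
        for _, line := range buf.lines { // flush when the program exits
            fmt.Print(line)
        }
    }
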
dependabot[bot]
3ddf824251 build(deps): bump actions/setup-go from 3 to 4
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-17 14:41:49 +00:00
Nick Craig-Wood
68fdff3c27 build: ensure users with no secrets (dependabot) don't run android upload step 2023-03-17 14:04:46 +00:00
Nick Craig-Wood
c003485ae3 build: ensure users with no secrets (dependabot) don't run deploy step 2023-03-17 13:49:11 +00:00
Nick Craig-Wood
99d5080191 Add Richard Tweed to contributors 2023-03-17 13:49:11 +00:00
alankrit
2ad217eedd librclone: added example on using librclone with Go. 2023-03-17 12:00:27 +00:00
albertony
a3eb7f1142 jottacloud: fix vfs writeback stuck in a failed upload loop with file versioning disabled
Avoid returning error when option no_versions and remove fail

Fixes #6857
2023-03-17 11:54:43 +00:00
Richard Tweed
6d620b6d88 build: update docker beta build to latest actions and to push to ghcr
* Add ghcr option for docker images
* Update to use the upstream build actions
* Add ability to push beta images manually.
2023-03-17 11:54:01 +00:00
Arnav Singh
9f8357ada7 sftp: fix using key_use_agent and key_file together needing private key file
When using ssh-agent to hold multiple keys, it is common practice to configure
openssh to use a specific key by setting the corresponding public key as
the `IdentityFile`. This change makes a similar behavior possible in rclone
by having it parse the `key_file` config as the public key when
`key_use_agent` is `true`.

rclone already attempted this behavior before this change, but it assumed that
`key_file` is the private key and that the public key is specified in
`${key_file}.pub`. So for parity with the openssh behavior, this change makes
rclone first attempt to read the public key from `${key_file}.pub` as before
(for the sake of backward compatibility), then fall back to reading it from
`key_file`.

Fixes #6791
2023-03-17 11:44:19 +00:00
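
For illustration, a minimal Go sketch of the fallback order described above
(helper name assumed, not rclone's actual code), using golang.org/x/crypto/ssh:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // loadPublicKey tries ${keyFile}.pub first (backward compatible),
    // then keyFile itself, matching the openssh IdentityFile behaviour.
    func loadPublicKey(keyFile string) (ssh.PublicKey, error) {
        for _, path := range []string{keyFile + ".pub", keyFile} {
            b, err := os.ReadFile(path)
            if err != nil {
                continue // try the next candidate
            }
            if pub, _, _, _, err := ssh.ParseAuthorizedKey(b); err == nil {
                return pub, nil
            }
        }
        return nil, fmt.Errorf("no usable public key for %q", keyFile)
    }

    func main() {
        pub, err := loadPublicKey(os.ExpandEnv("$HOME/.ssh/id_ed25519"))
        fmt.Println(pub, err)
    }
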
Nick Craig-Wood
e5a1bcb1ce s3: fix InvalidRequest copying to a locked bucket from a source with no MD5SUM
Before this change, we would upload files as single part uploads even
if the source MD5SUM was not available.

AWS won't let you upload a file to a locked bucket without some sort
of hash protection of the upload which we don't have with no MD5SUM.

So we switch to multipart upload when the source does not have an
MD5SUM.

This means that if --s3-disable-checksum is set or we are copying from
a source with no MD5SUMs we will copy with multipart uploads.

This patch changes all uploads, not just those to locked buckets
because having no MD5SUM protection on uploads is undesirable.

Fixes #6846
2023-03-17 11:34:20 +00:00
Nick Craig-Wood
46484022b0 fs: add size to JSON logs when moving or copying an object #6849 2023-03-17 11:22:57 +00:00
Nick Craig-Wood
ab746ef891 Add Thibault Coupin to contributors 2023-03-17 11:22:57 +00:00
Paul
6241c1ae43 Add devnoname120 to contributors 2023-03-17 11:09:08 +00:00
Paul
0f8d3fe6a3 webdav: add support for chunked uploads - fixes #3666
Co-authored-by: Thibault Coupin <thibault.coupin@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2023-03-17 11:09:08 +00:00
Paul
07afb9e700 webdav: add chunking helper file 2023-03-17 11:09:08 +00:00
Thibault Coupin
3165093feb fstests: add option to skip chunked upload 2023-03-17 11:09:08 +00:00
Paul
4af0c1d902 rest: add optional GetBody function for HTTP call 2023-03-17 11:09:08 +00:00
Nick Craig-Wood
82f9554474 docs: note that rcat will retry chunks when multipart uploading
See: https://forum.rclone.org/t/the-rclone-rcat-reliability-for-the-uploading-files-to-s3/36830
2023-03-17 10:52:21 +00:00
Nick Craig-Wood
d8d53b7aa0 Add Christopher Merry to contributors 2023-03-17 10:52:21 +00:00
Nick Craig-Wood
8c9048259a Add Arnavion to contributors 2023-03-17 10:52:21 +00:00
Christopher Merry
0361acbde4 googlecloudstorage: added gcs requester pays 2023-03-16 17:13:37 +00:00
Aaron Gokaslan
f5bf0a48f3 uptobox: fix improper regex 2023-03-16 17:12:27 +00:00
albertony
cec843dd8c build: run workflow even if tag/branch name contains slash 2023-03-16 17:07:07 +00:00
Anthony Pessy
54a9488e59 s3: add GCS to provider list 2023-03-16 14:24:21 +00:00
Arnavion
29fe0177bd webdav: add "fastmail" provider for Fastmail Files
This provider:

- supports the `X-OC-Mtime` header to set the mtime

- calculates SHA1 checksum server side and returns it as a `ME:sha1hex` prop

To differentiate the new hasMESHA1 quirk, the existing hasMD5 and hasSHA1
quirks for Owncloud have been renamed to hasOCMD5 and hasOCSHA1.

Fixes #6837
2023-03-16 14:20:29 +00:00
Nick Craig-Wood
0e134364ac Changelog updates from Version v1.62.2 2023-03-16 12:00:06 +00:00
Lesmiscore
0d8350d95d ftp: fix 426 errors on downloads with vsftpd
Sometimes vsftpd returns a 426 error when closing the stream even when
all the data has been transferred successfully. This is some TLS
protocol mismatch.

Rclone has code to deal with this already, but the error returned from
Close was wrapped in a multierror so the detection didn't work.

This properly extracts `textproto.Error` from the errors returned by
`github.com/jlaffaye/ftp` in all cases.

See: https://forum.rclone.org/t/vsftpd-vs-rclone-part-2/36774
2023-03-15 18:09:29 +00:00
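
For illustration, a minimal Go sketch of the extraction described above (not
the actual backend code): errors.As finds a *textproto.Error even when it has
been wrapped:

    package main

    import (
        "errors"
        "fmt"
        "net/textproto"
    )

    // is426 reports whether err contains an FTP 426 status code,
    // however deeply it has been wrapped.
    func is426(err error) bool {
        var tpErr *textproto.Error
        return errors.As(err, &tpErr) && tpErr.Code == 426
    }

    func main() {
        err := fmt.Errorf("closing stream: %w",
            &textproto.Error{Code: 426, Msg: "Failure writing network stream."})
        fmt.Println(is426(err)) // true
    }
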
asdffdsazqqq
497e373e31 docs: fix size documentation
change `Google Drive` to `Google Docs`
2023-03-15 16:21:37 +00:00
Nick Craig-Wood
ed8fea4aa5 docker volume plugin: add missing fuse3 dependency #6844 2023-03-15 15:57:53 +00:00
Nick Craig-Wood
4d7f75dd76 Changelog updates from Version v1.62.1 2023-03-15 14:53:21 +00:00
Nick Craig-Wood
53e757aea9 build: update release docs to be more careful with the tag 2023-03-15 14:53:21 +00:00
Nick Craig-Wood
f578896745 Set Github release to draft while uploading binaries 2023-03-15 14:53:21 +00:00
Nick Craig-Wood
13be03cb86 Add cycneuramus to contributors 2023-03-15 14:53:21 +00:00
cycneuramus
864e02409e docker: add missing fuse3 dependency - fixes #6844 2023-03-15 10:54:30 +00:00
Nick Craig-Wood
fccc779a15 Start v1.63.0-DEV development 2023-03-14 15:18:54 +00:00
Nick Craig-Wood
77c7077458 Version v1.62.0 2023-03-14 12:42:23 +00:00
Nick Craig-Wood
ffd4ab222c docs: add idrive e2 as a major sponsor 2023-03-14 12:37:34 +00:00
Nick Craig-Wood
676277e255 docs: move FUSE-T docs from auto generated file to source file
Docs committed in the wrong place in

c0a5283416 docs: rclone mount on macOS with macFUSE and  FUSE-T
2023-03-14 12:37:34 +00:00
Justin Winokur
c0a5283416 docs: rclone mount on macOS with macFUSE and FUSE-T 2023-03-13 10:55:39 +00:00
Nick Craig-Wood
e405ca7733 vfs: make uploaded files retain modtime with non-modtime backends
Before this change if a file was uploaded to a backend which didn't
support modtimes, the time of the file read after the upload had
completed would change to the time the file was uploaded on the
backend.

When using `--vfs-cache-mode writes` or `full` this time would be
different by the `--vfs-write-back` delay which would cause
applications to think the file had been modified.

This change uses the last modification time read by the OS as a
virtual modtime for backends which don't support setting modtimes. It
does not change the modtime to that actually uploaded.

This means that as long as the file remains in the directory cache it
will have the expected modtime.

See: https://forum.rclone.org/t/saving-files-causes-wrong-modified-time-to-be-set-for-a-few-seconds-on-webdav-mount-with-bitrix24/36451
2023-03-10 15:00:01 +00:00
Nick Craig-Wood
580d72f0f6 operations: skip --max-delete tests on chunker integration tests
The recent changes to remove race conditions from --max-delete have
made these tests fail on chunker with s3 because they do copy then
delete and the deletes are being counted in the --max-delete(-size)
counts.
2023-03-10 12:13:44 +00:00
Nick Craig-Wood
22daeaa6f3 build: update dependencies
This fixes the azureblob backend so it builds again after the SDK
changes.

This doesn't update bazil.org/fuse because it doesn't build on FreeBSD

https://github.com/bazil/fuse/issues/295
2023-03-10 11:15:07 +00:00
Nick Craig-Wood
ca9ad7935a Add dependabot[bot] to contributors 2023-03-10 11:15:07 +00:00
Nick Craig-Wood
dd6e229327 move: if --check-first and --order-by are set then delete with perfect ordering
If using rclone move and --check-first and --order-by then rclone uses
the transfer routine to delete files to ensure perfect ordering.

This will cause the transfer stats to have a larger than expected
number of items in it so we don't enable this by default.

Fixes #6033
2023-03-10 08:23:32 +00:00
dependabot[bot]
4edcd16f5f build(deps): bump github.com/gdamore/tcell/v2 from 2.5.4 to 2.6.0
Bumps [github.com/gdamore/tcell/v2](https://github.com/gdamore/tcell) from 2.5.4 to 2.6.0.
- [Release notes](https://github.com/gdamore/tcell/releases)
- [Changelog](https://github.com/gdamore/tcell/blob/main/CHANGESv2.md)
- [Commits](https://github.com/gdamore/tcell/compare/v2.5.4...v2.6.0)

---
updated-dependencies:
- dependency-name: github.com/gdamore/tcell/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 18:38:47 +00:00
dependabot[bot]
534e3acd06 build(deps): bump github.com/iguanesolutions/go-systemd/v5
Bumps [github.com/iguanesolutions/go-systemd/v5](https://github.com/iguanesolutions/go-systemd) from 5.1.0 to 5.1.1.
- [Release notes](https://github.com/iguanesolutions/go-systemd/releases)
- [Commits](https://github.com/iguanesolutions/go-systemd/compare/v5.1.0...v5.1.1)

---
updated-dependencies:
- dependency-name: github.com/iguanesolutions/go-systemd/v5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 18:38:04 +00:00
dependabot[bot]
cf75ddabd3 build(deps): bump golang.org/x/term from 0.5.0 to 0.6.0
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.5.0 to 0.6.0.
- [Release notes](https://github.com/golang/term/releases)
- [Commits](https://github.com/golang/term/compare/v0.5.0...v0.6.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 18:37:23 +00:00
dependabot[bot]
6edcacf932 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.2.0 to 1.2.2.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/sdk/azidentity/v1.2.2/CHANGELOG.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/v1.2...sdk/azidentity/v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 18:36:23 +00:00
dependabot[bot]
51506a7ccd build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azcore
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azcore](https://github.com/Azure/azure-sdk-for-go) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.3.0...sdk/azcore/v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azcore
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-09 18:33:40 +00:00
Ryan Caezar Itang
a50fd2a2a2 ci: add dependabot 2023-03-09 15:05:15 +00:00
Ryan Caezar Itang
efac7e18fb ci: add winget releaser workflow 2023-03-09 14:56:37 +00:00
Ryan Caezar Itang
02dd8eacea docs: add winget installation method 2023-03-09 14:56:37 +00:00
Nick Craig-Wood
e2984227bb fs: fix race conditions in --max-delete and --max-delete-size 2023-03-09 09:25:31 +00:00
Nick Craig-Wood
a35ee30d9f Add Leandro Sacchet to contributors 2023-03-09 09:25:31 +00:00
Leandro Sacchet
f689db4422 fs: Add --max-delete-size a delete size threshold
Fixes #3329
2023-03-08 17:12:31 +00:00
Nick Craig-Wood
fb4600f6f9 tree: fix display of files with illegal Windows file system names
Before this change, files with illegal Windows names (eg those
containing \) would not be displayed properly in tree.

This change adds the local encoding to the Windows file names so \
will be displayed as its wide unicode equivalent.

See: https://forum.rclone.org/t/error-with-build-v1-61-1-tree-command-panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/35922/
2023-03-07 15:30:11 +00:00
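
For illustration, a minimal Go sketch of the mapping described above (the
replacement table is illustrative; rclone's lib/encoder covers many more
characters):

    package main

    import (
        "fmt"
        "strings"
    )

    // toWide maps characters that are illegal in Windows file names
    // to fullwidth Unicode equivalents so they can still be displayed.
    var toWide = strings.NewReplacer(
        `\`, "＼", // U+FF3C FULLWIDTH REVERSE SOLIDUS
        ":", "：", // U+FF1A FULLWIDTH COLON
    )

    func main() {
        fmt.Println(toWide.Replace(`dir\file:name`)) // dir＼file：name
    }
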
Nick Craig-Wood
1d0c75b0c2 ftp: retry errors when initiating downloads
This adds a retry loop to the Open() call in the FTP server so it can
retry failures opening files.

This should make downloading multipart files more reliable.

See: https://forum.rclone.org/t/downloads-fail-from-remote-server-error-426-failure-writing-network-stream/33839/
2023-03-07 12:34:20 +00:00
Nick Craig-Wood
2e435af4de ftp: retry 426 errors
Before this change we didn't retry 426 errors which are

    426 Connection closed; transfer aborted.

Or in this particular case

    426 Failure writing network stream.

These seem like they might be temporary so retry them.

See: https://forum.rclone.org/t/downloads-fail-from-remote-server-error-426-failure-writing-network-stream/33839/
2023-03-07 12:34:20 +00:00
Nick Craig-Wood
62a7765e57 smb: allow SPN (service principal name) to be configured
This enables connection to clusters.

Fixes #6515
2023-03-07 12:18:32 +00:00
Nick Craig-Wood
5ad942ed87 local: fix exclusion of dangling symlinks with -L/--copy-links
Before this fix, a dangling symlink caused the sync to error. It was
writing an ERROR log and causing rclone to exit with an error. The
List method wasn't returning an error though.

This fix makes sure that we don't log or report a global error on a
file/directory that has been excluded.

This feature was first implemented in:

a61d219bc local: fix -L/--copy-links with filters missing directories

Then fixed in:

8d1fff9a8 local: obey file filters in listing to fix errors on excluded files

This commit also adds test cases for the failure modes of those commits.

See #6376
2023-03-07 12:15:10 +00:00
Nick Craig-Wood
96609e3d6e ftp: revert to upstream github.com/jlaffaye/ftp now fix is merged
This reverts to using the upstream now the patch to fix hang when
using ExplicitTLS to certain servers is merged.

Fixes #6426
2023-03-07 12:12:07 +00:00
Nick Craig-Wood
28a8ebce5b vfs: fix rename of directory containing files to be uploaded
Before this change, if you renamed a directory containing files yet to
be uploaded and then deleted the directory, the files would still be
uploaded.

This fixes the problem by changing the directory path in all the file
objects in a directory when it is renamed. This wasn't necessary until
we introduced virtual files and directories which lived beyond the
directory flush mechanism.

Fixes #6809
2023-03-07 11:40:50 +00:00
Nick Craig-Wood
17854663de vfs: log size of File and Dir in tests for optimization 2023-03-07 11:40:50 +00:00
Nick Craig-Wood
a4a6b5930a Add Peter Brunner to contributors 2023-03-07 11:40:50 +00:00
Nick Craig-Wood
e9ae620844 Add Ryan Caezar Itang to contributors 2023-03-07 11:40:50 +00:00
Nick Craig-Wood
e7cfb8ad8e Add Ninh Pham to contributors 2023-03-07 11:40:50 +00:00
Nick Craig-Wood
786a1c212c Add Peter Brunner to contributors 2023-03-07 11:40:50 +00:00
Peter Brunner
03bc270730 gcs: fix google cloud storage provider help 2023-03-07 11:39:02 +00:00
Ryan Caezar Itang
7cef042231 docs: add scoop installation method 2023-03-07 11:36:07 +00:00
Ninh Pham
1155cc0d3f drive: make --drive-stop-on-upload-limit respond to storageQuotaExceeded
Before this change, if a "--drive-stop-on-upload-limit" was set,
rclone would not stop the upload if a "storageQuotaExceeded" error occurred.

This fix now checks for the "storageQuotaExceeded" error
and "--drive-stop-on-upload-limit", and fails fast.
2023-03-07 11:00:08 +00:00
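
For illustration, a minimal Go sketch of the reason check described above
(shape assumed, not rclone's actual code), using google.golang.org/api/googleapi:

    package main

    import (
        "errors"
        "fmt"

        "google.golang.org/api/googleapi"
    )

    // isStorageQuotaExceeded reports whether err is a Drive API error
    // carrying the storageQuotaExceeded reason, so the caller can fail fast.
    func isStorageQuotaExceeded(err error) bool {
        var gerr *googleapi.Error
        if !errors.As(err, &gerr) {
            return false
        }
        for _, item := range gerr.Errors {
            if item.Reason == "storageQuotaExceeded" {
                return true
            }
        }
        return false
    }

    func main() {
        err := &googleapi.Error{Code: 403,
            Errors: []googleapi.ErrorItem{{Reason: "storageQuotaExceeded"}}}
        fmt.Println(isStorageQuotaExceeded(err)) // true
    }
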
Peter Brunner
13c3f67ab0 gcs: add env_auth to pick up IAM credentials from env/instance
This change provides the ability to pass `env_auth` as a parameter to
the google cloud storage provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
2023-03-06 18:18:33 +00:00
Nick Craig-Wood
ab2cdd840f serve ftp: fix timestamps older than 1 year in listings
Fixes #6785
2023-03-06 15:59:56 +00:00
Nick Craig-Wood
143285e2b7 vfs: fix incorrect modtime on fs which don't support setting modtime
Before this change we were using the Precision literally to round the
precision of the mod times.

However fs.ModTimeNotSupported is 100y on backends which don't support
setting modtimes so rounding to 100y was producing very strange
results.

See: https://forum.rclone.org/t/saving-files-causes-wrong-modified-time-to-be-set-for-a-few-seconds-on-webdav-mount-with-bitrix24/36451/
2023-03-06 10:54:21 +00:00
Nick Craig-Wood
19e8c8d42a s3: make purge remove directory markers too
See: https://forum.rclone.org/t/cannot-purge-aws-s3/36169/
2023-03-03 15:51:00 +00:00
Nick Craig-Wood
de9c4a3611 s3: use bucket.Join instead of path.Join to preserve paths
Before this change, path.Join would remove the trailing / from objects
which had them. The simplified bucket.Join does not.
2023-03-03 15:51:00 +00:00
Nick Craig-Wood
d7ad13d929 bucket: add Join function for a simplified path.Join 2023-03-03 15:51:00 +00:00
albertony
f9d50f677d lib/terminal: enable windows console virtual terminal sequences processing (ANSI/VT100 colors)
This ensures the virtual terminal processing mode is enabled on the rclone process
for Windows 10 consoles (by using Windows Console API functions GetConsoleMode/SetConsoleMode
and flag ENABLE_VIRTUAL_TERMINAL_PROCESSING), which adds native support for ANSI/VT100
escape sequences. This mode is default in many cases, e.g. when using the Windows
Terminal application, but in other cases it is not, and the default can also be
controlled with a registry setting (see below), and therefore configuring it on the process
seems to be the only reliable way of ensuring it is enabled when supported.

[HKEY_CURRENT_USER\Console]
"VirtualTerminalLevel"=dword:00000001
2023-03-03 12:37:01 +01:00
albertony
3641993fab tree: fix colored output on windows
Since rclone version 1.61.0 the tree command uses ANSI color sequences in output by
default, but this led to issues in Windows terminals that were not handling these (#6668).

This commit ensures the tree command uses the terminal package for output. It relies on
go-colorable to properly handle ANSI color sequences: If stdout is connected to a terminal
the escape sequences are decoded and the text are written with color formatting using
Windows Console API. If stdout is not connected to a terminal, e.g. redirected to file,
the escape sequences are stripped off. The tree command has its own method for writing
directly to a file, specified with flag --output, and then the output is not passed
through the terminal package and must therefore be written without ANSI codes.
2023-03-03 12:37:01 +01:00
Nick Craig-Wood
93d3ae04c7 deletefile: return error code 4 if file does not exist
Before this change `rclone deletefile` would return error code 1 if
the file it was trying to delete did not exist.

Rclone can't actually tell at this point whether the file doesn't
exist or what you tried to delete is a directory, but it seems more
logical to return error code 4 "object not found" here.

See: https://forum.rclone.org/t/rclone-deletefile-cmd-return-exit-code-1-when-file-not-found-in-remote-why-1-and-not-exit-code-4/
2023-03-03 09:51:23 +00:00
Nick Craig-Wood
e25e9fbf22 Add NodudeWasTaken to contributors 2023-03-03 09:51:23 +00:00
NodudeWasTaken
fe26d6116d mega: add --mega-use-https flag
Some ISPs throttle HTTP which MEGA uses by default, so some users may find using HTTPS beneficial.
2023-03-02 20:28:10 +00:00
Fred
06e1e18793 seafile: fix for flaky tests #6799 2023-03-02 20:03:25 +00:00
Nick Craig-Wood
23d17b76be onedrive: default onedrive personal to QuickXorHash
Before this change the hash used for Onedrive Personal was SHA1. From
July 2023 Microsoft is phasing out SHA1 hashes in favour of
QuickXorHash in Onedrive Personal. Onedrive Business and Sharepoint
remain using QuickXorHash as before.

This choice can be changed using the --onedrive-hash-type flag (and
config option) so that SHA1 can be selected while it is still
available in the transition period.

See: https://forum.rclone.org/t/microsoft-is-switching-onedrive-personal-to-quickxorhash-from-sha1/36296/
2023-03-02 19:32:35 +00:00
Nick Craig-Wood
dfe4e78a77 onedrive: add --onedrive-hash-type to change the hash in use
In preparation for Microsoft removing the SHA1 hash on OneDrive
Personal this allows the hash type to be set on OneDrive.

See: https://forum.rclone.org/t/microsoft-is-switching-onedrive-personal-to-quickxorhash-from-sha1/36296/
2023-03-02 19:32:35 +00:00
Nick Craig-Wood
59e7982040 s3: add --s3-sts-endpoint to specify STS endpoint
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
2023-03-02 09:56:09 +00:00
Nick Craig-Wood
c6b0587dc0 s3: fix AWS STS failing if --s3-endpoint is set
Before this change if an --s3-profile was set which used AWS STS (eg
to assume a role) and --s3-endpoint was set then rclone would use the
value from --s3-endpoint to contact the STS server which did not work.

This fix implements an endpoint resolver which only overrides the "s3"
service if --s3-endpoint is set. It sends the "sts" service (and any
other service) to the default resolver.

Fixes #6443
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
2023-03-01 16:24:40 +00:00
Nick Craig-Wood
9baa4d1c3c accounting: show checking tag if available even on transfers 2023-03-01 11:10:38 +00:00
Nick Craig-Wood
a5390dbbeb sync,operations: fix concurrency: use --checkers unless transferring files
There were some places (e.g. deleting files) where we were using
--transfers instead of --checkers to control the concurrency when
files weren't being transferred.

These have been updated to use --checkers.
2023-03-01 11:10:38 +00:00
Nick Craig-Wood
019a486d5b accounting: Make checkers show what they are doing
Before this change, all types of checkers showed "checking" after the
file name despite the fact that not all of them were checking.

After this change, they can show

- checking
- deleting
- hashing
- importing
- listing
- merging
- moving
- renaming

See: https://forum.rclone.org/t/what-is-rclone-checking-during-a-purge/35931/
2023-03-01 11:10:38 +00:00
Nick Craig-Wood
34ce11d2be Add ToBeFree to contributors 2023-03-01 11:10:38 +00:00
Nick Craig-Wood
88e8ede0aa Add Gerard Bosch to contributors 2023-03-01 11:10:38 +00:00
Nick Craig-Wood
f6f250c507 Add logopk to contributors 2023-03-01 11:10:38 +00:00
Nick Craig-Wood
2c45e901f0 Add Hunter Wittenborn to contributors 2023-03-01 11:10:38 +00:00
ToBeFree
9e1443799a docs: crypt: fix typo 2023-02-28 11:50:53 +00:00
Gerard Bosch
dd72aff98a docs: bisync: clarification of --resync 2023-02-28 11:47:28 +00:00
logopk
5039f9be48 docker: fix volume plugin not remounting volume on docker restart
docker volume plugin restoreState: skip fs option if empty

Fixes #6769
Co-authored-by: Peter Kreuser <logo@kreuser.name>
2023-02-28 11:29:07 +00:00
Hunter Wittenborn
56b582cdb9 authorize: add support for custom templates
This adds support for providing custom Go templates for use in the
`rclone authorize` command.

Fixes #6741
2023-02-24 15:08:38 +00:00
Aaron Gokaslan
745c0af571 all: Apply codeql fixes 2023-02-23 10:31:51 +00:00
Nick Craig-Wood
2dabbe83ac serve http: tests for --auth-proxy 2023-02-23 10:28:13 +00:00
Nick Craig-Wood
90561176fb Add Matthias Baur to contributors 2023-02-23 10:28:13 +00:00
Matthias Baur
a0b5d77427 serve http: support --auth-proxy 2023-02-22 14:55:24 +00:00
Manoj Ghosh
ce8b1cd861 oracle-object-storage: bring your own encryption keys 2023-02-21 14:45:02 +00:00
Manoj Ghosh
5bd6e3d1e9 fix vulnerabilities: upgrade golang.org/x/net@v0.5.0 to golang.org/x/net@v0.7.0 2023-02-21 10:11:16 +00:00
Nick Craig-Wood
d4d7a6a55e sftp: fix uploads being 65% slower than they should be with crypt
The block size for crypt is 64k + a few bytes. The default block size
for sftp is 32k. This means that the blocks for crypt get split over 3
sftp packets two of 32k and one of a few bytes.

However due to a bug in pkg/sftp it was sending 32k instead of just a
few bytes, leading to the 65% slowdown.

This was fixed in the upstream library.

This bug probably affected transfers from over the network sources
also.

Fixes #6763
See: https://github.com/pkg/sftp/pull/537
2023-02-14 15:47:19 +00:00
Nick Craig-Wood
b3e0672535 s3: Check multipart upload ETag when --s3-no-head is in use
Before this change if --s3-no-head was in use rclone didn't check the
multipart upload ETag at all. However the ETag is returned in the
final POST request when completing the object.

This change uses that ETag from the final POST if --s3-no-head is in
use, otherwise it uses the ETag from a fresh HEAD request.

See: https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095/
2023-02-14 12:04:28 +00:00
Nick Craig-Wood
a407437e92 Add Simmon Li (he/him) to contributors 2023-02-14 12:04:28 +00:00
Manoj Ghosh
0164a4e686 add more documentation around oci authentication ways 2023-02-14 11:58:38 +00:00
Simmon Li (he/him)
b8ea79042c docs: drive: make clear "testing" apps have short token grant time 2023-02-13 14:30:20 +00:00
albertony
49a6533bc1 docs/mount: improve explanation of windows filesystem permissions 2023-02-10 23:21:33 +01:00
Nick Craig-Wood
21459f3cc0 tree: fix nil pointer exception on stat failure
This fixes the crash by updating the upstream.

See: https://forum.rclone.org/t/error-with-build-v1-61-1-tree-command-panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/35922/
See: https://github.com/a8m/tree/pull/21
2023-02-08 16:21:25 +00:00
albertony
04f7e52803 accounting: show human readable elapsed time when longer than a day - fixes #6748 2023-02-06 15:02:03 +01:00
Kaloyan Raev
25535e5eac storj: update satellite urls and labels
The docs and setup wizard still contained deprecated URLs and labels of
Storj satellites. This change updates them.
2023-02-06 13:18:15 +00:00
Nick Craig-Wood
c37b6b1a43 cache: fix lint error in latest golangci-lint 2023-02-06 10:44:40 +00:00
albertony
0328878e46 accounting: limit length of ETA string
No need to report hours, minutes, and even seconds when the
ETA is several years, e.g. "292y24w3d23h47m16s". Now only
reports the 3 most significant units, sacrificing precision,
e.g. "292y24w3d", "24w3d23h", "3d23h47m", "23h47m16s".

Fixes #6381
2023-02-04 17:29:08 +01:00
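
For illustration, a minimal Go sketch of truncating a formatted ETA to its
three most significant units (illustrative, not the accounting package's
actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    var unitRe = regexp.MustCompile(`[0-9]+[a-z]+`)

    // shortenETA keeps only the `keep` most significant units of a
    // duration string such as "292y24w3d23h47m16s".
    func shortenETA(eta string, keep int) string {
        units := unitRe.FindAllString(eta, -1)
        if len(units) > keep {
            units = units[:keep]
        }
        return strings.Join(units, "")
    }

    func main() {
        fmt.Println(shortenETA("292y24w3d23h47m16s", 3)) // 292y24w3d
        fmt.Println(shortenETA("23h47m16s966ms", 3))     // 23h47m16s
    }
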
albertony
67132ecaec accounting: avoid negative ETA values for very slow speeds
Integer overflow would lead to ETA such as "-255y7w4h11m22s966ms",
as reported in #6381. Now the value will be clipped at the maximum
"292y24w3d23h47m16s", and it will be shown as infinity.
2023-02-04 17:29:08 +01:00
albertony
120cfcde70 install.sh: fix arm-v6 download 2023-02-04 13:32:26 +01:00
albertony
37db2a0e44 selfupdate: consider arm version 2023-02-04 13:32:26 +01:00
albertony
f92816899c version: report arm version 2023-02-04 13:32:26 +01:00
albertony
5386ffc8f2 build: correct building for ARMv5 and ARMv6
Explicitly set ARM version in GOARM build variable, to avoid relying
on some default value which differs when compiling natively and when
cross-compiling, and which is also incorrectly documented as being
6 when in reality it is 5.

Fix incorrect labelling of ARMv5 builds as ARMv6, and change
architecture of .rpm and .deb packages containing them to
match.

Add ARMv6 builds, to complement existing ARMv5 and ARMv7, and to
reduce disruption due to previous ARMv5 builds incorrectly being
identified as ARMv6, and to provide .rpm and .deb packages with the
same ARMv6 architectures as was previously also published
(then containing ARMv5 binaries).

See #6528

Background info:

https://github.com/golang/go/wiki/GoArm
https://go.dev/doc/install/source#environment
661e931dd1/src/cmd/dist/build.go (L140-L144)
661e931dd1/src/cmd/dist/util.go (L392-L422)
2023-02-04 13:32:26 +01:00
Anagh Kumar Baranwal
3898d534f3 build: update to go1.20
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-02-03 20:15:15 +00:00
Ole Frost
34333d9fa8 docs: added troubleshooting tips for Live Photos in OneDrive 2023-02-03 16:24:30 +00:00
Ole Frost
14e852ee9d s3: fix incorrect tier support for StorJ and IDrive when pointing at a file
Fixes #6734
2023-02-02 18:12:00 +00:00
albertony
37623732c6 build: avoid running workflow twice for pull requests with branch on main repo 2023-02-01 16:47:38 +01:00
Nick Craig-Wood
adbcc83fa5 filter: emit INFO message when can't work out directory filters
See: https://forum.rclone.org/t/rclone-scans-unwanted-folder/34437
2023-02-01 14:21:45 +00:00
Nick Craig-Wood
d4ea6632ca drive: note that --drive-acknowledge-abuse needs SA Manager permission
See: https://github.com/rclone/rclone/issues/2338#issuecomment-762820600
See: https://forum.rclone.org/t/bisync-already-add-drive-acknowledge-abuse-still-got-critical-error-cannotdownloadabusivefile/35604/
2023-02-01 12:11:46 +00:00
Nick Craig-Wood
21849fd0d9 webdav: fix interop with davrods server
The davrods server returns URLs with a double / in them and the // confuses
rclone into thinking these files are in a directory called "".

The fix removes leading /s from the directory listing names.

See: https://forum.rclone.org/t/upload-to-webdav-does-not-check-if-files-already-exist/35756/
2023-02-01 12:00:25 +00:00
Nick Craig-Wood
ac20ee41ca Add happyxhw to contributors 2023-02-01 12:00:25 +00:00
happyxhw
d376fb1df2 smb: check smb connection is closed - fixes #6735 2023-02-01 08:25:25 +01:00
Nick Craig-Wood
8e63a08d7f docs: note that we have test Android builds 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
3aee5b3c55 Add Simmon Li (he/him) to contributors 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
0145d98314 Add LXY to contributors 2023-01-31 14:11:50 +00:00
Nick Craig-Wood
4c03c71a5f Add Bryan Kaplan to contributors 2023-01-31 14:11:50 +00:00
Simmon Li (he/him)
82e2801aae update drive.md
* Updates OAuth consent screen instructions to include adding scopes for backup purposes (create, edit and delete files).
* Updates instructions to keep app in testing mode (appropriate for most users). The previous instructions suggested this, but we don't need to "publish" the app at all in order to proceed with this step.
2023-01-27 15:25:17 +00:00
LXY
dc5d5de35c onedrive: improve speed of quickxorhash
This commit ports a fast C implementation from https://github.com/namazso/QuickXorHash

It uses new crypto/subtle code from go1.20 to avoid the use of unsafe.

Typical speedups are about 25x when using go1.20

    goos: linux
    goarch: amd64
    cpu: Intel(R) Celeron(R) N5105 @ 2.00GHz
    QuickXorHash-Before  2.49ms   422MB/s ±11%   100.00%
    QuickXorHash-Subtle  87.9µs 11932MB/s ± 5% +2730.83% + 42.17%

Co-Author: @namazso
2023-01-26 11:50:12 +00:00
Bryan Kaplan
41cc4530f3 docs: Improve bisync check-access & check-filename
This commit documents my learnings after having encountered a failure
I reported in the rclone forum[0].

I may be a fool for having failed to understand the previous
documentation, but I am likely not the only fool to get snared by it.

This commit therefore adds details to clarify what the user must do in
order to allow `--check-access` to succeed.

While at it, I've also added some basic documentation for `--check-filename`.

[0]: https://forum.rclone.org/t/bisync-check-file-check-failed/35682
2023-01-26 11:10:01 +00:00
albertony
c5acb10151 fspath: allow the symbols at and plus in remote names - fixes #6710 2023-01-25 13:37:24 +01:00
Manoj Ghosh
8c8ee9905c oracleobjectstorage: speed up operations by using S3 pacer and setting minsleep to 10ms
Uploading 100 files of 1 MB each took 20 seconds before. With the above fix it takes around 2 seconds now.

10x time improvement in line with pacer's sleep reduction from 100ms to 10ms
2023-01-25 10:48:16 +00:00
albertony
e2afd00118 mount: avoid incorrect or premature overlap check on windows
See: #6234
2023-01-24 22:27:02 +01:00
albertony
5b82576dbf build: fix condition for manual workflow run
See #5275
2023-01-24 20:46:33 +01:00
albertony
b9d9f9edb0 docs: use --interactive instead of -i in examples to avoid confusion 2023-01-24 20:43:51 +01:00
Bryan Kaplan
c40b706186 docs: Fix link in bisync doc
This commit fixes the `#check-access` anchor link in the bisync.md document.

`#check-access-option` does not exist in bisync.md; `#check-access` does.
2023-01-24 09:16:43 +01:00
Nick Craig-Wood
351fc609b1 b2: fix uploading files bigger than 1TiB
Before this change when uploading files bigger than 1TiB, the chunk
calculator would work out that the chunk size needed to be bigger than
the default 100 MiB to fit within the 10,000 parts limit.

However the uploader was still using the memory pool for the old chunk
size and this caused errors like

    panic: runtime error: slice bounds out of range [:122683392] with capacity 100663296

The fix for this is to make a temporary pool with the larger chunk
size and use it during the upload of the large file.

See: https://forum.rclone.org/t/rclone-cannot-complete-upload-to-b2-restarts-upload-frequently/35617/
2023-01-22 12:46:23 +00:00
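
For illustration, a minimal Go sketch of the chunk size calculation described
above (constants from B2's limits; the doubling strategy is assumed, not
rclone's exact algorithm):

    package main

    import "fmt"

    const (
        defaultChunkSize = 100 * 1024 * 1024 // 100 MiB
        maxParts         = 10000             // B2 part count limit
    )

    // chunkSize grows the chunk size until size fits in maxParts
    // parts. Buffers must then be allocated from a pool of this
    // size, not the default one.
    func chunkSize(size int64) int64 {
        cs := int64(defaultChunkSize)
        for size/cs >= maxParts {
            cs *= 2
        }
        return cs
    }

    func main() {
        const tib = int64(1) << 40
        fmt.Println(chunkSize(2 * tib)) // 419430400 (400 MiB chunks)
    }
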
Nick Craig-Wood
a6f6a9dcdf mount,mount2,cmount: fix --allow-non-empty #3562
Since version 3, libfuse no longer does anything when given the
nonempty option and its default is to allow mounting over non-empty
directories like normal mount does.

Some versions of libfuse give an error when using `--allow-non-empty`
which is annoying for the user.

We now do this check ourselves so we no longer need to pass the option
to libfuse.

Fixes #3562
2023-01-20 15:39:54 +00:00
Nick Craig-Wood
267a09001d mount: fix check for empty mount point on Linux #3562 2023-01-20 15:39:54 +00:00
Nick Craig-Wood
37db2abecd Add alankrit to contributors 2023-01-20 15:39:49 +00:00
albertony
0272d44192 mount: do not treat \\?\ prefixed paths as network share paths on windows
See: #6234
2023-01-20 15:40:03 +01:00
alankrit
6b17044f8e fs: added multiple CA certificate support. 2023-01-17 12:16:11 +00:00
Nick Craig-Wood
844e8fb8bd lib/errors: add support for unwrapping go1.20 multi errors 2023-01-17 11:35:19 +00:00
Nick Craig-Wood
ca9182d6ae Add IMTheNachoMan to contributors 2023-01-17 11:35:19 +00:00
IMTheNachoMan
ec20c48523 googlephotos: fix grammar in docs (#6699) 2023-01-16 13:40:30 +01:00
Nick Craig-Wood
ec68b72387 lib/file: fix error message test after go1.20 upgrade 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
2d1c2725e4 webdav: fix tests after go1.20 upgrade
Before this change we were sending webdav requests to the go http
FileServer. In go1.20 these (rightly) started returning errors which
caused the tests to fail.

The test has been changed to properly mock up an About query and
response so an end to end test of adding headers is possible.
2023-01-16 11:19:16 +00:00
Nick Craig-Wood
1680c5af8f build: update to go1.20rc3 and make go1.17 the minimum required version 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
88c0d78639 build: update to fuse3 after bazil.org/fuse update 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
559157cb58 azureblob: remove workarounds for SDK bugs after v0.6.1 update 2023-01-16 11:19:16 +00:00
Nick Craig-Wood
10bf8a769e build: update dependencies
This fixes the azureblob backend so it builds again after the SDK
changes.
2023-01-16 11:19:16 +00:00
Fred
f31ab6d178 seafile: renew library password - fixes #6662
Passwords for encrypted libraries are kept in memory in the server
and flushed after an hour.
This change fixes an issue where the library password expires after 1 hour.
2023-01-15 16:26:29 +00:00
Kaloyan Raev
f08bb5bf66 storj: implement purge 2023-01-15 16:23:49 +00:00
Manoj Ghosh
e2886aaddf oracle-object-storage: expose the storage_tier option in config 2023-01-15 16:20:55 +00:00
albertony
71227986db docs: remove link to nonexistent uploadfile command - fixes #6693 2023-01-12 20:13:02 +01:00
Nick Craig-Wood
8c6ff1fa7e cmount: fix creating and renaming files on case insensitive backends
Before this fix, we told cgofuse/WinFSP that the backend was case
insensitive but didn't implement the Getpath backend function to
return the normalised case of a file.

Recently cgofuse started implementing case insensitive files properly
but since we hadn't implemented Getpath, the file names were taking
the default of all in UPPER CASE.

This patch implements Getpath for cgofuse which fixes the case
problems.

This problem came to light when we upgraded cgofuse and WinFSP (to
1.12) which had the code to implement Getpath.

Fixes #6682
2023-01-11 17:21:57 +00:00
Nick Craig-Wood
9d1b786a39 Add Kaloyan Raev to contributors 2023-01-11 17:21:57 +00:00
Nick Craig-Wood
8ee0e2efb1 Add piyushgarg to contributors 2023-01-11 17:21:57 +00:00
Alex Chen
d66f5e8db0 lib/oauthutil: handle fatal errors better
PR #6678
2023-01-12 00:50:14 +08:00
Ole Frost
02d6d28ec4 crypt: fix for unencrypted directory names on case insensitive remotes
rclone sync erroneously deleted folders renamed to a different case on
crypts where directory name encryption was disabled and the underlying
remote was case insensitive.

Example: Renaming the folder Test to tEST before a sync to a crypt having
remote=OneDrive:crypt and directory_name_encryption=false could result in
the folder and all its content being deleted. The following sync would
correctly create the tEST folder and upload all of the content.

Additional tests have revealed other potential issues when using
filename_encryption=off or directory_name_encryption=false on case
insensitive remotes. The documentation has been updated to warn about
potential problems when using these combinations.
2023-01-11 16:32:40 +00:00
Kaloyan Raev
1cafc12e8c storj: implement public link 2023-01-10 17:40:04 +00:00
piyushgarg
98fa93f6d1 webdav: document mapping/accessing WebDAV shares on Windows.
Fixes #6596

Co-authored-by: Piyush <piyushgarg80>
2022-12-30 11:22:46 +00:00
albertony
c6c67a29eb Add Marks Polakovs to contributors 2022-12-26 18:39:49 +01:00
Marks Polakovs
ad5395e953 backend/local: fix %!w(<nil>) in "failed to read directory" error 2022-12-26 18:37:32 +01:00
Nick Craig-Wood
1925ceaade Changelog updates from Version v1.61.1 2022-12-23 18:26:56 +00:00
Nick Craig-Wood
8aebf12797 docs: fix unescaped HTML 2022-12-23 16:53:43 +00:00
Nick Craig-Wood
ffeefe8a56 crypt: obey --ignore-checksum
Before this change the crypt backend would calculate and check upload
checksums regardless of the setting of --ignore-checksum.
2022-12-23 16:52:19 +00:00
Nick Craig-Wood
81ce5e4961 docs: correct RELEASE procedure for stable branch 2022-12-23 12:34:04 +00:00
Nick Craig-Wood
638058ef91 lib/http: shutdown all servers on exit to remove unix socket
Before this change only serve http was shutting down its server, which
was causing other servers such as serve restic to leave behind their
unix sockets.

This change moves the finalisation to lib/http so all servers have it
and removes it from serve http.

Fixes #6648
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
b1b62f70d3 serve webdav: fix running duplicate Serve call
Before this change we were starting the server twice for webdav which
is inefficient and causes problems at exit.
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
823d89af9a serve restic: don't serve via http if serving via --stdio
Before this change, we started the http listener even if --stdio was
supplied.

This also moves the log message so the user won't see the serving via
HTTP message unless they are really using that.

Fixes #6646
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
448fff9a04 serve restic: fix immediate exit when not using stdio
In the lib/http refactor

    52443c2444 restic: refactor to use lib/http

We forgot to serve the data and wait for the server to finish. This is
not tested in the unit tests as it is part of the command line
handler.

Fixes #6644 Fixes #6647
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
6257a6035c serve webdav: fix --baseurl handling after lib/http refactor
The webdav library was confused by the Path manipulation done by
lib/http when stripping the prefix.

This patch adds the prefix back before calling it.

Fixes #6650
2022-12-23 12:28:07 +00:00
Nick Craig-Wood
54c0f17f2a azureblob: fix "409 Public access is not permitted on this storage account"
This error was caused by rclone supplying an empty
`x-ms-blob-public-access:` header when creating a container for
private access, rather than omitting it completely.

This is a valid way of specifying containers should be private, but if
the storage account has the flag "Blob public access" unset then it
gives "409 Public access is not permitted on this storage account".

This patch fixes the problem by only supplying the header if the
access is set.

Fixes #6645
2022-12-23 12:28:07 +00:00
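
For illustration, a minimal Go sketch of the header handling described above
(hypothetical raw-HTTP setup; rclone actually goes through the Azure SDK):

    package main

    import (
        "fmt"
        "net/http"
    )

    // setAccessHeader omits x-ms-blob-public-access entirely for
    // private containers; sending it empty triggers a 409 on storage
    // accounts with "Blob public access" unset.
    func setAccessHeader(req *http.Request, access string) {
        if access != "" {
            req.Header.Set("x-ms-blob-public-access", access)
        }
    }

    func main() {
        req, _ := http.NewRequest(http.MethodPut,
            "https://account.blob.core.windows.net/container?restype=container", nil)
        setAccessHeader(req, "") // private: header omitted
        fmt.Println(len(req.Header)) // 0
    }
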
Kaloyan Raev
d049cbb59e s3/storj: update endpoints
Storj switched to a single global s3 endpoint backed by a BGP routing.
We want to stop advertizing the former regional endpoints and have the
global one as the only option.
2022-12-22 15:46:49 +00:00
Anagh Kumar Baranwal
00e853144e rc: set url to the first value of rc-addr since it has been converted to an array of strings now - fixes #6641
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2022-12-22 09:02:20 +00:00
albertony
5ac8cfee56 docs: show only significant parts of version number in version introduced label 2022-12-21 12:41:47 +00:00
Nick Craig-Wood
496ae8adf6 Start v1.62.0-DEV development 2022-12-20 18:33:59 +00:00
Nick Craig-Wood
2001cc0831 Version v1.61.0 2022-12-20 17:16:14 +00:00
Ole Frost
a35490bf70 docs: Added note on Box API rate limits 2022-12-20 12:49:31 +00:00
Nick Craig-Wood
01877e5a0f s3: ignore versionIDs from uploads unless using --s3-versions or --s3-version-at
Before this change, when a new object was created s3 returns its
versionID (on a versioned bucket) and rclone recorded it in the
object.

This means that when rclone came to delete the object it would delete
it with the versionID.

However it is common to forbid actions with versionIDs on buckets so
as to preserve the historical record and these operations would fail
whereas they succeeded in pre-v1.60.0 versions.

This patch fixes the problem by not recording versions of objects
supplied by the S3 API on upload unless `--s3-versions` or
`--s3-version-at` is used. This makes rclone behave as it did before
v1.60.0 when version support was introduced.

See: https://forum.rclone.org/t/s3-and-intermittent-403-errors-with-file-renames-and-drag-and-drop-operations-in-windows-explorer/34773
2022-12-17 10:24:56 +00:00
Nick Craig-Wood
614d79121a serve dlna: fix panic: Logger uninitialized.
Before this change we forgot to initialize the logger for the dlna
server. This meant that when it needed to log something, it panicked
instead.

See: https://forum.rclone.org/t/rclone-serve-dlna-after-few-hours-of-idle-running-panic-logger-uninitialized-names/34835
2022-12-17 10:23:58 +00:00
Nick Craig-Wood
3a6f1f5cd7 filter: add metadata filters --metadata-include/exclude/filter and friends
Fixes #6353
2022-12-17 10:21:11 +00:00
Nick Craig-Wood
4a31961c4f filter: factor rules into its own file 2022-12-16 17:05:31 +00:00
Abdullah Saglam
7be9855a70 azureblob: implement --use-server-modtime
This patch implements --use-server-modtime for the Azureblob backend.

It does this by not reading the time from the metadata if the global
flag is set.
2022-12-15 15:58:36 +00:00
Nick Craig-Wood
6f8112ff67 Add Abdullah Saglam to contributors 2022-12-15 15:58:36 +00:00
Nick Craig-Wood
67fc227684 config: add config/setpath for setting config path via rc/librclone 2022-12-15 12:41:30 +00:00
Nick Craig-Wood
7edb4c0162 sftp: fix NewObject with leading /
This was breaking the use of operations/stat with a remote with an
initial /

See: https://forum.rclone.org/t/rclone-rc-api-operations-stat-is-not-working-for-sftp-remotes/34560
2022-12-15 12:40:59 +00:00
Nick Craig-Wood
5db4493557 lib/http: fix race condition 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
a85c0b0cc2 cmd/serve/httplib: remove as it is now replaced by lib/http 2022-12-15 12:38:09 +00:00
Nolan Woods
52443c2444 restic: refactor to use lib/http
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-12-15 12:38:09 +00:00
Nick Craig-Wood
4444d2d102 serve webdav: refactor to use lib/http 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
08a1ca434b rcd: refactor rclone rc server to use lib/http 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
a9ce86f9a3 lib/http: add UsingAuth method 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
3167292c2f lib/http: remove unused Template from Config 2022-12-15 12:38:09 +00:00
Tom Mombourquette
ec7cc2b3c3 lib/http: Simplify server.go to export an http server rather than an interface
This also makes the implementation public.
2022-12-15 12:38:09 +00:00
Tom Mombourquette
2a2fcf1012 lib/http: rationalise names in test servers to be more consistent 2022-12-15 12:38:09 +00:00
Tom Mombourquette
6d62267227 serve http: support unix sockets and multiple listeners
- add support for unix sockets (which skip the auth).
- add support for multiple listeners
- collapse unnecessary internal structure of lib/http so it can all be
  imported together
- move files in sub directories of lib/http into the main lib/http
  directory and rework the code that uses them.

See: https://forum.rclone.org/t/wip-rc-rcd-over-unix-socket/33619
Fixes: #6605
2022-12-15 12:38:09 +00:00
Nick Craig-Wood
dfd8ad2fff Add compiletest target to compile all the tests only 2022-12-15 12:38:09 +00:00
Nick Craig-Wood
43506f8086 test memory: read metadata if -M flag is specified 2022-12-15 12:37:19 +00:00
Nick Craig-Wood
ec3cee89d3 fstest: switch to port forwarding now Owncloud disallows wildcards
A recent security fix in the Owncloud container now causes it to
disallow wildcards in the OWNCLOUD_TRUSTED_DOMAINS setting.

This patch works around the problem by using port forwarding from the
host so we can keep the domain name constant.
2022-12-15 11:34:12 +00:00
Nick Craig-Wood
a171497a8b Add Jack to contributors 2022-12-15 11:34:12 +00:00
Jack
c6ad15e3b8 s3: make DigitalOcean name canonical 2022-12-14 16:35:05 +00:00
Jack
9a81885b51 s3: add DigitalOcean Spaces regions sfo3, fra1, syd1 2022-12-14 16:35:05 +00:00
Nick Craig-Wood
3d291da0f6 azureblob: fix directory marker detection after SDK upgrade
When the SDK was upgraded it started delivering metadata where the
keys were not in lower case as per the old SDK.

Rclone normalises the case of the keys for storage in the Object, but
the directory marker check was being done with the unnormalised keys
as it needs to be done before the Object is created.

This fixes the directory marker check to do a case insensitive compare
of the metadata keys.
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
43bf177ff7 s3: fix excess memory usage when using versions
Before this change, we were taking the version ID straight from the
XML blob returned by the SDK and thus pinning the XML into memory
which bulked up the average memory per object from about 400 bytes to
4k.

Copying the string fixes the excess memory usage.
2022-12-14 14:24:26 +00:00
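The string-pinning effect in miniature (this is general Go behaviour,
not the backend code itself; unsafe.StringData needs Go 1.20+):

    package main

    import (
        "fmt"
        "strings"
        "unsafe"
    )

    func main() {
        // A substring shares its parent's backing array, so keeping the
        // 8-byte tail alive also keeps the whole 1 MiB buffer alive.
        huge := strings.Repeat("x", 1<<20) + "STANDARD"
        pinned := huge[len(huge)-8:]

        // strings.Clone makes a compact copy, letting the garbage
        // collector reclaim the original buffer.
        copied := strings.Clone(pinned)

        fmt.Println(pinned == copied)                                       // true: same contents
        fmt.Println(unsafe.StringData(pinned) != unsafe.StringData(copied)) // true: separate storage
    }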
Nick Craig-Wood
c446651be8 Revert "s3: turn off list v2 support for Alibaba OSS since it does not work"
This reverts commit 4f386a1ccd.

It turns out that Alibaba OSS does support list v2 and the detection
code was wrong.

This means that users of the gov version of Alibaba will have to add
`list_version 1` to their config files.

See #6600
2022-12-14 14:24:26 +00:00
Nick Craig-Wood
6c407dbe15 s3: fix detection of listing routines which don't support v2 properly
In this commit

ab849b3613 s3: fix listing loop when using v2 listing on v1 server

The ContinuationToken was tested for existence, but it is the
NextContinuationToken that we are interested in.

See: #6600
2022-12-14 14:24:26 +00:00
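A sketch of the corrected check, assuming a response shape like the
AWS SDK's ListObjectsV2 output (names here are illustrative):

    package main

    import (
        "errors"
        "fmt"
    )

    type listV2Page struct {
        IsTruncated           bool
        NextContinuationToken *string
    }

    var errV1Server = errors.New("s3: server does not support v2 listing; try list_version 1")

    // checkV2 spots a v1-only server: the page claims to be truncated but
    // no NextContinuationToken is returned to resume from, so a v2 client
    // would fetch the same first page forever.
    func checkV2(page listV2Page) error {
        if page.IsTruncated && (page.NextContinuationToken == nil || *page.NextContinuationToken == "") {
            return errV1Server
        }
        return nil
    }

    func main() {
        fmt.Println(checkV2(listV2Page{IsTruncated: true})) // v1 server detected
    }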
albertony
5a59b49b6b drive: handle shared drives with leading/trailing space in name (related to #6618) 2022-12-14 10:18:12 +01:00
albertony
8b9f3bbe29 fspath: improved detection of illegal remote names starting with dash (related to #4261) 2022-12-14 10:18:12 +01:00
albertony
8e6a469f98 fspath: allow unicode numbers and letters in remote names
Previously it was limited to plain ASCII (0-9, A-Z, a-z).

Implemented by adding \p{L}\p{N} alongside the \w in the regex,
even though these overlap it means we can be sure it is 100%
backwards compatible.

Fixes #6618
2022-12-12 13:24:32 +00:00
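A simplified sketch of the widened rule; rclone's real remote-name
regex handles more cases (leading/trailing space, dashes, etc.):

    package main

    import (
        "fmt"
        "regexp"
    )

    // \w covers ASCII word characters; \p{L} and \p{N} extend the class
    // to all Unicode letters and numbers.
    var remoteName = regexp.MustCompile(`^[\w\p{L}\p{N}][\w\p{L}\p{N} -]*$`)

    func main() {
        for _, name := range []string{"gdrive", "молоко", "云盘", "-bad"} {
            fmt.Printf("%q valid: %v\n", name, remoteName.MatchString(name))
        }
    }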
albertony
f650a543ef docs: remote names may not start or end with space 2022-12-12 13:24:32 +00:00
albertony
683178a1f4 fspath: change remote name regex to not match when leading/trailing space 2022-12-12 13:24:32 +00:00
albertony
3937233e1e fspath: refactor away unnecessary constant for remote name regex 2022-12-12 13:24:32 +00:00
albertony
c571200812 fspath: remove unused capture group in remote name regex 2022-12-12 13:24:32 +00:00
albertony
04a663829b fspath: remove duplicate start-of-line anchor in remote name regex 2022-12-12 13:24:32 +00:00
albertony
6b4a2c1c4e fspath: remove superfluous underscore covered by existing word character class in remote name regex 2022-12-12 13:24:32 +00:00
albertony
f73be767a4 fspath: add unit tests for remote names with leading dash 2022-12-12 13:24:32 +00:00
albertony
4120dffcc1 fspath: add unit tests for remote names with space 2022-12-12 13:24:32 +00:00
Nick Craig-Wood
53ff5bb205 build: Update golang.org/x/net/http2 to fix GO-2022-1144
An attacker can cause excessive memory growth in a Go server accepting
HTTP/2 requests. HTTP/2 server connections contain a cache of HTTP
header keys sent by the client. While the total number of entries in
this cache is capped, an attacker sending very large keys can cause
the server to allocate approximately 64 MiB per open connection.
2022-12-12 12:49:12 +00:00
Nick Craig-Wood
397f428c48 Add vanplus to contributors 2022-12-12 12:49:12 +00:00
vanplus
c5a2c9b046 onedrive: document workaround for shared with me files 2022-12-12 12:04:28 +00:00
Kaloyan Raev
b98d7f6634 storj: implement server side Copy 2022-12-12 12:02:38 +00:00
Ole Frost
beea4d5119 lib/oauthutil: Improved usability of config flows needing web browser
The config question "Use auto config?" confused many users and led to
recurring forum posts from users who were unaware that they were using
a remote or headless machine.

This commit makes the question and possible options more descriptive
and precise.

This commit also adds references to the guide on remote setup in the
documentation of backends using oauth as primary authentication.
2022-12-09 14:41:05 +00:00
Eng Zer Jun
8e507075d1 test: replace defer cleanup with t.Cleanup
Reference: https://pkg.go.dev/testing#T.Cleanup
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-12-09 14:38:05 +00:00
Nick Craig-Wood
be783a1856 dlna: properly attribute code used from https://github.com/anacrolix/dms
Fixes #4101
2022-12-09 14:27:10 +00:00
Nick Craig-Wood
450c366403 s3: fix nil pointer exception when using Versions
This was caused by

a9bd0c8de6 s3: reduce memory consumption for s3 objects

Which assumed that the StorageClass would always be set, but it isn't
set for Versions.
2022-12-09 12:23:51 +00:00
Matthew Vernon
1dbdc48a77 WASM: comply with wasm_exec.js licence terms
The BSD-style license that Go uses requires the license to be included
with the source distribution; so add it as LICENSE.wasmexec (to avoid
confusion with the other licenses in rclone) and note the location of
the license in wasm_exec.js itself.
2022-12-07 15:25:46 +00:00
Nick Craig-Wood
d7cb17848d azureblob: revamp authentication to include all methods and docs
This updates the authentication to include

- Auth from the environment
    1. Environment Variables
    2. Managed Service Identity Credentials
    3. Azure CLI credentials (as used by the az tool)
- Account and Shared Key
- SAS URL
- Service principal with client secret
- Service principal with certificate
- User with username and password
- Managed Service Identity Credentials

And rationalises the auth order.
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
f3c8b7a948 azureblob: add --azureblob-no-check-container to assume container exists
Normally rclone will check the container exists before uploading if it
hasn't listed the container yet.

Often rclone will be running with a limited set of permissions which
means rclone can't create the container anyway, so this stops the
check.

This will save a transaction.
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
914fbe242c azureblob: ignore AuthorizationFailure when trying to create a container
If we get AuthorizationFailure when trying to create a container, then
assume the container has already been created
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
f746b2fe85 azureblob: port old authentication methods to new SDK
Co-authored-by: Brad Ackerman <brad@facefault.org>
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
a131da2c35 azureblob: Port to new SDK
This commit switches from using the old Azure go modules

    github.com/Azure/azure-pipeline-go/pipeline
    github.com/Azure/azure-storage-blob-go/azblob
    github.com/Azure/go-autorest/autorest/adal

To the new SDK

    github.com/Azure/azure-sdk-for-go/

This stops rclone using deprecated code and enables the full range of
authentication with Azure.

See #6132 and #5284
2022-12-06 15:07:01 +00:00
Nick Craig-Wood
60e4cb6f6f Add MohammadReza to contributors 2022-12-06 15:06:51 +00:00
MohammadReza
0a8b1fe5de s3: add Liara LOS to provider list 2022-12-06 12:25:23 +00:00
asdffdsazqqq
b24c83db21 restic: fix typo in docs 'remove' should be 'remote' 2022-12-06 12:14:25 +00:00
Nick Craig-Wood
4f386a1ccd s3: turn off list v2 support for Alibaba OSS since it does not work
See: #6600
2022-12-06 12:11:21 +00:00
Nick Craig-Wood
ab849b3613 s3: fix listing loop when using v2 listing on v1 server
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.

This change detects the problem and gives the user a helpful message.

Fixes #6600
2022-12-06 12:11:21 +00:00
Nick Craig-Wood
10aee3926a Add Kevin Verstaen to contributors 2022-12-06 12:11:21 +00:00
Nick Craig-Wood
4583b61e3d Add Erik Agterdenbos to contributors 2022-12-06 12:11:06 +00:00
Nick Craig-Wood
483e9e1ee3 Add ycdtosa to contributors 2022-12-06 12:11:06 +00:00
Kevin Verstaen
c2dfc3e5b3 fs: Add global flag '--color' to control terminal colors
* fs: add TerminalColorMode type
* fs: add new config(flags) for TerminalColorMode
* lib/terminal: use TerminalColorMode to determine how to handle colors
* Add documentation for '--terminal-color-mode'
* tree: remove obsolete --color replaced by global --color

This changes the default behaviour of tree. It now displays colors by
default instead of only displaying them when the flag -C/--color was
active. Old behaviour (no color) can be achieved by setting --color to
'never'.

Fixes: #6604
2022-12-06 12:07:06 +00:00
Erik Agterdenbos
a9bd0c8de6 s3: reduce memory consumption for s3 objects
Copying the storageClass string instead of using a pointer to the original string.
This prevents the Go garbage collector from keeping large numbers of
XMLNode structs and references in memory, created by xmlutil.XMLToStruct()
from the aws-sdk-go.
2022-12-05 23:07:08 +00:00
Anthony Pessy
1628ca0d46 ftp: Improve performance to speed up --files-from and NewObject
This commit uses the MLST command (where available) to get the status
for single files rather than listing the parent directory and looking
for the file. This makes actions such as using `--files-from` much quicker.

* use getEntry to look up remote files when supported
* findItem now expects the full path directly

This makes the expected argument similar to the getInfo method; the
difference now is that one returns a FileInfo whereas
the other returns an ftp Entry.

Fixes #6225

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-12-05 16:19:04 +00:00
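The shape of the fast path, as a generic optional-interface sketch
with made-up types (not the ftp backend's actual API):

    package main

    import (
        "errors"
        "fmt"
    )

    type lister interface {
        List(dir string) ([]string, error)
    }

    // statter is the optional fast path, MLST-style: stat one path in a
    // single round trip.
    type statter interface {
        Stat(path string) (string, error)
    }

    func find(l lister, path, dir, leaf string) (string, error) {
        if s, ok := l.(statter); ok {
            return s.Stat(path)
        }
        // Fall back: list the parent directory and scan for the leaf.
        entries, err := l.List(dir)
        if err != nil {
            return "", err
        }
        for _, e := range entries {
            if e == leaf {
                return e, nil
            }
        }
        return "", errors.New("not found")
    }

    type basic struct{}

    func (basic) List(string) ([]string, error) { return []string{"a.txt"}, nil }

    func main() {
        fmt.Println(find(basic{}, "/dir/a.txt", "/dir", "a.txt"))
    }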
albertony
313493d51b docs: remove minimum versions from command pages of pre v1 commands 2022-12-03 18:58:55 +01:00
albertony
6d18f60725 docs: add minimum versions to the command pages 2022-12-03 18:58:55 +01:00
albertony
d74662a751 docs: add badge showing version introduced and experimental/beta/deprecated status to command doc pages 2022-12-03 18:58:55 +01:00
albertony
d05fd2a14f docs: add badge for experimental/beta/deprecated status next to version in backend docs 2022-12-03 18:58:55 +01:00
albertony
097be753ab docs: minor cleanup of headers in backend docs 2022-12-03 18:58:55 +01:00
ycdtosa
50c9678cea ftp: update help text of implicit/explicit TLS options to refer to FTPS instead of FTP 2022-11-29 14:58:46 +01:00
eNV25
7672cde4f3 cmd/ncdu: use negative values for key runes
The previous version used values after the maximum Unicode code-point
to encode a key. This could lead to an overflow since a key is an int16,
a rune is int32 and the maximum Unicode code-point is larger than int16.

A better solution is to simply use negative runes for keys.
2022-11-28 10:51:11 +00:00
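The scheme in brief: real key runes are never negative, so negative
values encode synthetic keys without colliding with any code point or
overflowing a smaller integer type. A sketch:

    package main

    import "fmt"

    // Synthetic keys get negative rune values: -1, -2, -3, ...
    const (
        keyArrowUp rune = -(iota + 1)
        keyArrowDown
        keyEscape
    )

    func describe(k rune) string {
        if k >= 0 {
            return fmt.Sprintf("character %q", k)
        }
        return fmt.Sprintf("special key %d", k)
    }

    func main() {
        fmt.Println(describe('q'))
        fmt.Println(describe(keyArrowUp))
    }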
eNV25
a4c65532ea cmd/ncdu: use tcell directly instead of the termbox wrapper
Following up on 36add0af, which switched from termbox
to tcell's termbox wrapper.
2022-11-25 14:42:19 +00:00
Nick Craig-Wood
46b080c092 vfs: Fix IO Error opening a file with O_CREATE|O_RDONLY in --vfs-cache-mode not full
Before this fix, opening a file with `O_CREATE|O_RDONLY` caused an IO error to
be returned when using `--vfs-cache-mode off` or `--vfs-cache-mode writes`.

This was because the file was opened with read intent, but the `O_CREATE`
implies write intent to create the file even though the file is opened
`O_RDONLY`.

This fix sets write intent for the file if `O_CREATE` is passed in which fixes
the problem for all the VFS cache modes.

It also extends the exhaustive open flags testing to `--vfs-cache-mode writes`
as well as `--vfs-cache-mode full` which would have caught this problem.

See: https://forum.rclone.org/t/i-o-error-trashing-file-on-sftp-mount/34317/
2022-11-24 17:04:36 +00:00
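A minimal sketch of the intent check with a hypothetical helper: even
an O_RDONLY open carries write intent if O_CREATE may have to create
the file.

    package main

    import (
        "fmt"
        "os"
    )

    // needsWriteIntent reports whether an open with these flags may
    // modify the file, even if the access mode is read-only.
    func needsWriteIntent(flags int) bool {
        return flags&(os.O_WRONLY|os.O_RDWR|os.O_APPEND|os.O_CREATE|os.O_TRUNC) != 0
    }

    func main() {
        fmt.Println(needsWriteIntent(os.O_RDONLY))               // false
        fmt.Println(needsWriteIntent(os.O_CREATE | os.O_RDONLY)) // true
    }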
Nick Craig-Wood
0edf6478e3 Add Nathaniel Wesley Filardo to contributors 2022-11-24 17:04:36 +00:00
Nathaniel Wesley Filardo
f7cdf318db azureblob: support simple "environment credentials"
As per
https://learn.microsoft.com/en-us/dotnet/api/azure.identity.environmentcredential?view=azure-dotnet

This supports only AZURE_CLIENT_SECRET-based authentication, as with the
existing service principal support.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2022-11-24 12:06:14 +00:00
Nathaniel Wesley Filardo
6f3682c12f azureblob: make newServicePrincipalTokenRefresher take parsed principal structure 2022-11-24 12:06:14 +00:00
Nick Craig-Wood
e3d593d40c build: update dependencies 2022-11-24 11:05:54 +00:00
Nick Craig-Wood
83551bb02e cmount: update cgofuse for FUSE-T support for mounting volumes on Mac
See: https://forum.rclone.org/t/fr-fuse-t-support-for-mounting-volumes-on-mac/33110/
2022-11-24 10:51:16 +00:00
Nick Craig-Wood
430bf0d5eb crypt: fix compress wrapping crypt giving upload errors
Before this fix a chain compress -> crypt -> s3 was giving errors

    BadDigest: The Content-MD5 you specified did not match what we received.

This was because the crypt backend was encrypting the underlying local
object to calculate the hash rather than the contents of the metadata
stream.

It did this because the crypt backend incorrectly identified the
object as a local object.

This fixes the problem by making sure the crypt backend does not
unwrap anything but fs.OverrideRemote objects.

See: https://forum.rclone.org/t/not-encrypting-or-compressing-before-upload/32261/10
2022-11-21 08:02:09 +00:00
Nick Craig-Wood
dd71f5d968 fs: move operations.NewOverrideRemote to fs.NewOverrideRemote 2022-11-21 08:02:09 +00:00
albertony
7db1c506f2 smb: fix issue where spurious dot directory is created 2022-11-20 17:12:02 +00:00
Nick Craig-Wood
959cd938bc docs: Add minimum versions to all the backend pages and some of the other pages 2022-11-18 14:41:24 +00:00
Nick Craig-Wood
03b07c280c Changelog updates from Version v1.60.1 2022-11-17 16:32:25 +00:00
Nick Craig-Wood
705e8f2fe0 smb: fix Failed to sync: context canceled at the end of syncs
Before this change we were putting connections into the connection
pool which had a local context in.

This meant that when the operation had finished the context was
cancelled and the connection became unusable.

See: https://forum.rclone.org/t/failed-to-sync-context-canceled/34017/
2022-11-16 10:55:25 +00:00
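The failure mode in miniature (illustrative types, not the smb
backend): a pooled connection that captured the creating operation's
context dies as soon as that operation finishes.

    package main

    import (
        "context"
        "fmt"
    )

    type conn struct{ ctx context.Context }

    func (c *conn) usable() bool { return c.ctx.Err() == nil }

    func main() {
        opCtx, cancel := context.WithCancel(context.Background())
        bad := &conn{ctx: opCtx}                 // captures the operation's context
        good := &conn{ctx: context.Background()} // pooled connections must outlive it

        cancel() // the operation finishes
        fmt.Println(bad.usable(), good.usable()) // false true
    }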
Nick Craig-Wood
591fc3609a vfs: fix deadlock caused by cache cleaner and upload finishing
Before this patch a deadlock could occur if the cache cleaner was
running when an object upload finished.

This fixes the problem by delaying marking the object as clean until
we have notified the VFS layer. This means that the cache cleaner
won't consider the object until **after** the VFS layer has been
notified, thus avoiding the deadlock.

See: https://forum.rclone.org/t/rclone-mount-deadlock-when-dir-cache-time-strikes/33486/
2022-11-15 18:01:36 +00:00
Nick Craig-Wood
b4a3d1b9ed Add asdffdsazqqq to contributors 2022-11-15 18:01:36 +00:00
asdffdsazqqq
84219b95ab docs: faq: how to use a proxy server that requires a username and password - fixes #6565 2022-11-15 17:58:43 +00:00
Nick Craig-Wood
2c78f56d48 webdav: fix Move/Copy/DirMove when using --server-side-across-configs
Before this change, when using --server-side-across-configs rclone
would direct Move/Copy/DirMove to the destination server.

However this should be directed to the source server. This is a little
unclear in the RFC, but the name of the parameter "Destination:" seems
clear and this is how dCache and Rucio have implemented it.

See: https://forum.rclone.org/t/webdav-copy-request-implemented-incorrectly/34072/
2022-11-15 09:51:30 +00:00
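The corrected direction, sketched with net/http: the COPY goes to the
source URL and the target travels in the Destination header.

    package main

    import (
        "fmt"
        "net/http"
    )

    func copyRequest(srcURL, dstURL string) (*http.Request, error) {
        req, err := http.NewRequest("COPY", srcURL, nil)
        if err != nil {
            return nil, err
        }
        req.Header.Set("Destination", dstURL)
        return req, nil
    }

    func main() {
        req, _ := copyRequest("https://src.example.com/a.txt", "https://dst.example.com/a.txt")
        fmt.Println(req.Method, req.URL, req.Header.Get("Destination"))
    }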
Nick Craig-Wood
a61d219bcd local: fix -L/--copy-links with filters missing directories
In this commit

8d1fff9a82 local: obey file filters in listing to fix errors on excluded files

We introduced the concept of local backend filters.

Unfortunately the filters were being applied before we had resolved
the symlink to point to a directory. This meant that symlinks pointing
to directories were filtered out when they shouldn't have been.

This was fixed by moving the filter check until after the symlink had
been resolved.

See: https://forum.rclone.org/t/copy-links-not-following-symlinks-on-1-60-0/34073/7
2022-11-14 18:03:40 +00:00
Nick Craig-Wood
652d3cdee4 vfs: windows: fix slow opening of exe files by not truncating files when not necessary
Before this change we truncated files in the backing store regardless
of whether we needed to or not.

After, we check to see if the file is the right size and don't
truncate if it is.

Apparently Windows Defender likes to check executables each time they
are modified, and truncating a file to its existing size is enough to
trigger the Windows Defender scan. This was causing a big slowdown for
operations which opened and closed the file a lot, such as looking at
properties on an executable.

See: https://forum.rclone.org/t/for-mount-sftp-why-right-click-on-exe-file-is-so-slow-until-it-freezes/33830
2022-11-14 17:05:51 +00:00
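A sketch of the check with plain os calls (hypothetical helper, not
the VFS code): stat first and skip the no-op truncate.

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    // truncateIfNeeded only issues a truncate when the size actually
    // differs, avoiding needless modifications that trigger rescans.
    func truncateIfNeeded(path string, size int64) error {
        fi, err := os.Stat(path)
        if err != nil {
            return err
        }
        if fi.Size() == size {
            return nil
        }
        return os.Truncate(path, size)
    }

    func main() {
        f, err := os.CreateTemp("", "demo")
        if err != nil {
            log.Fatal(err)
        }
        defer os.Remove(f.Name())
        f.WriteString("hello")
        f.Close()
        fmt.Println(truncateIfNeeded(f.Name(), 5)) // <nil>: no truncate issued
    }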
Nick Craig-Wood
bb1fc5b86d Add Kamui to contributors 2022-11-14 17:05:51 +00:00
Kamui
efd3c6449b rcserver: avoid generating default credentials with htpasswd - fixes #4839 2022-11-14 15:26:44 +00:00
Nick Craig-Wood
0ac5795f8c fs: make all duration flags take y, M, w, d etc suffixes
Fixes #6556
2022-11-14 15:13:49 +00:00
Nick Craig-Wood
2f77651f64 Add rkettelerij to contributors 2022-11-14 15:13:49 +00:00
Nick Craig-Wood
8daacc2b99 Add techknowlogick to contributors 2022-11-14 15:13:49 +00:00
rkettelerij
87fa9f8e46 azureblob: Add support for custom upload headers 2022-11-14 15:12:28 +00:00
albertony
1392793334 sftp: auto-detect shell type for fish
Fish is different from POSIX-based Unix shells such as bash,
and bracketed variable references like the one we use for the
auto-detection echo command are not supported. The command
will return with a zero exit code but produce no output on
stdout. There is a message on stderr, but we don't log it
due to the zero exit code:

fish: Variables cannot be bracketed. In fish, please use {$ShellId}.

Fixes #6552
2022-11-11 15:32:44 +00:00
techknowlogick
0e427216db s3: Add additional Wasabi locations 2022-11-11 14:39:12 +00:00
Anagh Kumar Baranwal
0c56c46523 rc: Add commands to set GC Percent & Memory Limit (1.19+)
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2022-11-10 12:07:18 +00:00
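Under the hood these map onto the standard library setters from
runtime/debug (SetMemoryLimit needs Go 1.19+); the rc plumbing is
omitted here:

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        oldPct := debug.SetGCPercent(50)        // run GC more aggressively
        oldLim := debug.SetMemoryLimit(2 << 30) // soft memory limit of 2 GiB
        fmt.Printf("GC percent was %d, memory limit was %d\n", oldPct, oldLim)
    }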
Nick Craig-Wood
617c5d5e1b rcat: preserve metadata when Copy falls back to Rcat
Before this change, if we copied files of unknown size, then they
lost their metadata.

This was particularly noticeable using --s3-decompress.

This change adds metadata to Rcat and RcatSized and changes Copy to
pass the metadata in when it calls Rcat for an unknown sized input.

Fixes #6546
2022-11-10 12:04:35 +00:00
Nick Craig-Wood
ec2024b907 fstest: use WithMetadata / WithMimeType 2022-11-10 12:04:35 +00:00
Nick Craig-Wood
458845ce89 fs/object: add WithMetadata and WithMimetype to static and memory objects 2022-11-10 12:04:35 +00:00
Nick Craig-Wood
57bde20acd Add Aaron Gokaslan to contributors 2022-11-10 12:04:35 +00:00
Aaron Gokaslan
b0248e8070 s3: fix for unchecked err value in s3 listv2 2022-11-10 11:52:59 +00:00
Nick Craig-Wood
b285efb476 mailru: allow timestamps to be before the epoch 1970-01-01
Fixes #6547
2022-11-10 11:27:01 +00:00
Nick Craig-Wood
be6f29930b dedupe: make dedupe obey the filters
See: https://forum.rclone.org/t/dial-tcp-lookup-api-pcloud-com-no-such-host/33910/
2022-11-10 09:56:02 +00:00
Nick Craig-Wood
653bc23728 dedupe: count Checks in the stats while scanning for duplicates
This allows the user to see that rclone has not hung.

See: https://forum.rclone.org/t/dial-tcp-lookup-api-pcloud-com-no-such-host/33910/
2022-11-10 09:56:02 +00:00
Nick Craig-Wood
47b04580db accounting: make it so we can account directories as well as files 2022-11-10 09:56:02 +00:00
Nick Craig-Wood
919e28b8bf lib/cache: fix alias backend shutting down too soon
Before this patch, when an alias backend was created it would be
renamed to be canonical and in the process Shutdown would be called on
it. This was particularly noticeable with the dropbox backend which
gave this error when uploading files after the backend was Shutdown.

    Failed to copy: upload failed: batcher is shutting down

This patch fixes the cache Rename code so that it does not finalize
objects when the object being overwritten is the same as the existing
object.

See: https://forum.rclone.org/t/upload-failed-batcher-is-shutting-down/33900
2022-11-09 16:29:23 +00:00
Nick Craig-Wood
3a3bc5a1ae mailru: note that an app password is now needed - fixes #6398 2022-11-08 20:33:11 +00:00
Nick Craig-Wood
133c006c37 Add Roel Arents to contributors 2022-11-08 20:33:11 +00:00
Roel Arents
e455940f71 azureblob: allow emulator account/key override 2022-11-08 20:24:06 +00:00
Nick Craig-Wood
65528fd009 docs: remove link to rclone slack as it is no longer supported 2022-11-08 16:11:34 +00:00
Nick Craig-Wood
691159fe94 s3: allow Storj to server side copy since it seems to work now - fixes #6550 2022-11-08 16:05:24 +00:00
Nick Craig-Wood
09858c0c5a Add Arnie97 to contributors 2022-11-08 16:05:24 +00:00
Nick Craig-Wood
5fd0abb2b9 Add x3-apptech to contributors 2022-11-08 16:05:24 +00:00
Arnie97
36c37ffec1 backend/http: rename stat to decodeMetadata 2022-11-08 13:04:17 +00:00
Arnie97
6a5b7664f7 backend/http: support content-range response header 2022-11-08 13:04:17 +00:00
Arnie97
ebac854512 backend/http: do not update object size based on range requests 2022-11-08 13:04:17 +00:00
Arnie97
cafce96185 backend/http: parse get responses when no_head is set 2022-11-08 13:04:17 +00:00
João Henrique Franco
92ffcf9f86 wasm: fix walltime link error by adding up-to-date wasm_exec.js
Solves a link error when running rclone's wasm version. Go's `walltime1` function was renamed to `walltime`. This commit updates wasm_exec.js with the new name.
2022-11-07 12:13:23 +00:00
albertony
64cdbb67b5 ncdu: add support for modification time 2022-11-07 11:57:44 +00:00
albertony
528fc899fb ncdu: fallback to sort by name also for sort by average size 2022-11-07 11:57:44 +00:00
x3-apptech
d452f502c3 cmd: Enable SIGINFO (Ctrl-T) handler on FreeBSD, NetBSD, OpenBSD and Dragonfly BSD 2022-11-07 11:45:04 +00:00
albertony
5d6b8141ec Replace deprecated ioutil
As of Go 1.16, the same functionality is now provided by package io or
package os, and those implementations should be preferred in new code.
2022-11-07 11:41:47 +00:00
albertony
776e5ea83a docs: fix character that was incorrectly interpreted as markdown 2022-11-07 08:59:40 +01:00
albertony
c9acc06a49 Add Clément Notin to contributors 2022-11-07 08:51:49 +01:00
Clément Notin
a2dca02594 docs: fix character that was incorrectly interpreted as markdown 2022-11-07 08:50:21 +01:00
Joda Stößer
210331bf61 docs: fix typo remove in rclone_serve_restic command 2022-11-07 08:46:05 +01:00
Nick Craig-Wood
5b5fdc6bc5 s3: add provider quirk --s3-might-gzip to fix corrupted on transfer: sizes differ
Before this change, some files were giving this error when downloaded
from Cloudflare and other providers.

    ERROR corrupted on transfer: sizes differ NNN vs MMM

This is because these providers auto gzip the object when rclone
wasn't expecting it to. (AWS does not gzip objects unless they were
uploaded gzipped.)

This patch adds a quirk to fix the problem and a flag to control
it. The quirk `might_gzip` is set to `true` for all providers except
AWS.

See: https://forum.rclone.org/t/s3-error-corrupted-on-transfer-sizes-differ-nnn-vs-mmm/33694/
Fixes: #6533
2022-11-04 16:53:32 +00:00
Nick Craig-Wood
0de74864b6 Add dgouju to contributors 2022-11-04 16:53:32 +00:00
dgouju
7042a11875 sftp: add configuration options to set ssh Ciphers / MACs / KeyExchange 2022-11-03 17:11:28 +00:00
Nick Craig-Wood
028832ce73 s3: if bucket or object ACL is empty string then don't add X-Amz-Acl: header - fixes #5730
Before this fix it was impossible to stop rclone generating an
X-Amz-Acl: header, which is incompatible with GCS with uniform access
control and is generally deprecated at AWS.
2022-11-03 17:06:24 +00:00
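The essence of the fix, with an illustrative request type: an empty
ACL now means "send no header at all" rather than defaulting to
private.

    package main

    import "fmt"

    type putInput struct{ ACL *string }

    // aclFor returns nil for an empty ACL so the header is omitted
    // entirely, instead of being sent empty or defaulted to "private".
    func aclFor(acl string) *string {
        if acl == "" {
            return nil
        }
        return &acl
    }

    func main() {
        fmt.Println(putInput{ACL: aclFor("")}.ACL)             // <nil>
        fmt.Println(*putInput{ACL: aclFor("public-read")}.ACL) // public-read
    }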
Philip Harvey
c7c9356af5 s3: stop setting object and bucket ACL to "private" if it is an empty string #5730 2022-11-03 17:06:24 +00:00
Nick Craig-Wood
3292c112c5 Add Philip Harvey to contributors 2022-11-03 17:06:24 +00:00
Nick Craig-Wood
126d71b332 Add Anthony Pessy to contributors 2022-11-03 17:06:24 +00:00
Nick Craig-Wood
df9be72a82 Add coultonluke to contributors 2022-11-03 17:06:24 +00:00
Nick Craig-Wood
6aa8f7409a Add Samuel Johnson to contributors 2022-11-03 17:06:24 +00:00
Anthony Pessy
10c884552c s3: use different strategy to resolve s3 region
The API endpoint GetBucketLocation requires top-level permission.

If we do an authenticated HEAD request to a bucket, the bucket location will be returned in the HTTP headers.

Fixes #5066
2022-11-02 11:48:08 +00:00
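A sketch of the strategy, relying on the documented x-amz-bucket-region
response header (which S3 returns even for 403 responses, so no special
permission is needed); the commit's authenticated request is simplified
to a plain HEAD here:

    package main

    import (
        "fmt"
        "net/http"
    )

    func bucketRegion(bucket string) (string, error) {
        resp, err := http.Head("https://" + bucket + ".s3.amazonaws.com")
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        region := resp.Header.Get("x-amz-bucket-region")
        if region == "" {
            return "", fmt.Errorf("no region header (status %s)", resp.Status)
        }
        return region, nil
    }

    func main() {
        fmt.Println(bucketRegion("example-bucket"))
    }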
albertony
2617610741 docs: add direct download link for windows arm64 2022-10-31 21:14:10 +01:00
coultonluke
53dd174f3d docs: corrected download links in windows install docs 2022-10-31 21:09:53 +01:00
albertony
65987f5970 lib/file: improve error message for create dir on non-existent network host on windows (#6420) 2022-10-28 21:00:22 +02:00
Manoj Ghosh
1fc864fb32 oracle-object-storage: doc fix
See #6521
2022-10-28 20:32:17 +02:00
albertony
22abcc9fd2 build: update golang.org/x/net dependency
This fixes vulnerability GO-2022-0969 reported by govulncheck:

HTTP/2 server connections can hang forever waiting for a clean
shutdown that was preempted by a fatal error. This condition can
be exploited by a malicious client to cause a denial of service.

Call stacks in your code:
Error: cmd/serve/restic/restic.go:150:22: github.com/rclone/rclone/cmd/serve/restic.init$1$1 calls golang.org/x/net/http2.Server.ServeConn

Found in: golang.org/x/net/http2@v0.0.0-20220805013720-a33c5aa5df48
Fixed in: golang.org/x/net/http2@v0.0.0-20220906165146-f3363e06e74c
More info: https://pkg.go.dev/vuln/GO-2022-0969
2022-10-26 12:59:31 +02:00
albertony
178cf821de build: add vulnerability testing using govulncheck 2022-10-26 12:59:31 +02:00
albertony
f4a571786c local: clean absolute paths - fixes #6493 2022-10-25 21:09:56 +02:00
albertony
c0a8ffcbef build: setup-go v3 improved semver notation 2022-10-25 20:25:39 +02:00
albertony
76eeca9eae build: setup-go v3 dropped the stable input 2022-10-25 20:25:39 +02:00
Samuel Johnson
8114744bce docs: Update faq.md with bisync
Updated FAQ to clarify that experimental bi-sync is now available.
2022-10-23 11:15:09 +01:00
Nick Craig-Wood
db5d582404 Start v1.61.0-DEV development 2022-10-21 16:15:53 +01:00
1932 changed files with 396334 additions and 122047 deletions

.gitattributes

@@ -1,3 +1,7 @@
# Go writes go.mod and go.sum with lf even on windows
go.mod text eol=lf
go.sum text eol=lf
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true

.github/FUNDING.yml

@@ -1,4 +0,0 @@
github: [ncw]
patreon: njcw
liberapay: ncw
custom: ["https://rclone.org/donate/"]

.github/dependabot.yml

@@ -0,0 +1,6 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "daily"


@@ -8,29 +8,33 @@ name: build
on:
push:
branches:
- '*'
- '**'
tags:
- '*'
- '**'
pull_request:
workflow_dispatch:
inputs:
manual:
required: true
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 60
defaults:
run:
shell: bash
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.17', 'go1.18']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
include:
- job_name: linux
os: ubuntu-latest
go: '1.19.x'
go: '>=1.25.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -41,14 +45,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '1.19.x'
go: '>=1.25.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.19.x'
os: macos-latest
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -56,15 +60,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macos-11
go: '1.19.x'
os: macos-latest
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '1.19.x'
go: '>=1.25.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -74,20 +78,14 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.19.x'
go: '>=1.25.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.17
- job_name: go1.24
os: ubuntu-latest
go: '1.17.x'
quicktest: true
racequicktest: true
- job_name: go1.18
os: ubuntu-latest
go: '1.18.x'
go: '1.24'
quicktest: true
racequicktest: true
@@ -97,19 +95,17 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v3
uses: actions/setup-go@v6
with:
stable: 'false'
go-version: ${{ matrix.go }}
check-latest: true
- name: Set environment variables
shell: bash
run: |
echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
@@ -118,20 +114,25 @@ jobs:
if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse libfuse-dev rpm pkg-config
sudo apt-get update
sudo apt-get install -y fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone nfs-common
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
# https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
# https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008
unset HOMEBREW_NO_INSTALL_FROM_API
brew untap --force homebrew/core
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
if: matrix.os == 'macos-11'
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
- name: Install Libraries on Windows
shell: powershell
@@ -150,7 +151,6 @@ jobs:
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
@@ -161,38 +161,25 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
make
- name: Rclone version
shell: bash
run: |
rclone version
- name: Run tests
shell: bash
run: |
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
@@ -200,68 +187,135 @@ jobs:
if: matrix.librclonetest
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
if [[ "${{ matrix.os }}" == "windows-latest" ]]; then make release_dep_windows ; fi
make ci_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# working-directory: '$(modulePath)'
# Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
if: env.RCLONE_CONFIG_PASS != '' && matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
lint:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "lint"
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Get runner parameters
id: get-runner-parameters
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Code quality test
uses: golangci/golangci-lint-action@v3
- name: Checkout
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
id: setup-go
uses: actions/setup-go@v6
with:
go-version: '>=1.24.0-rc.1'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v8
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v8
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v8
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v8
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v8
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
- name: Scan for vulnerabilities
run: govulncheck ./...
- name: Check Markdown format
uses: DavidAnson/markdownlint-cli2-action@v20
with:
globs: |
CONTRIBUTING.md
MAINTAINERS.md
README.md
RELEASE.md
CODE_OF_CONDUCT.md
docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
- name: Scan edits of autogenerated files
run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
if: github.event_name == 'pull_request'
android:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v5
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v6
with:
go-version: 1.19.x
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '>=1.25.0-rc.1'
- name: Set global environment variables
shell: bash
run: |
echo "VERSION=$(make version)" >> $GITHUB_ENV
@@ -280,7 +334,6 @@ jobs:
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -294,7 +347,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -307,7 +359,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -320,7 +371,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -338,4 +388,4 @@ jobs:
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'
if: env.RCLONE_CONFIG_PASS != '' && github.head_ref == '' && github.repository == 'rclone/rclone'


@@ -1,26 +1,294 @@
name: Docker beta build
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_image.yml" -*-
name: Build & Push Docker Images
# Trigger the workflow on push or pull request
on:
push:
branches:
- master
push:
branches:
- '**'
tags:
- '**'
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Checkout master
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
with:
tag: beta
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
build-image:
if: inputs.manual || (github.repository == 'rclone/rclone' && github.event_name != 'pull_request')
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- platform: linux/amd64
runs-on: ubuntu-24.04
- platform: linux/386
runs-on: ubuntu-24.04
- platform: linux/arm64
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v7
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v6
runs-on: ubuntu-24.04-arm
name: Build Docker Image for ${{ matrix.platform }}
runs-on: ${{ matrix.runs-on }}
steps:
- name: Free Space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout Repository
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Set PLATFORM Variable
run: |
platform=${{ matrix.platform }}
echo "PLATFORM=${platform//\//-}" >> $GITHUB_ENV
- name: Set CACHE_NAME Variable
shell: python
run: |
import os, re
def slugify(input_string, max_length=63):
    slug = input_string.lower()
    slug = re.sub(r'[^a-z0-9 -]', ' ', slug)
    slug = slug.strip()
    slug = re.sub(r'\s+', '-', slug)
    slug = re.sub(r'-+', '-', slug)
    slug = slug[:max_length]
    slug = re.sub(r'[-]+$', '', slug)
    return slug
ref_name_slug = "cache"
if os.environ.get("GITHUB_REF_NAME") and os.environ['GITHUB_EVENT_NAME'] == "pull_request":
    ref_name_slug += "-pr-" + slugify(os.environ['GITHUB_REF_NAME'])
with open(os.environ['GITHUB_ENV'], 'a') as env:
    env.write(f"CACHE_NAME={ref_name_slug}\n")
- name: Get ImageOS
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v8
with:
result-encoding: string
script: |
return process.env.ImageOS
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,manifest-descriptor # Important for digest annotation (used by Github packages)
with:
images: |
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
# Cache only the go builds, the module download is cached via the docker layer caching
path: |
go-build-cache
- name: Inject Go Build Cache into Docker
uses: reproducible-containers/buildkit-cache-dance@v3
with:
cache-map: |
{
"go-build-cache": "/root/.cache/go-build"
}
skip-extraction: ${{ steps.go-cache.outputs.cache-hit }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user who created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Publish Image Digest
id: build
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
provenance: false
# don't specify 'tags' here (error "get can't push tagged ref by digest")
# tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
platforms: ${{ matrix.platform }}
outputs: |
type=image,name=ghcr.io/${{ env.REPO_NAME }},push-by-digest=true,name-canonical=true,push=true
cache-from: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
cache-to: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }},image-manifest=true,mode=max,compression=zstd
- name: Export Image Digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v5
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
retention-days: 1
if-no-files-found: error
merge-image:
name: Merge & Push Final Docker Image
runs-on: ubuntu-24.04
needs:
- build-image
steps:
- name: Download Image Digests
uses: actions/download-artifact@v6
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
images: |
${{ env.REPO_NAME }}
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Extract Tags
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
tags = [f"--tag '{tag}'" for tag in metadata["tags"]]
tags_string = " ".join(tags)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"TAGS={tags_string}\n")
- name: Extract Annotations
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
annotations = [f"--annotation '{annotation}'" for annotation in metadata["annotations"]]
annotations_string = " ".join(annotations)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"ANNOTATIONS={annotations_string}\n")
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user who created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create & Push Manifest List
working-directory: /tmp/digests
run: |
docker buildx imagetools create \
${{ env.TAGS }} \
${{ env.ANNOTATIONS }} \
$(printf 'ghcr.io/${{ env.REPO_NAME }}@sha256:%s ' *)
- name: Inspect and Run Multi-Platform Image
run: |
docker buildx imagetools inspect --raw ${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker buildx imagetools inspect --raw ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker run --rm ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }} version


@@ -0,0 +1,49 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_plugin.yml" -*-
name: Release Build for Docker Plugin
on:
release:
types: [published]
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build_docker_volume_plugin:
if: inputs.manual || github.repository == 'rclone/rclone'
name: Build docker plugin job
runs-on: ubuntu-latest
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}


@@ -1,59 +0,0 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Checkout master
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Checkout master
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

.github/workflows/notify.yml

@@ -0,0 +1,15 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support

.github/workflows/winget.yml

@@ -0,0 +1,14 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

.gitignore

@@ -3,15 +3,20 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
.vscode
*.test
*.log
*.iml
fuzz-build.zip
*.orig
*.rej
Thumbs.db
__pycache__
.DS_Store
resource_windows_*.syso
.devcontainer


@@ -1,30 +1,151 @@
# golangci-lint configuration options
version: "2"
linters:
# Configure the linter set. To avoid unexpected results the implicit default
# set is ignored and all the ones to use are explicitly enabled.
default: none
enable:
- deadcode
# Default
- errcheck
- goimports
- revive
- ineffassign
- structcheck
- varcheck
- govet
- ineffassign
- staticcheck
- unused
# Additional
- gocritic
- misspell
#- prealloc # TODO
- revive
- unconvert
#- prealloc
#- maligned
disable-all: true
# Configure checks. Mostly using defaults but with some commented exceptions.
settings:
govet:
enable-all: true
disable:
- fieldalignment
- shadow
staticcheck:
# With staticcheck there is only one setting, so to extend the implicit
# default value it must be explicitly included.
checks:
# Default
- all
- -ST1000
- -ST1003
- -ST1016
- -ST1020
- -ST1021
- -ST1022
# Disable quickfix checks
- -QF*
gocritic:
# With gocritic there are different settings, but since enabled-checks
# and disabled-checks cannot both be set, for full customization the
# alternative is to disable all defaults and explicitly enable the ones
# to use.
disable-all: true
enabled-checks:
#- appendAssign # Skip default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Skip default
- caseOrder
- codegenComment
#- commentFormatting # Skip default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Skip default
- flagDeref
- flagName
#- ifElseChain # Skip default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Enable additional check that are not enabled by default
#- singleCaseSwitch # Skip default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: ${base-path}/bin/rules.go
revive:
# With revive there is in reality only one setting, and when at least one
# rule are specified then only these rules will be considered, defaults
# and all others are then implicitly disabled, so must explicitly enable
# all rules to be used.
rules:
- name: blank-imports
disabled: false
- name: context-as-argument
disabled: false
- name: context-keys-type
disabled: false
- name: dot-imports
disabled: false
#- name: empty-block # Skip default
# disabled: true
- name: error-naming
disabled: false
- name: error-return
disabled: false
- name: error-strings
disabled: false
- name: errorf
disabled: false
- name: exported
disabled: false
#- name: increment-decrement # Skip default
# disabled: true
- name: indent-error-flow
disabled: false
- name: package-comments
disabled: false
- name: range
disabled: false
- name: receiver-naming
disabled: false
#- name: redefines-builtin-id # Skip default
# disabled: true
#- name: superfluous-else # Skip default
# disabled: true
- name: time-naming
disabled: false
- name: unexported-return
disabled: false
#- name: unreachable-code # Skip default
# disabled: true
#- name: unused-parameter # Skip default
# disabled: true
- name: var-declaration
disabled: false
- name: var-naming
disabled: false
formatters:
enable:
- goimports
issues:
# Enable some lints excluded by default
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
run:
# timeout for analysis, e.g. 30s, 5m, default is 1m
# Timeout for total work, e.g. 30s, 5m, 5m30s. Default is 0 (disabled).
timeout: 10m

.markdownlint.yml

@@ -0,0 +1,43 @@
default: true
# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
style: atx
ul-style: # MD004
style: dash
hr-style: # MD035
style: ---
code-block-style: # MD046
style: fenced
code-fence-style: # MD048
style: backtick
emphasis-style: # MD049
style: asterisk
strong-style: # MD050
style: asterisk
# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
siblings_only: true
# Allow long lines in code blocks and tables.
line-length: # MD013
code_blocks: false
tables: false
# The Markdown files used to generated docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
level: 1
front_matter_title:
# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051

CODE_OF_CONDUCT.md

@@ -0,0 +1,80 @@
# Rclone Code of Conduct
Like the technical community as a whole, the Rclone team and community
are made up of a mixture of professionals and volunteers from all over
the world, working on every aspect of the mission - including
mentorship, teaching, and connecting people.
Diversity is one of our huge strengths, but it can also lead to
communication issues and unhappiness. To that end, we have a few
ground rules that we ask people to adhere to. This code applies
equally to founders, mentors and those seeking help and guidance.
This isn't an exhaustive list of things that you can't do. Rather,
take it in the spirit in which it's intended - a guide to make it
easier to enrich all of us and the technical communities in which we
participate.
This code of conduct applies to all spaces managed by the Rclone
project or Rclone Services Ltd. This includes the issue tracker, the
forum, the GitHub site, the wiki, any other online services or
in-person events. In addition, violations of this code outside these
spaces may affect a person's ability to participate within them.
- **Be friendly and patient.**
- **Be welcoming.** We strive to be a community that welcomes and
supports people of all backgrounds and identities. This includes,
but is not limited to members of any race, ethnicity, culture,
national origin, colour, immigration status, social and economic
class, educational level, sex, sexual orientation, gender identity
and expression, age, size, family status, political belief,
religion, and mental and physical ability.
- **Be considerate.** Your work will be used by other people, and you
in turn will depend on the work of others. Any decision you take
will affect users and colleagues, and you should take those
consequences into account when making decisions. Remember that we're
a world-wide community, so you might not be communicating in someone
else's primary language.
- **Be respectful.** Not all of us will agree all the time, but
disagreement is no excuse for poor behavior and poor manners. We
might all experience some frustration now and then, but we cannot
allow that frustration to turn into a personal attack. It's
important to remember that a community where people feel
uncomfortable or threatened is not a productive one. Members of the
Rclone community should be respectful when dealing with other
members as well as with people outside the Rclone community.
- **Be careful in the words that you choose.** We are a community of
professionals, and we conduct ourselves professionally. Be kind to
others. Do not insult or put down other participants. Harassment and
other exclusionary behavior aren't acceptable. This includes, but is
not limited to:
- Violent threats or language directed against another person.
- Discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally
identifying information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behavior.
- Repeated harassment of others. In general, if someone asks you to
stop, then stop.
- **When we disagree, try to understand why.** Disagreements, both
social and technical, happen all the time and Rclone is no
exception. It is important that we resolve disagreements and
differing views constructively. Remember that we're different. The
strength of Rclone comes from its varied community, people from a
wide range of backgrounds. Different people have different
perspectives on issues. Being unable to understand why someone holds
a viewpoint doesn't mean that they're wrong. Don't forget that it is
human to err and blaming each other doesn't get us anywhere.
Instead, focus on helping to resolve issues and learning from
mistakes.
If you believe someone is violating the code of conduct, we ask that
you report it by emailing [info@rclone.com](mailto:info@rclone.com).
Original text courtesy of the [Speak Up! project](http://web.archive.org/web/20141109123859/http://speakup.io/coc.html).
## Questions?
If you have questions, please feel free to [contact us](mailto:info@rclone.com).

CONTRIBUTING.md

@@ -1,8 +1,8 @@
# Contributing to rclone #
# Contributing to rclone
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
## Reporting a bug
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
@@ -12,163 +12,227 @@ When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
* Rclone version (e.g. output from `rclone version`)
* Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from
`rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to
obscure them
## Submitting a new feature or bug fix ##
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues)
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Then [install Git](https://git-scm.com/downloads) and set your public contribution
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
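For example (the name and email address here are placeholders):

```sh
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```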
Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
Next open your terminal, change directory to your preferred folder and initialise
your local rclone project:
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```sh
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
Note that most of the terminal commands in the rest of this guide must be
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
go version
```sh
go version
```
Great, you can now compile and execute your own version of rclone:
go build
./rclone version
```sh
go build
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.)
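As a sketch, the `make`-based build of the same binary looks like this (per the
Makefile, `make` also installs a copy into your Go bin directory):

```sh
make              # builds ./rclone with an accurate version number
./rclone version
```

Finally make a branch to add your new feature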
git checkout -b my-new-feature
```sh
git checkout -b my-new-feature
```
And get hacking.
You may like one of the [popular editors/IDEs for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
You may like one of the [popular editors/IDEs for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins)
and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the code you changed
When ready - test the affected functionality and run the unit tests for the
code you changed
cd folder/with/changed/files
go test -v
```sh
cd folder/with/changed/files
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
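One way to create such a remote non-interactively is `rclone config create`
(the `swift` backend type here is just an example; most backends need further
parameters):

```sh
rclone config create TestSwift swift
```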
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
This is typically enough if you made a simple bug fix, otherwise please read
the rclone [testing](#testing) section too.
Make sure you
* Add [unit tests](#testing) for a new feature.
* Add [documentation](#writing-documentation) for a new feature.
* [Commit your changes](#committing-your-changes) using the [message guideline](#commit-messages).
- Add [unit tests](#testing) for a new feature.
- Add [documentation](#writing-documentation) for a new feature.
- [Commit your changes](#committing-your-changes) using the [commit message guidelines](#commit-messages).
When you are done with that push your changes to GitHub:
git push -u origin my-new-feature
```sh
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
Your changes will then get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master)
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub ##
## Using Git and GitHub
### Committing your changes ###
### Committing your changes
Follow the guideline for [commit messages](#commit-messages) and then:
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```sh
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using:
git commit --amend
```sh
git commit --amend
```
If you amend commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you amend commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits ###
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
Note that you are about to rewrite the GitHub history of your branch. It is good
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by:
git push --force origin my-new-feature
```sh
git push --force origin my-new-feature
```
### Basing your changes on the latest master ###
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```sh
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you rebase commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits ###
### Squashing your commits
To combine your commits into one commit:
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```sh
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit:
git commit # To commit the undone commits as one
```sh
git commit # To commit the undone commits as one
```
otherwise, you may roll back using:
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```sh
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you squash commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
Tip: You may like to use `git rebase -i master` if you are experienced or have a
more complex situation.
### GitHub Continuous Integration ###
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing ##
## Testing
### Quick testing ###
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then
you can run the same tests as get run in the CI which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the whole rclone codebase uses the same coding
standards. These tests also check for easy-to-make mistakes (like forgetting
to check an error return).
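A typical local run, assuming golangci-lint is installed:

```sh
make check               # lint checks plus the markdown lint, as in CI
golangci-lint run ./...  # or invoke the Go linter directly
```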
### Quick testing
rclone's tests are run with the Go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
```sh
go test -v ./...
```
You can also use `make`, if supported by your platform
make quicktest
```sh
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
The quicktest is [automatically run by GitHub](#github-continuous-integration)
when you push your branch to GitHub.
### Backend testing ###
### Backend testing
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
@@ -182,94 +246,137 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
```sh
cd backend/drive
go test -v
```
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
```sh
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
cd fs/operations
go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests
all together with an HTML report and test retries, then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
```sh
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```
### Full integration testing ###
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make check
make test
```sh
make check
make test
```
The commands may require some extra go packages which you can install with
make build_dep
```sh
make build_dep
```
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
find the results at <https://pub.rclone.org/integration-tests/>
## Code Organisation ##
## Code Organisation
Rclone code is organised into a small number of top level directories
with modules beneath.
* backend - the rclone backends for interfacing to cloud providers -
* all - import this to load all the cloud providers
* ...providers
* bin - scripts for use while building or maintaining rclone
* cmd - the rclone commands
* all - import this to load all the commands
* ...commands
* cmdtest - end-to-end tests of commands, flags, environment variables,...
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto-generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead
* config - manage the config file and flags
* driveletter - detect if a name is a drive letter
* filter - implements include/exclude filtering
* fserrors - rclone specific error handling
* fshttp - http handling for rclone
* fspath - path handling for rclone
* hash - defines rclone's hash types and functions
* list - list a remote
* log - logging facilities
* march - iterates directories in lock step
* object - in memory Fs objects
* operations - primitives for sync, e.g. Copy, Move
* sync - sync directories
* walk - walk a directory
* fstest - provides integration test framework
* fstests - integration tests for the backends
* mockdir - mocks an fs.Directory
* mockobject - mocks an fs.Object
* test_all - Runs integration tests for everything
* graphics - the images used in the website, etc.
* lib - libraries used by the backend
* atexit - register functions to run when rclone exits
* dircache - directory ID to name caching
* oauthutil - helpers for using oauth
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only, except for files or portions marked
autogenerated, where the corresponding .go file must be edited instead;
everything else is autogenerated
- commands - these are auto-generated, edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
## Writing Documentation
If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation
that runs automatically on pull requests, to enforce standards and consistency.
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same
checks while writing.
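If you have a [markdownlint](https://github.com/DavidAnson/markdownlint) based
CLI installed, a local check might look like this (the paths are illustrative;
CI runs the equivalent via `make check`):

```sh
markdownlint docs/content CONTRIBUTING.md
```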
HTML pages, served as the website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
a different algorithm is currently used for generating header anchors
than the one GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means that when
linking to a header with text `--config string` we need to use the
link `#config-string` in our Markdown source; this will not work in GitHub's
preview, where `#--config-string` would be the correct link.
Most of the documentation is written directly in text files with extension
`.md`, mainly within the folder `docs/content`. Note that several of these files
are autogenerated (e.g. the command documentation, and `docs/content/flags.md`),
or contain autogenerated portions (e.g. the backend documentation under
`docs/content/commands`). These are marked with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
@@ -277,47 +384,48 @@ alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
* Start with the most important information about the option,
as a single sentence on a single line.
* This text will be used for the command-line flag help.
* It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
* It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
* Try to keep it below 80 characters, to reduce text wrapping in the terminal.
* More details can be added in a new paragraph, after an empty line (`"\n\n"`).
* Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
* This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
* To create options of enumeration type use the `Examples:` field.
* Each example value has its own `Help:` field, but these are treated
a bit differently than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like names of
countries, it looks better without an ending period/full stop character.
- Start with the most important information about the option,
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
- Each example value has its own `Help:` field, but these are treated
a bit differently than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like names of
countries, it looks better without an ending period/full stop character.
The only documentation you need to edit are the `docs/content/*.md`
files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).
Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
line, without a period/full stop character at the end, as it will be
combined unmodified with other information (such as any default value).
If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
## Making a release ##
## Making a release
There are separate instructions for making a release in the RELEASE.md
file.
## Commit messages ##
## Commit messages
Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
@@ -341,13 +449,13 @@ change will get linked into the issue.
Here is an example of a short commit message:
```
```text
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```
```text
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
@@ -358,7 +466,7 @@ error fixing the hang.
Fixes #1498
```
## Adding a dependency ##
## Adding a dependency
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
@@ -370,7 +478,9 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
GO111MODULE=on go get github.com/ncw/new_dependency
```sh
go get github.com/ncw/new_dependency
```
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
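For example, to pin to a specific version (the version number here is
illustrative):

```sh
go get github.com/ncw/new_dependency@v1.2.3
```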
@@ -378,15 +488,17 @@ go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.
## Updating a dependency ##
## Updating a dependency
If you need to update a dependency then run
GO111MODULE=on go get -u golang.org/x/crypto
```sh
go get golang.org/x/crypto
```
Check in a single commit as above.
## Updating all the dependencies ##
## Updating all the dependencies
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
@@ -395,7 +507,7 @@ stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
## Updating a backend ##
## Updating a backend
If you update a backend then please run the unit tests and the
integration tests for that backend.
@@ -410,105 +522,154 @@ integration tests.
The next section goes into more detail about the tests.
## Writing a new backend ##
## Writing a new backend
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
### Research
* Look at the interfaces defined in `fs/fs.go`
* Study one or more of the existing remotes
- Look at the interfaces defined in `fs/types.go`
- Study one or more of the existing remotes
Getting going
### Getting going
* Create `backend/remote/remote.go` (copy this from a similar remote)
* box is a good one to start from if you have a directory-based remote
* b2 is a good one to start from if you have a bucket-based remote
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and
shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's
[lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
make sure we can encode any path name and `rclone info` to help determine the
encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
Unit tests
### Guidelines for a speedy merge
* Create a config entry called `TestRemote` for the unit tests to use
* Create a `backend/remote/remote_test.go` - copy and adjust your example remote
* Make sure all tests pass with `go test -v`
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
if your backend is HTTP based - this adds features like `--dump bodies`,
`--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function
names, layout, structure. **Don't** move stuff around and **Don't** delete the
comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as
possible to each other is a high priority!
Integration tests
### Unit tests
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backends remote
- Create a config entry called `TestRemote` for the unit tests to use
- Create a `backend/remote/remote_test.go` - copy and adjust your example remote
- Make sure all tests pass with `go test -v`
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from
the project root:
- go install ./...
- test_all -backends remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from
### Backend documentation
Add your backend to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* make sure this has the `autogenerated options` comments in (see your reference backend docs)
* update them with `make backenddocs` - revert any changes in other backends
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/_index.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are
automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your
reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
- `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
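Putting those docs steps together, a pass might look like this (a sketch; the
backend name is an example and 1313 is Hugo's default port):

```sh
./bin/make_backend_docs.py remote   # refresh the autogenerated options section
make serve                          # then browse http://localhost:1313/
```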
## Writing a plugin ##
## Adding a new s3 provider
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
[Please see the guide in the S3 backend directory](backend/s3/README.md).
Usage
## Writing a plugin
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.
Building
### Usage
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source
of rclone)
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
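A rough sketch of the whole flow for a hypothetical `pifs` backend plugin (the
paths are illustrative):

```sh
go build -buildmode=plugin -o librcloneplugin_backend_pifs.so .
mkdir -p ~/rclone-plugins
cp librcloneplugin_backend_pifs.so ~/rclone-plugins/
export RCLONE_PLUGIN_PATH=~/rclone-plugins
rclone version   # plugins found in $RCLONE_PLUGIN_PATH are loaded on startup
```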
## Keeping a backend or command out of tree
Rclone was designed to be modular so it is very easy to keep a backend
or a command out of the main rclone source tree.
So for example if you had a backend which accessed your proprietary
systems or a command which was specialised for your needs you could
add them out of tree.
This may be easier than using a plugin and is supported on all
platforms, not just macOS and Linux.
This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).

Dockerfile

@@ -1,18 +1,47 @@
FROM golang AS builder
FROM golang:alpine AS builder
ARG CGO_ENABLED=0
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
RUN echo "**** Set Go Environment Variables ****" && \
go env -w GOCACHE=/root/.cache/go-build
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
make \
bash \
gawk \
git
COPY go.mod .
COPY go.sum .
RUN echo "**** Download Go Dependencies ****" && \
go mod download -x
RUN echo "**** Verify Go Dependencies ****" && \
go mod verify
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build,sharing=locked \
echo "**** Build Binary ****" && \
make
RUN echo "**** Print Version Binary ****" && \
./rclone version
# Begin final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates fuse tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
ca-certificates \
fuse3 \
tzdata && \
echo "Enable user_allow_other in fuse" && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
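To try the updated Dockerfile locally, something like this should work (the tag
is illustrative, and this assumes the image's entrypoint is rclone):

```sh
docker build -t rclone:dev .
docker run --rm rclone:dev version
```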

MAINTAINERS.md

@@ -1,4 +1,4 @@
# Maintainers guide for rclone #
# Maintainers guide for rclone
Current active maintainers of rclone are:
@@ -16,81 +16,116 @@ Current active maintainers of rclone are:
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | storj backend |
| wiserain | @wiserain | pikpak backend |
| albertony | @albertony | |
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**
## This is a work in progress draft
This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.
## Triaging Tickets ##
## Triaging Tickets
When a ticket comes in it should be triaged. This means it should be classified by adding labels and placed into a milestone. Quite a lot of tickets need a bit of back and forth to determine whether it is a valid ticket so tickets may remain without labels or milestone for a while.
When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.
Rclone uses the labels like this:
* `bug` - a definitely verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
* `help wanted` - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released
* `question` - not a `bug` or `enhancement` - direct to the forum for next time
* `Remote: XXX` - which rclone backend this affects
* `thinking` - not decided on the course of action yet
- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
these get shown to new visitors to the project
- `help wanted` - mark these if you find a self-contained issue - these get
shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.
When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
the moment
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.
## Closing Tickets ##
## Closing Tickets
Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.
## Pull requests ##
## Pull requests
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.
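Spelled out, that post-merge step is roughly (assuming your local master is
checked out):

```sh
git pull
bin/update-authors.py
git push
```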
Sometimes pull requests need to be left open for a while - this is especially true of contributions of new backends which take a long time to get right.
Sometimes pull requests need to be left open for a while - this is especially
true of contributions of new backends which take a long time to get right.
## Merges ##
## Merges
If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
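For example (the branch name is illustrative):

```sh
git checkout master
git merge --ff-only my-branch          # fails rather than creating a merge commit
# if it does not fast-forward, rebase first and retry:
git checkout my-branch && git rebase master
git checkout master && git merge --ff-only my-branch
```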
## Release cycle ##
## Release cycle
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.
High impact regressions should be fixed before the next release.
Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.
Towards the end of the release cycle try not to merge anything too big, to let things settle down.
Towards the end of the release cycle try not to merge anything too big, to let
things settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
## Mailing list
There is now an invite-only mailing list for rclone developers `rclone-dev` on google groups.
There is now an invite-only mailing list for rclone developers `rclone-dev` on
google groups.
## TODO ##
## TODO
I should probably make a dev@rclone.org to register with cloud providers.
I should probably make a <dev@rclone.org> to register with cloud providers.

MANUAL.html (generated) - diff suppressed because it is too large

MANUAL.md (generated) - diff suppressed because it is too large

MANUAL.txt (generated) - diff suppressed because it is too large
Makefile

@@ -30,29 +30,37 @@ ifdef RELEASE_TAG
TAG := $(RELEASE_TAG)
endif
GO_VERSION := $(shell go version)
GO_OS := $(shell go env GOOS)
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD_ROOT := beta.rclone.org:
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
LDFLAGS=--ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)"
.PHONY: rclone test_all vars version
rclone:
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
go install $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@@ -66,6 +74,10 @@ btest:
@echo "[$(TAG)]($(BETA_URL)) on branch [$(BRANCH)](https://github.com/rclone/rclone/tree/$(BRANCH)) (uploaded in 15-30 mins)" | xclip -r -sel clip
@echo "Copied markdown of beta release to clip board"
btesth:
@echo "<a href="$(BETA_URL)">$(TAG)</a> on branch <a href="https://github.com/rclone/rclone/tree/$(BRANCH)">$(BRANCH)</a> (uploaded in 15-30 mins)" | xclip -r -sel clip -t text/html
@echo "Copied beta release in HTML to clip board"
version:
@echo '$(TAG)'
@@ -76,47 +88,47 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@bin/markdown-lint
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
build_dep:
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
go run bin/get-github-release.go -use-api -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies we only install on linux
release_dep_linux:
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'
# Get the release dependencies we only install on Windows
release_dep_windows:
GOOS="" GOARCH="" go install github.com/josephspurrier/goversioninfo/cmd/goversioninfo@latest
go install github.com/goreleaser/nfpm/v2/cmd/nfpm@latest
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
@go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
# Update direct dependencies only
updatedirect:
GO111MODULE=on go get -d $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
GO111MODULE=on go mod tidy
go get $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
go mod tidy
# Update direct and indirect dependencies and test dependencies
update:
GO111MODULE=on go get -d -u -t ./...
GO111MODULE=on go mod tidy
go get -u -t ./...
go mod tidy
# Tidy the module dependencies
tidy:
GO111MODULE=on go mod tidy
go mod tidy
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
@@ -133,17 +145,23 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
go generate ./lib/transform
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
go run bin/make_bisync_docs.go ./docs/content/
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
rcdocs: rclone
bin/make_rc_docs.sh
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
install ${GOPATH}/bin/rclone ${DESTDIR}/usr/bin
clean:
go clean ./...
@@ -157,7 +175,7 @@ website:
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
rclone -v sync docs/public www.rclone.org:
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
@@ -184,8 +202,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
rclone -P copy build/ downloads.rclone.org:/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "downloads.rclone.org:/$(TAG)/$$i" "downloads.rclone.org:/$$j"'
upload_github:
./bin/upload-github $(TAG)
@@ -195,7 +213,7 @@ cross: doc
beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
rclone -v copy build/ pub.rclone.org:/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
@@ -208,18 +226,18 @@ ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
@@ -228,7 +246,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

README.md

@@ -1,4 +1,6 @@
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
@@ -16,75 +18,111 @@
# Rclone
Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers.
Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
* Arvan Cloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- Cubbit DS3 [:page_facing_up:](https://rclone.org/s3/#Cubbit)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Object Storage [:page_facing_up:](https://rclone.org/s3/#hetzner)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Rabata Cloud Storage [:page_facing_up:](https://rclone.org/s3/#Rabata)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- Servercore Object Storage [:page_facing_up:](https://rclone.org/s3/#servercore)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -92,49 +130,55 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers
* Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
* Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
* Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
* Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
* Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
* Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
* Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
* Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Archive: read archive files [:page_facing_up:](https://rclone.org/archive/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
over HTTP/WebDAV/FTP/SFTP/DLNA
## Installation & documentation
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more
## Downloads
* https://rclone.org/downloads/
- <https://rclone.org/downloads/>
License
-------
## License
This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).

RELEASE.md

@@ -4,48 +4,88 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release
* [gh the github cli](https://github.com/cli/cli) for uploading packages
* pandoc for making the html and man pages
- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages
## Making a release
* git checkout master # see below for stable branch
* git pull
* git status - make sure everything is checked in
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push --follow-tags origin
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make vendorball
* make sign_upload
* make check_sign
* make upload
* make upload_website
* make upload_github
* make startdev # make startstable for stable branch
* # announce with forum post, twitter post, patreon post
- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
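For orientation, the release flow condenses to roughly the following (a
sketch only - the checklist above is authoritative):

```sh
git checkout master && git pull
make test                    # or check the integration test server
make tag
"$EDITOR" docs/content/changelog.md  # remove duplicate logs from point releases
make tidy doc
git commit -a -v -m "Version v1.XX.0"
make retag
git push --follow-tags origin
# wait for the GitHub builds to complete, then
make fetch_binaries tarball vendorball sign_upload check_sign
make upload upload_website upload_github
make startdev                # make startstable for the stable branch
```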
## Update dependencies
Early in the next release cycle update the dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make
* git commit -a -v
* make update
* make
* roll back any updates which didn't compile
* git commit -a -v --amend
- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`
If `make updatedirect` upgrades the version of go in the `go.mod`
```text
go 1.22.0
```
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```sh
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
```
If the `go mod tidy` fails, use its output to remove the packages
which can't be upgraded from `/tmp/potential-upgrades`, then reset the
module files
```sh
git co go.mod go.sum
```
And try again.
Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time-consuming to fix.
- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -53,11 +93,28 @@ doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
Once it compiles locally, push it to a test branch and commit fixes
until the tests pass.
### Major versions
The above procedure will not upgrade major versions, e.g. v2 to v3.
However, this tool can show which major versions might need to be
upgraded:
```sh
go run github.com/icholy/gomajor@latest list -major
```
Expect API breakage when updating major versions.
## Tidy beta
At some point after the release run
bin/tidy-beta v1.55
```sh
bin/tidy-beta v1.55
```
where the version number is that of a couple of releases ago to remove old beta binaries.
@@ -67,57 +124,86 @@ If rclone needs a point release due to some horrendous bug:
Set vars
* BASE_TAG=v1.XX # e.g. v1.52
* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then
this will be done already.
* git branch ${BASE_TAG} ${BASE_TAG}-stable
* git co ${BASE_TAG}-stable
* make startstable
- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable
Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* Do the steps as above
* make startstable
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
## Sponsor logos
If updating the website, note that the sponsor logos have been moved out of
the main repository.
You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release
```sh
git co -b update-website
```
If the branch already exists, double check there are no commits that need saving.
Now reset the branch to the last release
```sh
git reset --hard v1.64.0
```
Create the changes, check them in, test with `make serve` then
```sh
make upload_test_website
```
Check out <https://test.rclone.org> and when happy
```sh
make upload_website
```
Cherry pick any changes back to master and the stable branch if it is active.
## Making a manual build of docker
The rclone docker image should autobuild via GitHub actions. If it doesn't
or needs to be updated then rebuild like this.
To do a basic build of rclone's docker image to debug builds locally:
See: https://github.com/ilteoood/docker_buildx/issues/19
See: https://github.com/ilteoood/docker_buildx/blob/master/scripts/install_buildx.sh
```
git co v1.54.1
docker pull golang
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name actions_builder --use
docker run --rm --privileged docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
SUPPORTED_PLATFORMS=$(docker buildx inspect --bootstrap | grep 'Platforms:*.*' | cut -d : -f2,3)
echo "Supported platforms: $SUPPORTED_PLATFORMS"
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx stop actions_builder
```sh
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```
### Old build for linux/amd64 only
To test the multiplatform build
```sh
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .
docker push rclone/rclone:1.52.0
docker push rclone/rclone:1.52
docker push rclone/rclone:1
docker push rclone/rclone:latest
To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```sh
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

VERSION

@@ -1 +1 @@
v1.60.0
v1.72.0

backend/alias/alias_internal_test.go

@@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {
@@ -81,10 +81,12 @@ func TestNewFS(t *testing.T) {
for i, gotEntry := range gotEntries {
what := fmt.Sprintf("%s, entry=%d", what, i)
wantEntry := test.entries[i]
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
if !isDir {
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
}
require.Equal(t, wantEntry.isDir, isDir, what)
}
}

backend/all/all.go

@@ -4,29 +4,38 @@ package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/archive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/cloudinary"
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/doi"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filelu"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/iclouddrive"
_ "github.com/rclone/rclone/backend/imagekit"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/linkbox"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
@@ -36,9 +45,13 @@ import (
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"
_ "github.com/rclone/rclone/backend/qingstor"
_ "github.com/rclone/rclone/backend/quatrix"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/seafile"
_ "github.com/rclone/rclone/backend/sftp"
@@ -48,6 +61,7 @@ import (
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

(Diff suppressed because it is too large.)

backend/amazonclouddrive/amazonclouddrive_test.go

@@ -1,21 +0,0 @@
// Test AmazonCloudDrive filesystem interface
//go:build acd
// +build acd
package amazonclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}

backend/archive/archive.go (new file)

@@ -0,0 +1,679 @@
//go:build !plan9
// Package archive implements a backend to access archive files in a remote
package archive
// FIXME factor common code between backends out - eg VFS initialization
// FIXME can we generalize the VFS handle caching and use it in zip backend
// Factor more stuff out if possible
// Odd stats which are probably coming from the VFS
// * tensorflow.sqfs: 0% /3.074Gi, 204.426Ki/s, 4h22m46s
// FIXME this will perform poorly for unpacking as the VFS Reader is bad
// at multiple streams - need cache mode setting?
import (
"context"
"errors"
"fmt"
"io"
"path"
"strings"
"sync"
"time"
// Import all the required archivers here
_ "github.com/rclone/rclone/backend/archive/squashfs"
_ "github.com/rclone/rclone/backend/archive/zip"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "archive",
Description: "Read archives",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{{
Name: "remote",
Help: `Remote to wrap to read archives from.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or "myremote:".
If this is left empty, then the archive backend will use the root as
the remote.
This means that you can use :archive:remote:path and it will be
equivalent to setting remote="remote:path".
`,
Required: false,
}},
}
fs.Register(fsi)
}
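// Usage sketch (illustrative, not part of this commit): since an empty
// "remote" option makes the backend use its own root as the wrapped remote,
//
// rclone lsf :archive:myremote:path/backup.zip
//
// behaves the same as a configured archive remote with
// remote = "myremote:path/backup.zip".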
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// Fs represents an archive layer over a wrapped remote
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
f fs.Fs // remote we are wrapping
wrapper fs.Fs // fs that wraps us
mu sync.Mutex // protects the below
archives map[string]*archive // the archives we have, by path
}
// A single open archive
type archive struct {
archiver archiver.Archiver // archiver responsible
remote string // path to the archive
prefix string // prefix to add on to listings
root string // root of the archive to remove from listings
mu sync.Mutex // protects the following variables
f fs.Fs // the archive Fs, may be nil
}
// If remote is an archive then return it otherwise return nil
func findArchive(remote string) *archive {
// FIXME use something faster than linear search?
for _, archiver := range archiver.Archivers {
if strings.HasSuffix(remote, archiver.Extension) {
return &archive{
archiver: archiver,
remote: remote,
prefix: remote,
root: "",
}
}
}
return nil
}
// Find an archive buried in remote
func subArchive(remote string) *archive {
archive := findArchive(remote)
if archive != nil {
return archive
}
parent := path.Dir(remote)
if parent == "/" || parent == "." {
return nil
}
return subArchive(parent)
}
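// For example, with only the ".zip" archiver registered (an illustrative
// trace, not code from this commit):
//
// subArchive("dir/backup.zip/sub/file.txt")
// -> findArchive: no ".zip" suffix, returns nil
// -> recurses on "dir/backup.zip/sub": still nil
// -> recurses on "dir/backup.zip": suffix matches, that archive is returned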
// If remote is an archive then return it otherwise return nil
func (f *Fs) findArchive(remote string) (archive *archive) {
archive = findArchive(remote)
if archive != nil {
f.mu.Lock()
f.archives[remote] = archive
f.mu.Unlock()
}
return archive
}
// Instantiate archive if it hasn't been instantiated yet
//
// This is done lazily so that we can list a directory full of
// archives without opening them all.
func (a *archive) init(ctx context.Context, f fs.Fs) (fs.Fs, error) {
a.mu.Lock()
defer a.mu.Unlock()
if a.f != nil {
return a.f, nil
}
newFs, err := a.archiver.New(ctx, f, a.remote, a.prefix, a.root)
if err != nil && err != fs.ErrorIsFile {
return nil, fmt.Errorf("failed to create archive %q: %w", a.remote, err)
}
a.f = newFs
return a.f, nil
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs.Fs, err error) {
// defer log.Trace(nil, "name=%q, root=%q, m=%v", name, root, m)("f=%+v, err=%v", &outFs, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
remote := opt.Remote
origRoot := root
// If remote is empty, use the root instead
if remote == "" {
remote = root
root = ""
}
isDirectory := strings.HasSuffix(remote, "/")
remote = strings.TrimRight(remote, "/")
if remote == "" {
remote = "/"
}
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point archive remote at itself - check the value of the remote setting")
}
_ = isDirectory
foundArchive := subArchive(remote)
if foundArchive != nil {
fs.Debugf(nil, "Found archiver for %q remote %q", foundArchive.archiver.Extension, foundArchive.remote)
// Archive path
foundArchive.root = strings.Trim(remote[len(foundArchive.remote):], "/")
// Path to the archive
archiveRemote := remote[:len(foundArchive.remote)]
// Remote is archive leaf name
foundArchive.remote = path.Base(archiveRemote)
foundArchive.prefix = ""
// Point remote to archive file
remote = archiveRemote
}
// Make sure to remove trailing . referring to the current dir
if path.Base(root) == "." {
root = strings.TrimSuffix(root, ".")
}
remotePath := fspath.JoinRootPath(remote, root)
wrappedFs, err := cache.Get(ctx, remotePath)
if err != fs.ErrorIsFile && err != nil {
return nil, fmt.Errorf("failed to make remote %q to wrap: %w", remote, err)
}
f := &Fs{
name: name,
//root: path.Join(remotePath, root),
root: origRoot,
opt: *opt,
f: wrappedFs,
archives: make(map[string]*archive),
}
cache.PinUntilFinalized(f.f, f)
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if foundArchive != nil {
fs.Debugf(f, "Root is an archive")
if err != fs.ErrorIsFile {
return nil, fmt.Errorf("expecting to find a file at %q", remote)
}
return foundArchive.init(ctx, f.f)
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
return f, err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("archive root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.f.Rmdir(ctx, dir)
}
// Hashes returns the hash types supported by the wrapped remote
func (f *Fs) Hashes() hash.Set {
return f.f.Hashes()
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.f.Mkdir(ctx, dir)
}
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
do := f.f.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do(ctx, dir)
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
// FIXME
// o, ok := src.(*Object)
// if !ok {
// return nil, fs.ErrorCantCopy
// }
return do(ctx, src, remote)
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.f.Features().Move
if do == nil {
return nil, fs.ErrorCantMove
}
// FIXME
// o, ok := src.(*Object)
// if !ok {
// return nil, fs.ErrorCantMove
// }
return do(ctx, src, remote)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
do := f.f.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return do(ctx, srcFs.f, srcRemote, dstRemote)
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), ch <-chan time.Duration) {
do := f.f.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
notifyFunc(path, entryType)
}
do(ctx, wrappedNotifyFunc, ch)
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.f.Features().DirCacheFlush
if do != nil {
do()
}
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) {
var o fs.Object
var err error
if stream {
o, err = f.f.Features().PutStream(ctx, in, src, options...)
} else {
o, err = f.f.Put(ctx, in, src, options...)
}
if err != nil {
return nil, err
}
return o, nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, false, options...)
default:
return nil, err
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, true, options...)
default:
return nil, err
}
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
do := f.f.Features().About
if do == nil {
return nil, errors.New("not supported by underlying remote")
}
return do(ctx)
}
// Find the Fs for the directory
func (f *Fs) findFs(ctx context.Context, dir string) (subFs fs.Fs, err error) {
f.mu.Lock()
defer f.mu.Unlock()
subFs = f.f
// FIXME should do this with a better datastructure like a prefix tree
// FIXME want to find the longest first otherwise nesting won't work
dirSlash := dir + "/"
for archiverRemote, archive := range f.archives {
subRemote := archiverRemote + "/"
if strings.HasPrefix(dirSlash, subRemote) {
subFs, err = archive.init(ctx, f.f)
if err != nil {
return nil, err
}
break
}
}
return subFs, nil
}
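// Illustration of the FIXMEs above (hypothetical nesting): if both
// "outer.zip" and "outer.zip/inner.zip" are in f.archives, map iteration
// order is random, so a dir like "outer.zip/inner.zip/docs" may match the
// shorter "outer.zip/" prefix before the longer, more specific one.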
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
subFs, err := f.findFs(ctx, dir)
if err != nil {
return nil, err
}
entries, err = subFs.List(ctx, dir)
if err != nil {
return nil, err
}
for i, entry := range entries {
// Can only unarchive files
if o, ok := entry.(fs.Object); ok {
remote := o.Remote()
archive := f.findArchive(remote)
if archive != nil {
// Overwrite entry with directory
entries[i] = fs.NewDir(remote, o.ModTime(ctx))
}
}
}
return entries, nil
}
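// Example of the rewrite above (assumed layout): if the wrapped remote
// contains "data.zip", List returns it as a directory entry rather than a
// file object, so the archive's contents appear as a browsable subdirectory.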
// NewObject creates a new remote archive file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
subFs, err := f.findFs(ctx, dir)
if err != nil {
return nil, err
}
o, err := subFs.NewObject(ctx, remote)
if err != nil {
return nil, err
}
return o, nil
}
// Precision is the precision of the archive mod times - fixed at one second
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
if do := f.f.Features().Shutdown; do != nil {
return do(ctx)
}
return nil
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
do := f.f.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
return do(ctx, remote, expire, unlink)
}
// PutUnchecked in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
//
// May create duplicates or return errors if src already
// exists.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
do := f.f.Features().PutUnchecked
if do == nil {
return nil, errors.New("can't PutUnchecked")
}
o, err := do(ctx, in, src, options...)
if err != nil {
return nil, err
}
return o, nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
if len(dirs) == 0 {
return nil
}
do := f.f.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
return do(ctx, dirs)
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp(ctx context.Context) error {
do := f.f.Features().CleanUp
if do == nil {
return errors.New("not supported by underlying remote")
}
return do(ctx)
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
do := f.f.Features().OpenWriterAt
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx, remote, size)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
// You can also use options to hint at the desired chunk size
func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectInfo, options ...fs.OpenOption) (info fs.ChunkWriterInfo, writer fs.ChunkWriter, err error) {
do := f.f.Features().OpenChunkWriter
if do == nil {
return info, nil, fs.ErrorNotImplemented
}
return do(ctx, remote, src, options...)
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
do := f.f.Features().UserInfo
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx)
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
do := f.f.Features().Disconnect
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.OpenChunkWriter = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
// FIXME _ fs.FullObject = (*Object)(nil)
)


@@ -0,0 +1,221 @@
//go:build !plan9
package archive
import (
"bytes"
"context"
"fmt"
"os"
"os/exec"
"path"
"path/filepath"
"strconv"
"strings"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// FIXME need to test Open with seek
// run - run a shell command
func run(t *testing.T, args ...string) {
cmd := exec.Command(args[0], args[1:]...)
fs.Debugf(nil, "run args = %v", args)
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf(`
----------------------------
Failed to run %v: %v
Command output was:
%s
----------------------------
`, args, err, out)
}
}
// check the dst and src are identical
func checkTree(ctx context.Context, name string, t *testing.T, dstArchive, src string, expectedCount int) {
t.Run(name, func(t *testing.T) {
fs.Debugf(nil, "check %q vs %q", dstArchive, src)
Farchive, err := cache.Get(ctx, dstArchive)
if err != fs.ErrorIsFile {
require.NoError(t, err)
}
Fsrc, err := cache.Get(ctx, src)
if err != fs.ErrorIsFile {
require.NoError(t, err)
}
var matches bytes.Buffer
opt := operations.CheckOpt{
Fdst: Farchive,
Fsrc: Fsrc,
Match: &matches,
}
for _, action := range []string{"Check", "Download"} {
t.Run(action, func(t *testing.T) {
matches.Reset()
if action == "Download" {
assert.NoError(t, operations.CheckDownload(ctx, &opt))
} else {
assert.NoError(t, operations.Check(ctx, &opt))
}
if expectedCount > 0 {
assert.Equal(t, expectedCount, strings.Count(matches.String(), "\n"))
}
})
}
t.Run("NewObject", func(t *testing.T) {
// Check we can run NewObject on all files and read them
assert.NoError(t, operations.ListFn(ctx, Fsrc, func(srcObj fs.Object) {
if t.Failed() {
return
}
remote := srcObj.Remote()
archiveObj, err := Farchive.NewObject(ctx, remote)
require.NoError(t, err, remote)
assert.Equal(t, remote, archiveObj.Remote(), remote)
// Test that the contents are the same
archiveBuf := fstests.ReadObject(ctx, t, archiveObj, -1)
srcBuf := fstests.ReadObject(ctx, t, srcObj, -1)
assert.Equal(t, srcBuf, archiveBuf)
if len(srcBuf) < 81 {
return
}
// Tests that Open works with SeekOption
assert.Equal(t, srcBuf[50:], fstests.ReadObject(ctx, t, archiveObj, -1, &fs.SeekOption{Offset: 50}), "contents differ after seek")
// Tests that Open works with RangeOption
for _, test := range []struct {
ro fs.RangeOption
wantStart, wantEnd int
}{
{fs.RangeOption{Start: 5, End: 15}, 5, 16},
{fs.RangeOption{Start: 80, End: -1}, 80, len(srcBuf)},
{fs.RangeOption{Start: 81, End: 100000}, 81, len(srcBuf)},
{fs.RangeOption{Start: -1, End: 20}, len(srcBuf) - 20, len(srcBuf)}, // if start is omitted this means get the final bytes
// {fs.RangeOption{Start: -1, End: -1}, 0, len(srcBuf)}, - this seems to work but the RFC doesn't define it
} {
got := fstests.ReadObject(ctx, t, archiveObj, -1, &test.ro)
foundAt := strings.Index(srcBuf, got)
help := fmt.Sprintf("%#v failed want [%d:%d] got [%d:%d]", test.ro, test.wantStart, test.wantEnd, foundAt, foundAt+len(got))
assert.Equal(t, srcBuf[test.wantStart:test.wantEnd], got, help)
}
// Test that the modtimes are correct
fstest.AssertTimeEqualWithPrecision(t, remote, srcObj.ModTime(ctx), archiveObj.ModTime(ctx), Farchive.Precision())
// Test that the sizes are correct
assert.Equal(t, srcObj.Size(), archiveObj.Size())
// Test that Strings are OK
assert.Equal(t, srcObj.String(), archiveObj.String())
}))
})
// t.Logf("Fdst ------------- %v", Fdst)
// operations.List(ctx, Fdst, os.Stdout)
// t.Logf("Fsrc ------------- %v", Fsrc)
// operations.List(ctx, Fsrc, os.Stdout)
})
}
// test creating and reading back some archives
//
// Note that this uses rclone and zip as external binaries.
func testArchive(t *testing.T, archiveName string, archiveFn func(t *testing.T, output, input string)) {
ctx := context.Background()
checkFiles := 1000
// create random test input files
inputRoot := t.TempDir()
input := filepath.Join(inputRoot, archiveName)
require.NoError(t, os.Mkdir(input, 0777))
run(t, "rclone", "test", "makefiles", "--files", strconv.Itoa(checkFiles), "--ascii", input)
// Create the archive
output := t.TempDir()
zipFile := path.Join(output, archiveName)
archiveFn(t, zipFile, input)
// Check the archive itself
checkTree(ctx, "Archive", t, ":archive:"+zipFile, input, checkFiles)
// Now check a subdirectory
fis, err := os.ReadDir(input)
require.NoError(t, err)
subDir := "NOT FOUND"
aFile := "NOT FOUND"
for _, fi := range fis {
if fi.IsDir() {
subDir = fi.Name()
} else {
aFile = fi.Name()
}
}
checkTree(ctx, "SubDir", t, ":archive:"+zipFile+"/"+subDir, filepath.Join(input, subDir), 0)
// Now check a single file
fiCtx, fi := filter.AddConfig(ctx)
require.NoError(t, fi.AddRule("+ "+aFile))
require.NoError(t, fi.AddRule("- *"))
checkTree(fiCtx, "SingleFile", t, ":archive:"+zipFile+"/"+aFile, filepath.Join(input, aFile), 0)
// Now check the level above
checkTree(ctx, "Root", t, ":archive:"+output, inputRoot, checkFiles)
// run(t, "cp", "-a", inputRoot, output, "/tmp/test-"+archiveName)
}
// Skip the test if the named executable is not installed
func skipIfNoExe(t *testing.T, exeName string) {
_, err := exec.LookPath(exeName)
if err != nil {
t.Skipf("%s executable not installed", exeName)
}
}
// Test creating and reading back some archives
//
// Note that this uses rclone and zip as external binaries.
func TestArchiveZip(t *testing.T) {
fstest.Initialise()
skipIfNoExe(t, "zip")
skipIfNoExe(t, "rclone")
testArchive(t, "test.zip", func(t *testing.T, output, input string) {
oldcwd, err := os.Getwd()
require.NoError(t, err)
require.NoError(t, os.Chdir(input))
defer func() {
require.NoError(t, os.Chdir(oldcwd))
}()
run(t, "zip", "-9r", output, ".")
})
}
// Test creating and reading back some archives
//
// Note that this uses rclone and squashfs as external binaries.
func TestArchiveSquashfs(t *testing.T) {
fstest.Initialise()
skipIfNoExe(t, "mksquashfs")
skipIfNoExe(t, "rclone")
testArchive(t, "test.sqfs", func(t *testing.T, output, input string) {
run(t, "mksquashfs", input, output)
})
}


@@ -0,0 +1,67 @@
//go:build !plan9
// Test Archive filesystem interface
package archive_test
import (
"testing"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/memory"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
var (
unimplementableFsMethods = []string{"ListR", "ListP", "MkdirMetadata", "DirSetModTime"}
// In these tests we receive objects from the underlying remote which don't implement these methods
unimplementableObjectMethods = []string{"GetTier", "ID", "Metadata", "MimeType", "SetTier", "UnWrap", "SetMetadata"}
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
func TestLocal(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
remote := t.TempDir()
name := "TestArchiveLocal"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "archive"},
{Name: name, Key: "remote", Value: remote},
},
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
func TestMemory(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
remote := ":memory:"
name := "TestArchiveMemory"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "archive"},
{Name: name, Key: "remote", Value: remote},
},
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}


@@ -0,0 +1,7 @@
// Build for archive for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9
// Package archive implements a backend to access archive files in a remote
package archive


@@ -0,0 +1,24 @@
// Package archiver registers all the archivers
package archiver
import (
"context"
"github.com/rclone/rclone/fs"
)
// Archiver describes an archive package
type Archiver struct {
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefix with prefix and rooted at root
New func(ctx context.Context, f fs.Fs, remote, prefix, root string) (fs.Fs, error)
Extension string
}
// Archivers is a slice of all registered archivers
var Archivers []Archiver
// Register adds the archivers provided to the list of known archivers
func Register(as ...Archiver) {
Archivers = append(Archivers, as...)
}
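For reference, the zip and squashfs packages further down hook in exactly this way; a hypothetical tar archiver would register itself from its init function. A minimal sketch, assuming a New constructor with the signature above:
// Hypothetical: in a package implementing a tar archiver.
func init() {
	archiver.Register(archiver.Archiver{
		New:       New, // func(ctx context.Context, f fs.Fs, remote, prefix, root string) (fs.Fs, error)
		Extension: ".tar",
	})
}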


@@ -0,0 +1,233 @@
// Package base is a base archive Fs
package base
import (
"context"
"errors"
"fmt"
"io"
"path"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/vfs"
)
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
node vfs.Node // archive object
remote string // remote of the archive object
prefix string // position for objects
prefixSlash string // position for objects with a slash on
root string // position to read from within the archive
}
var errNotImplemented = errors.New("internal error: method not implemented in archiver")
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefix with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (*Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
VFS := vfs.New(wrappedFs, nil)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
remote: remote,
root: root,
prefix: prefix,
prefixSlash: prefix + "/",
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return f.name
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return nil, errNotImplemented
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
return nil, errNotImplemented
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw zip file
type Object struct {
f *Fs
remote string
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return -1
}
// ModTime returns the modification time of the object
//
// This base implementation doesn't store a modification time so it
// returns the current time.
func (o *Object) ModTime(ctx context.Context) time.Time {
return time.Now()
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
return nil, errNotImplemented
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -0,0 +1,165 @@
package squashfs
// Could just be using bare object Open with RangeRequest which
// would transfer the minimum amount of data but may be slower.
import (
"errors"
"fmt"
"io/fs"
"os"
"sync"
"github.com/diskfs/go-diskfs/backend"
"github.com/rclone/rclone/vfs"
)
// Cache file handles for accessing the file
type cache struct {
node vfs.Node
fhsMu sync.Mutex
fhs []cacheHandle
}
// A cached file handle
type cacheHandle struct {
offset int64
fh vfs.Handle
}
// Make a new cache
func newCache(node vfs.Node) *cache {
return &cache{
node: node,
}
}
// Get a vfs.Handle from the pool or open one
//
// This tries to find an open file handle which doesn't require seeking.
func (c *cache) open(off int64) (fh vfs.Handle, err error) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
if len(c.fhs) > 0 {
// Look for exact match first
for i, cfh := range c.fhs {
if cfh.offset == off {
// fs.Debugf(nil, "CACHE MATCH")
c.fhs = append(c.fhs[:i], c.fhs[i+1:]...)
return cfh.fh, nil
}
}
// fs.Debugf(nil, "CACHE MISS")
// Just take the first one if not found
cfh := c.fhs[0]
c.fhs = c.fhs[1:]
return cfh.fh, nil
}
fh, err = c.node.Open(os.O_RDONLY)
if err != nil {
return nil, fmt.Errorf("failed to open squashfs archive: %w", err)
}
return fh, nil
}
// Close a vfs.Handle or return it to the pool
//
// off should be the offset the file handle would read from without seeking
func (c *cache) close(fh vfs.Handle, off int64) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
c.fhs = append(c.fhs, cacheHandle{
offset: off,
fh: fh,
})
}
// ReadAt reads len(p) bytes into p starting at offset off in the underlying
// input source. It returns the number of bytes read (0 <= n <= len(p)) and any
// error encountered.
//
// When ReadAt returns n < len(p), it returns a non-nil error explaining why
// more bytes were not returned. In this respect, ReadAt is stricter than Read.
//
// Even if ReadAt returns n < len(p), it may use all of p as scratch
// space during the call. If some data is available but not len(p) bytes,
// ReadAt blocks until either all the data is available or an error occurs.
// In this respect ReadAt is different from Read.
//
// If the n = len(p) bytes returned by ReadAt are at the end of the input
// source, ReadAt may return either err == EOF or err == nil.
//
// If ReadAt is reading from an input source with a seek offset, ReadAt should
// not affect nor be affected by the underlying seek offset.
//
// Clients of ReadAt can execute parallel ReadAt calls on the same input
// source.
//
// Implementations must not retain p.
func (c *cache) ReadAt(p []byte, off int64) (n int, err error) {
fh, err := c.open(off)
if err != nil {
return n, err
}
defer func() {
c.close(fh, off+int64(len(p)))
}()
// fs.Debugf(nil, "ReadAt(p[%d], off=%d, fh=%p)", len(p), off, fh)
return fh.ReadAt(p, off)
}
var errCacheNotImplemented = errors.New("internal error: squashfs cache doesn't implement method")
// WriteAt method dummy stub to satisfy interface
func (c *cache) WriteAt(p []byte, off int64) (n int, err error) {
return 0, errCacheNotImplemented
}
// Seek method dummy stub to satisfy interface
func (c *cache) Seek(offset int64, whence int) (int64, error) {
return 0, errCacheNotImplemented
}
// Read method dummy stub to satisfy interface
func (c *cache) Read(p []byte) (n int, err error) {
return 0, errCacheNotImplemented
}
func (c *cache) Stat() (fs.FileInfo, error) {
return nil, errCacheNotImplemented
}
// Close the file
func (c *cache) Close() (err error) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
// Close any open file handles
for i := range c.fhs {
fh := &c.fhs[i]
newErr := fh.fh.Close()
if err == nil {
err = newErr
}
}
c.fhs = nil
return err
}
// Sys returns OS-specific file for ioctl calls via fd
func (c *cache) Sys() (*os.File, error) {
return nil, errCacheNotImplemented
}
// Writable returns file for read-write operations
func (c *cache) Writable() (backend.WritableFile, error) {
return nil, errCacheNotImplemented
}
// check interfaces
var _ backend.Storage = (*cache)(nil)
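The pool pays off because go-diskfs mostly issues sequential reads: a handle returned at offset off+len(p) is an exact match for the next ReadAt, so each reader sees one continuous stream with no seeking. Roughly (hypothetical offsets):
// c := newCache(node)
// buf := make([]byte, 4096)
// _, _ = c.ReadAt(buf, 0)    // no pooled handle: opens one, returns it keyed at 4096
// _, _ = c.ReadAt(buf, 4096) // exact match at 4096: reuses that handle
// _, _ = c.ReadAt(buf, 0)    // no match: falls back to the oldest pooled handle
// _ = c.Close()              // closes everything left in the pool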


@@ -0,0 +1,446 @@
// Package squashfs implements a squashfs archiver for the archive backend
package squashfs
import (
"context"
"fmt"
"io"
"path"
"strings"
"time"
"github.com/diskfs/go-diskfs/filesystem/squashfs"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
)
func init() {
archiver.Register(archiver.Archiver{
New: New,
Extension: ".sqfs",
})
}
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
sqfs *squashfs.FileSystem // interface to the squashfs
c *cache
node vfs.Node // squashfs file object - set if reading
remote string // remote of the squashfs file object
prefix string // position for objects
prefixSlash string // position for objects with a slash on
root string // position to read from within the archive
}
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefix with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (fs.Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "Squashfs: New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
vfsOpt := vfscommon.Opt
vfsOpt.ReadWait = 0
VFS := vfs.New(wrappedFs, &vfsOpt)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
c := newCache(node)
// FIXME blocksize
sqfs, err := squashfs.Read(c, node.Size(), 0, 1024*1024)
if err != nil {
return nil, fmt.Errorf("failed to read squashfs: %w", err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
sqfs: sqfs,
c: c,
remote: remote,
root: strings.Trim(root, "/"),
prefix: prefix,
prefixSlash: prefix + "/",
}
if prefix == "" {
f.prefixSlash = ""
}
singleObject := false
// Find the directory the root points to
if f.root != "" && !strings.HasSuffix(root, "/") {
native, err := f.toNative("")
if err == nil {
native = strings.TrimRight(native, "/")
_, err := f.newObjectNative(native)
if err == nil {
// It pointed to a file - return fs.ErrorIsFile and root the Fs at the directory above
singleObject = true
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
}
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported with squashfs
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if singleObject {
return f, fs.ErrorIsFile
}
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Squashfs %q", f.name)
}
// This turns a remote into a native path in the squashfs starting with a /
func (f *Fs) toNative(remote string) (string, error) {
native := strings.Trim(remote, "/")
if f.prefix == "" {
native = "/" + native
} else if native == f.prefix {
native = "/"
} else if !strings.HasPrefix(native, f.prefixSlash) {
return "", fmt.Errorf("internal error: %q doesn't start with prefix %q", native, f.prefixSlash)
} else {
native = native[len(f.prefix):]
}
if f.root != "" {
native = "/" + f.root + native
}
return native, nil
}
// Turn a (nativeDir, leaf) into a remote
func (f *Fs) fromNative(nativeDir string, leaf string) string {
// fs.Debugf(nil, "nativeDir = %q, leaf = %q, root=%q", nativeDir, leaf, f.root)
dir := nativeDir
if f.root != "" {
dir = strings.TrimPrefix(dir, "/"+f.root)
}
remote := f.prefixSlash + strings.Trim(path.Join(dir, leaf), "/")
// fs.Debugf(nil, "dir = %q, remote=%q", dir, remote)
return remote
}
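// As an illustration of the mapping above (hypothetical names), with
// prefix "a.sqfs" and root "sub":
//
//	toNative("a.sqfs/dir/file")    -> "/sub/dir/file"
//	fromNative("/sub/dir", "file") -> "a.sqfs/dir/file"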
// Convert a FileInfo into an Object from native dir
func (f *Fs) objectFromFileInfo(nativeDir string, item squashfs.FileStat) *Object {
return &Object{
fs: f,
remote: f.fromNative(nativeDir, item.Name()),
size: item.Size(),
modTime: item.ModTime(),
item: item,
}
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
nativeDir, err := f.toNative(dir)
if err != nil {
return nil, err
}
items, err := f.sqfs.ReadDir(nativeDir)
if err != nil {
return nil, fmt.Errorf("read squashfs: couldn't read directory: %w", err)
}
entries = make(fs.DirEntries, 0, len(items))
for _, fi := range items {
item, ok := fi.(squashfs.FileStat)
if !ok {
return nil, fmt.Errorf("internal error: unexpected type for %q: %T", fi.Name(), fi)
}
// fs.Debugf(item.Name(), "entry = %#v", item)
var entry fs.DirEntry
if item.IsDir() {
var remote = f.fromNative(nativeDir, item.Name())
entry = fs.NewDir(remote, item.ModTime())
} else {
if item.Mode().IsRegular() {
entry = f.objectFromFileInfo(nativeDir, item)
} else {
fs.Debugf(item.Name(), "FIXME Not regular file - skipping")
continue
}
}
entries = append(entries, entry)
}
// fs.Debugf(f, "dir=%q, entries=%v", dir, entries)
return entries, nil
}
// newObjectNative finds the object at the native path passed in
func (f *Fs) newObjectNative(nativePath string) (o fs.Object, err error) {
// get the path and filename
dir, leaf := path.Split(nativePath)
dir = strings.TrimRight(dir, "/")
leaf = strings.Trim(leaf, "/")
// FIXME need to detect directory not found
fis, err := f.sqfs.ReadDir(dir)
if err != nil {
return nil, fs.ErrorObjectNotFound
}
for _, fi := range fis {
if fi.Name() == leaf {
if fi.IsDir() {
return nil, fs.ErrorNotAFile
}
item, ok := fi.(squashfs.FileStat)
if !ok {
return nil, fmt.Errorf("internal error: unexpected type for %q: %T", fi.Name(), fi)
}
o = f.objectFromFileInfo(dir, item)
break
}
}
if o == nil {
return nil, fs.ErrorObjectNotFound
}
return o, nil
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%v, err=%v", &o, &err)
nativePath, err := f.toNative(remote)
if err != nil {
return nil, err
}
return f.newObjectNative(nativePath)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw squashfs file
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
item squashfs.FileStat
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Turn a squashfs path into a full path for the parent Fs
// func (o *Object) path(remote string) string {
// return path.Join(o.fs.prefix, remote)
// }
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the object
//
// This is read from the file metadata in the squashfs.
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
remote, err := o.fs.toNative(o.remote)
if err != nil {
return nil, err
}
fs.Debugf(o, "Opening %q", remote)
//fh, err := o.fs.sqfs.OpenFile(remote, os.O_RDONLY)
fh, err := o.item.Open()
if err != nil {
return nil, err
}
// discard data from start as necessary
if offset > 0 {
_, err = fh.Seek(offset, io.SeekStart)
if err != nil {
return nil, err
}
}
// If limited then don't return everything
if limit >= 0 {
fs.Debugf(nil, "limit=%d, offset=%d, options=%v", limit, offset, options)
return readers.NewLimitedReadCloser(fh, limit), nil
}
return fh, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

backend/archive/zip/zip.go

@@ -0,0 +1,385 @@
// Package zip implements a zip archiver for the archive backend
package zip
import (
"archive/zip"
"context"
"errors"
"fmt"
"io"
"os"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/dirtree"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
)
func init() {
archiver.Register(archiver.Archiver{
New: New,
Extension: ".zip",
})
}
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
node vfs.Node // zip file object - set if reading
remote string // remote of the zip file object
prefix string // position for objects
prefixSlash string // position for objects with a slash on
root string // position to read from within the archive
dt dirtree.DirTree // read from zipfile
}
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefix with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (fs.Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "Zip: New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
vfsOpt := vfscommon.Opt
vfsOpt.ReadWait = 0
VFS := vfs.New(wrappedFs, &vfsOpt)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
remote: remote,
root: root,
prefix: prefix,
prefixSlash: prefix + "/",
}
// Read the contents of the zip file
singleObject, err := f.readZip()
if err != nil {
return nil, fmt.Errorf("failed to open zip file: %w", err)
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported with zip
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if singleObject {
return f, fs.ErrorIsFile
}
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Zip %q", f.name)
}
// readZip the zip file into f
//
// Returns singleObject=true if f.root points to a file
func (f *Fs) readZip() (singleObject bool, err error) {
if f.node == nil {
return singleObject, fs.ErrorDirNotFound
}
size := f.node.Size()
if size < 0 {
return singleObject, errors.New("can't read from zip file with unknown size")
}
r, err := f.node.Open(os.O_RDONLY)
if err != nil {
return singleObject, fmt.Errorf("failed to open zip file: %w", err)
}
zr, err := zip.NewReader(r, size)
if err != nil {
return singleObject, fmt.Errorf("failed to read zip file: %w", err)
}
dt := dirtree.New()
for _, file := range zr.File {
remote := strings.Trim(path.Clean(file.Name), "/")
if remote == "." {
remote = ""
}
remote = path.Join(f.prefix, remote)
if f.root != "" {
// Ignore all files outside the root
if !strings.HasPrefix(remote, f.root) {
continue
}
if remote == f.root {
remote = ""
} else {
remote = strings.TrimPrefix(remote, f.root+"/")
}
}
if strings.HasSuffix(file.Name, "/") {
dir := fs.NewDir(remote, file.Modified)
dt.AddDir(dir)
} else {
if remote == "" {
remote = path.Base(f.root)
singleObject = true
dt = dirtree.New()
}
o := &Object{
f: f,
remote: remote,
fh: &file.FileHeader,
file: file,
}
dt.Add(o)
if singleObject {
break
}
}
}
dt.CheckParents("")
dt.Sort()
f.dt = dt
//fs.Debugf(nil, "dt = %v", dt)
return singleObject, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
// _, err = f.strip(dir)
// if err != nil {
// return nil, err
// }
entries, ok := f.dt[dir]
if !ok {
return nil, fs.ErrorDirNotFound
}
fs.Debugf(f, "dir=%q, entries=%v", dir, entries)
return entries, nil
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%v, err=%v", &o, &err)
if f.dt == nil {
return nil, fs.ErrorObjectNotFound
}
_, entry := f.dt.Find(remote)
if entry == nil {
return nil, fs.ErrorObjectNotFound
}
o, ok := entry.(*Object)
if !ok {
return nil, fs.ErrorNotAFile
}
return o, nil
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.CRC32)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw zip file
type Object struct {
f *Fs
remote string
fh *zip.FileHeader
file *zip.File
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.fh.UncompressedSize64)
}
// ModTime returns the modification time of the object
//
// This is read from the modification time in the zip file header.
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.fh.Modified
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
if ht == hash.CRC32 {
// FIXME return empty CRC if writing
if o.f.dt == nil {
return "", nil
}
return fmt.Sprintf("%08x", o.fh.CRC32), nil
}
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
rc, err = o.file.Open()
if err != nil {
return nil, err
}
// discard data from start as necessary
if offset > 0 {
_, err = io.CopyN(io.Discard, rc, offset)
if err != nil {
return nil, err
}
}
// If limited then don't return everything
if limit >= 0 {
return readers.NewLimitedReadCloser(rc, limit), nil
}
return rc, nil
}
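A zip entry only exposes a forward-only decompressing reader, so the offset of a SeekOption is honoured by decompressing and discarding the first offset bytes with io.CopyN, and a RangeOption additionally caps the reader with a LimitedReadCloser. From the caller's side (hypothetical object o):
// rc, _ := o.Open(ctx, &fs.RangeOption{Start: 100, End: 199})
// // rc yields bytes [100,200) of the entry; the first 100 decompressed
// // bytes were discarded internally.
// _ = rc.Close()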
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

File diff suppressed because it is too large


@@ -1,36 +1,151 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"context"
"encoding/base64"
"strings"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
func TestBlockIDCreator(t *testing.T) {
// Check creation and random number
bic, err := newBlockIDCreator()
require.NoError(t, err)
bic2, err := newBlockIDCreator()
require.NoError(t, err)
assert.NotEqual(t, bic.random, bic2.random)
assert.NotEqual(t, bic.random, [8]byte{})
// Set random to known value for tests
bic.random = [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
chunkNumber := uint64(0xFEDCBA9876543210)
// Check creation of ID
want := base64.StdEncoding.EncodeToString([]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10, 1, 2, 3, 4, 5, 6, 7, 8})
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", want)
got := bic.newBlockID(chunkNumber)
assert.Equal(t, want, got)
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", got)
// Test checkID is working
assert.NoError(t, bic.checkID(chunkNumber, got))
assert.ErrorContains(t, bic.checkID(chunkNumber, "$"+got), "illegal base64")
assert.ErrorContains(t, bic.checkID(chunkNumber, "AAAA"+got), "bad block ID length")
assert.ErrorContains(t, bic.checkID(chunkNumber+1, got), "expecting decoded")
assert.ErrorContains(t, bic2.checkID(chunkNumber, got), "random bytes")
}
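The expected string pins down the block ID layout: 8 big-endian bytes of chunk number followed by the creator's 8 random bytes, base64 encoded. A sketch of newBlockID consistent with the assertions, assuming encoding/binary is imported (the real implementation is in the suppressed azureblob.go diff):
func (bic *blockIDCreator) newBlockID(chunkNumber uint64) string {
	var id [16]byte
	binary.BigEndian.PutUint64(id[:8], chunkNumber) // 0xFEDCBA9876543210 -> FE DC ... 10
	copy(id[8:], bic.random[:])                     // then the 8 random bytes
	return base64.StdEncoding.EncodeToString(id[:])
}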
func (f *Fs) testFeatures(t *testing.T) {
// Check first feature flags are set on this remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}
func TestIncrement(t *testing.T) {
for _, test := range []struct {
in []byte
want []byte
}{
{[]byte{0, 0, 0, 0}, []byte{1, 0, 0, 0}},
{[]byte{0xFE, 0, 0, 0}, []byte{0xFF, 0, 0, 0}},
{[]byte{0xFF, 0, 0, 0}, []byte{0, 1, 0, 0}},
{[]byte{0, 1, 0, 0}, []byte{1, 1, 0, 0}},
{[]byte{0xFF, 0xFF, 0xFF, 0xFE}, []byte{0, 0, 0, 0xFF}},
{[]byte{0xFF, 0xFF, 0xFF, 0xFF}, []byte{0, 0, 0, 0}},
} {
increment(test.in)
assert.Equal(t, test.want, test.in)
}
}
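The table treats increment as a little-endian byte counter: bump the low byte and propagate the carry, wrapping to zero on overflow. A sketch matching every case above (the real implementation is in the suppressed diff):
func increment(xs []byte) {
	for i, digit := range xs {
		newDigit := digit + 1
		xs[i] = newDigit
		if newDigit != 0 {
			break // no carry to propagate
		}
	}
}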
type ReadSeekCloser struct {
*strings.Reader
}
func (r *ReadSeekCloser) Close() error {
return nil
}
// Stage a block at remote but don't commit it
func (f *Fs) stageBlockWithoutCommit(ctx context.Context, t *testing.T, remote string) {
var (
containerName, blobPath = f.split(remote)
containerClient = f.cntSVC(containerName)
blobClient = containerClient.NewBlockBlobClient(blobPath)
data = "uncommitted data"
blockID = "1"
blockIDBase64 = base64.StdEncoding.EncodeToString([]byte(blockID))
)
r := &ReadSeekCloser{strings.NewReader(data)}
_, err := blobClient.StageBlock(ctx, blockIDBase64, r, nil)
require.NoError(t, err)
// Verify the block is staged but not committed
blockList, err := blobClient.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
require.NoError(t, err)
found := false
for _, block := range blockList.UncommittedBlocks {
if *block.Name == blockIDBase64 {
found = true
break
}
}
require.True(t, found, "Block ID not found in uncommitted blocks")
}
// This tests uploading a blob where it has uncommitted blocks with a different ID size.
//
// https://gauravmantri.com/2013/05/18/windows-azure-blob-storage-dealing-with-the-specified-blob-or-block-content-is-invalid-error/
//
// TestIntegration/FsMkdir/FsPutFiles/Internal/WriteUncommittedBlocks
func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
var (
ctx = context.Background()
remote = "testBlob"
)
// Multipart copy the blob please
oldUseCopyBlob, oldCopyCutoff := f.opt.UseCopyBlob, f.opt.CopyCutoff
f.opt.UseCopyBlob = false
f.opt.CopyCutoff = f.opt.ChunkSize
defer func() {
f.opt.UseCopyBlob, f.opt.CopyCutoff = oldUseCopyBlob, oldCopyCutoff
}()
// Create a blob with uncommitted blocks
f.stageBlockWithoutCommit(ctx, t, remote)
// Now attempt to overwrite the block with a different sized block ID to provoke this error
// Check the object does not exist
_, err := f.NewObject(ctx, remote)
require.Equal(t, fs.ErrorObjectNotFound, err)
// Upload a multipart file over the block with uncommitted chunks of a different ID size
size := 4*int(f.opt.ChunkSize) - 1
contents := random.String(size)
item := fstest.NewItem(remote, contents, fstest.Time("2001-05-06T04:05:06.499Z"))
o := fstests.PutTestContents(ctx, t, f, &item, contents, true)
// Check size
assert.Equal(t, int64(size), o.Size())
// Create a new blob with uncommitted blocks
newRemote := "testBlob2"
f.stageBlockWithoutCommit(ctx, t, newRemote)
// Copy over that block
dst, err := f.Copy(ctx, o, newRemote)
require.NoError(t, err)
// Check basics
assert.Equal(t, int64(size), dst.Size())
assert.Equal(t, newRemote, dst.Remote())
// Check contents
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents)
// Remove the object
require.NoError(t, dst.Remove(ctx))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
}


@@ -1,26 +1,51 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"context"
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestAzureBlob"
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{},
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
// TestIntegration2 runs integration tests against the remote
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
name := "TestAzureBlob"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -28,40 +53,15 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)
// TestServicePrincipalFileSuccess checks that, given a proper JSON file, we can create a token.
func TestServicePrincipalFileSuccess(t *testing.T) {
ctx := context.TODO()
credentials := `
{
"appId": "my application (client) ID",
"password": "my secret",
"tenant": "my active directory tenant ID"
}
`
tokenRefresher, err := newServicePrincipalTokenRefresher(ctx, []byte(credentials))
if assert.NoError(t, err) {
assert.NotNil(t, tokenRefresher)
}
}
// TestServicePrincipalFileFailure checks that, given a JSON file with a missing secret, it returns an error.
func TestServicePrincipalFileFailure(t *testing.T) {
ctx := context.TODO()
credentials := `
{
"appId": "my application (client) ID",
"tenant": "my active directory tenant ID"
}
`
_, err := newServicePrincipalTokenRefresher(ctx, []byte(credentials))
assert.Error(t, err)
assert.EqualError(t, err, "error creating service principal token: parameter 'secret' cannot be empty")
}
func TestValidateAccessTier(t *testing.T) {
tests := map[string]struct {
accessTier string
@@ -71,6 +71,7 @@ func TestValidateAccessTier(t *testing.T) {
"HOT": {"HOT", true},
"Hot": {"Hot", true},
"cool": {"cool", true},
"cold": {"cold", true},
"archive": {"archive", true},
"empty": {"", false},
"unknown": {"unknown", false},


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob


@@ -1,137 +0,0 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
)
const (
azureResource = "https://storage.azure.com"
imdsAPIVersion = "2018-02-01"
msiEndpointDefault = "http://169.254.169.254/metadata/identity/oauth2/token"
)
// This custom type is used to add the port the test server has bound to
// to the request context.
type testPortKey string
type msiIdentifierType int
const (
msiClientID msiIdentifierType = iota
msiObjectID
msiResourceID
)
type userMSI struct {
Type msiIdentifierType
Value string
}
type httpError struct {
Response *http.Response
}
func (e httpError) Error() string {
return fmt.Sprintf("HTTP error %v (%v)", e.Response.StatusCode, e.Response.Status)
}
// GetMSIToken attempts to obtain an MSI token from the Azure Instance
// Metadata Service.
func GetMSIToken(ctx context.Context, identity *userMSI) (adal.Token, error) {
// Attempt to get an MSI token; silently continue if unsuccessful.
// This code has been lovingly stolen from azcopy's OAuthTokenManager.
result := adal.Token{}
req, err := http.NewRequestWithContext(ctx, "GET", msiEndpointDefault, nil)
if err != nil {
fs.Debugf(nil, "Failed to create request: %v", err)
return result, err
}
params := req.URL.Query()
params.Set("resource", azureResource)
params.Set("api-version", imdsAPIVersion)
// Specify user-assigned identity if requested.
if identity != nil {
switch identity.Type {
case msiClientID:
params.Set("client_id", identity.Value)
case msiObjectID:
params.Set("object_id", identity.Value)
case msiResourceID:
params.Set("mi_res_id", identity.Value)
default:
// If this happens, the calling function and this one don't agree on
// what valid ID types exist.
return result, fmt.Errorf("unknown MSI identity type specified")
}
}
req.URL.RawQuery = params.Encode()
// The Metadata header is required by all calls to IMDS.
req.Header.Set("Metadata", "true")
// If this function is run in a test, query the test server instead of IMDS.
testPort, isTest := ctx.Value(testPortKey("testPort")).(int)
if isTest {
req.URL.Host = fmt.Sprintf("localhost:%d", testPort)
req.Host = req.URL.Host
}
// Send request
httpClient := fshttp.NewClient(ctx)
resp, err := httpClient.Do(req)
if err != nil {
return result, fmt.Errorf("MSI is not enabled on this VM: %w", err)
}
defer func() { // resp and Body should not be nil
_, err = io.Copy(ioutil.Discard, resp.Body)
if err != nil {
fs.Debugf(nil, "Unable to drain IMDS response: %v", err)
}
err = resp.Body.Close()
if err != nil {
fs.Debugf(nil, "Unable to close IMDS response: %v", err)
}
}()
// Check if the status code indicates success
// The request returns 200 currently, add 201 and 202 as well for possible extension.
switch resp.StatusCode {
case 200, 201, 202:
break
default:
body, _ := ioutil.ReadAll(resp.Body)
fs.Errorf(nil, "Couldn't obtain OAuth token from IMDS; server returned status code %d and body: %v", resp.StatusCode, string(body))
return result, httpError{Response: resp}
}
b, err := ioutil.ReadAll(resp.Body)
if err != nil {
return result, fmt.Errorf("couldn't read IMDS response: %w", err)
}
// Remove BOM, if any. azcopy does this so I'm following along.
b = bytes.TrimPrefix(b, []byte("\xef\xbb\xbf"))
// This would be a good place to persist the token if a large number of rclone
// invocations are being made in a short amount of time. If the token is
// persisted, the azureblob code will need to check for expiry before every
// storage API call.
err = json.Unmarshal(b, &result)
if err != nil {
return result, fmt.Errorf("couldn't unmarshal IMDS response: %w", err)
}
return result, nil
}


@@ -1,118 +0,0 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
import (
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strconv"
"strings"
"testing"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func handler(t *testing.T, actual *map[string]string) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
err := r.ParseForm()
require.NoError(t, err)
parameters := r.URL.Query()
(*actual)["path"] = r.URL.Path
(*actual)["Metadata"] = r.Header.Get("Metadata")
(*actual)["method"] = r.Method
for paramName := range parameters {
(*actual)[paramName] = parameters.Get(paramName)
}
// Make response.
response := adal.Token{}
responseBytes, err := json.Marshal(response)
require.NoError(t, err)
_, err = w.Write(responseBytes)
require.NoError(t, err)
}
}
func TestManagedIdentity(t *testing.T) {
// test user-assigned identity specifiers to use
testMSIClientID := "d859b29f-5c9c-42f8-a327-ec1bc6408d79"
testMSIObjectID := "9ffeb650-3ca0-4278-962b-5a38d520591a"
testMSIResourceID := "/subscriptions/fe714c49-b8a4-4d49-9388-96a20daa318f/resourceGroups/somerg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/someidentity"
tests := []struct {
identity *userMSI
identityParameterName string
expectedAbsent []string
}{
{&userMSI{msiClientID, testMSIClientID}, "client_id", []string{"object_id", "mi_res_id"}},
{&userMSI{msiObjectID, testMSIObjectID}, "object_id", []string{"client_id", "mi_res_id"}},
{&userMSI{msiResourceID, testMSIResourceID}, "mi_res_id", []string{"object_id", "client_id"}},
{nil, "(default)", []string{"object_id", "client_id", "mi_res_id"}},
}
alwaysExpected := map[string]string{
"path": "/metadata/identity/oauth2/token",
"resource": "https://storage.azure.com",
"Metadata": "true",
"api-version": "2018-02-01",
"method": "GET",
}
for _, test := range tests {
actual := make(map[string]string, 10)
testServer := httptest.NewServer(handler(t, &actual))
defer testServer.Close()
testServerPort, err := strconv.Atoi(strings.Split(testServer.URL, ":")[2])
require.NoError(t, err)
ctx := context.WithValue(context.TODO(), testPortKey("testPort"), testServerPort)
_, err = GetMSIToken(ctx, test.identity)
require.NoError(t, err)
// Validate expected query parameters present
expected := make(map[string]string)
for k, v := range alwaysExpected {
expected[k] = v
}
if test.identity != nil {
expected[test.identityParameterName] = test.identity.Value
}
for key := range expected {
value, exists := actual[key]
if assert.Truef(t, exists, "test of %s: query parameter %s was not passed",
test.identityParameterName, key) {
assert.Equalf(t, expected[key], value,
"test of %s: parameter %s has incorrect value", test.identityParameterName, key)
}
}
// Validate unexpected query parameters absent
for _, key := range test.expectedAbsent {
_, exists := actual[key]
assert.Falsef(t, exists, "query parameter %s was unexpectedly passed", key)
}
}
}
func errorHandler(resultCode int) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
http.Error(w, "Test error generated", resultCode)
}
}
func TestIMDSErrors(t *testing.T) {
errorCodes := []int{404, 429, 500}
for _, code := range errorCodes {
testServer := httptest.NewServer(errorHandler(code))
defer testServer.Close()
testServerPort, err := strconv.Atoi(strings.Split(testServer.URL, ":")[2])
require.NoError(t, err)
ctx := context.WithValue(context.TODO(), testPortKey("testPort"), testServerPort)
_, err = GetMSIToken(ctx, nil)
require.Error(t, err)
httpErr, ok := err.(httpError)
require.Truef(t, ok, "HTTP error %d did not result in an httpError object", code)
assert.Equalf(t, httpErr.Response.StatusCode, code, "desired error %d but didn't get it", code)
}
}

File diff suppressed because it is too large


@@ -0,0 +1,69 @@
//go:build !plan9 && !js
package azurefiles
import (
"context"
"math/rand"
"strings"
"testing"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Authentication", f.InternalTestAuth)
}
var _ fstests.InternalTester = (*Fs)(nil)
func (f *Fs) InternalTestAuth(t *testing.T) {
t.Skip("skipping since this requires authentication credentials which are not part of repo")
shareName := "test-rclone-oct-2023"
testCases := []struct {
name string
options *Options
}{
{
name: "ConnectionString",
options: &Options{
ShareName: shareName,
ConnectionString: "",
},
},
{
name: "AccountAndKey",
options: &Options{
ShareName: shareName,
Account: "",
Key: "",
}},
{
name: "SASUrl",
options: &Options{
ShareName: shareName,
SASURL: "",
}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fs, err := newFsFromOptions(context.TODO(), "TestAzureFiles", "", tc.options)
assert.NoError(t, err)
dirName := randomString(10)
assert.NoError(t, fs.Mkdir(context.TODO(), dirName))
})
}
}
const chars = "abcdefghijklmnopqrstuvwzyxABCDEFGHIJKLMNOPQRSTUVWZYX"
func randomString(charCount int) string {
strBldr := strings.Builder{}
for range charCount {
randPos := rand.Int63n(52)
strBldr.WriteByte(chars[randPos])
}
return strBldr.String()
}


@@ -0,0 +1,17 @@
//go:build !plan9 && !js
package azurefiles
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
var objPtr *Object
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureFiles:",
NilObject: objPtr,
})
}


@@ -0,0 +1,7 @@
// Build for azurefiles for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles


@@ -33,10 +33,19 @@ var _ fserrors.Fataler = (*Error)(nil)
// Bucket describes a B2 bucket
type Bucket struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
LifecycleRules []LifecycleRule `json:"lifecycleRules,omitempty"`
}
// LifecycleRule is a single lifecycle rule
type LifecycleRule struct {
DaysFromHidingToDeleting *int `json:"daysFromHidingToDeleting"`
DaysFromUploadingToHiding *int `json:"daysFromUploadingToHiding"`
DaysFromStartingToCancelingUnfinishedLargeFiles *int `json:"daysFromStartingToCancelingUnfinishedLargeFiles"`
FileNamePrefix string `json:"fileNamePrefix"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base
@@ -121,10 +130,10 @@ type AuthorizeAccountResponse struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
@@ -206,9 +215,10 @@ type FileInfo struct {
// CreateBucketRequest is used to create a bucket
type CreateBucketRequest struct {
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
AccountID string `json:"accountId"`
Name string `json:"bucketName"`
Type string `json:"bucketType"`
LifecycleRules []LifecycleRule `json:"lifecycleRules,omitempty"`
}
// DeleteBucketRequest is used to create a bucket
@@ -331,3 +341,11 @@ type CopyPartRequest struct {
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
}
// UpdateBucketRequest describes a request to modify a B2 bucket
type UpdateBucketRequest struct {
ID string `json:"bucketId"`
AccountID string `json:"accountId"`
Type string `json:"bucketType,omitempty"`
LifecycleRules []LifecycleRule `json:"lifecycleRules,omitempty"`
}
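As an illustration only (names and values made up), a request to delete hidden file versions under "backup/" after 30 days would marshal from something like:
// days := 30
// update := UpdateBucketRequest{
// 	ID:        bucketID,  // hypothetical
// 	AccountID: accountID, // hypothetical
// 	LifecycleRules: []LifecycleRule{{
// 		DaysFromHidingToDeleting: &days,
// 		FileNamePrefix:           "backup/",
// 	}},
// }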


@@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}

File diff suppressed because it is too large


@@ -1,14 +1,31 @@
package b2
import (
"context"
"crypto/sha1"
"fmt"
"path"
"sort"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
// https://www.backblaze.com/docs/cloud-storage-native-api-string-encoding
var encodeTest = []struct {
fullyEncoded string
@@ -168,3 +185,435 @@ func TestParseTimeString(t *testing.T) {
}
}
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
return headers
}
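// For example an option carrying the header "X-Bz-Info-a: 1" becomes
// headers["a"] = "1" - keys are lower-cased before the prefix is stripped.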
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
}
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limited to milliseconds since x-bz-info-src_last_modified_millis can't support any more precision
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 KiB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()
// Small pause to make the LastModified different since AWS
// only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// Create an object
const dirName = "versions"
const fileName = dirName + "/" + "test-versions.txt"
contents := random.String(100)
item := fstest.NewItem(fileName, contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
objMetadata, err := obj.(*Object).getMetaData(ctx)
require.NoError(t, err)
// Small pause
time.Sleep(2 * time.Second)
// Remove it
assert.NoError(t, obj.Remove(ctx))
// Small pause to make the LastModified different since AWS only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// And create it with different size and contents
newContents := random.String(101)
newItem := fstest.NewItem(fileName, newContents, fstest.Time("2002-05-06T04:05:06.499999999Z"))
newObj := fstests.PutTestContents(ctx, t, f, &newItem, newContents, true)
newObjMetadata, err := newObj.(*Object).getMetaData(ctx)
require.NoError(t, err)
t.Run("Versions", func(t *testing.T) {
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Read the contents
entries, err := f.List(ctx, dirName)
require.NoError(t, err)
tests := 0
var fileNameVersion string
for _, entry := range entries {
t.Log(entry)
remote := entry.Remote()
if remote == fileName {
t.Run("ReadCurrent", func(t *testing.T) {
assert.Equal(t, newContents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
tests++
} else if versionTime, p := version.Remove(remote); !versionTime.IsZero() && p == fileName {
t.Run("ReadVersion", func(t *testing.T) {
assert.Equal(t, contents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
assert.WithinDuration(t, time.Time(objMetadata.UploadTimestamp), versionTime, time.Second, "object time must be within 1 second of version time")
fileNameVersion = remote
tests++
}
}
assert.Equal(t, 2, tests, "object missing from listing")
// Check we can read the object with a version suffix
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, fileNameVersion)
require.NoError(t, err)
require.NotNil(t, o)
assert.Equal(t, int64(100), o.Size(), o.Remote())
})
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --b2-versions is set in the config of the new remote
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
t.Run("VersionAt", func(t *testing.T) {
// We set --b2-version-at for this test so make sure we reset it at the end
defer func() {
f.opt.VersionAt = fs.Time{}
}()
var (
firstObjectTime = time.Time(objMetadata.UploadTimestamp)
secondObjectTime = time.Time(newObjMetadata.UploadTimestamp)
)
for _, test := range []struct {
what string
at time.Time
want []fstest.Item
wantErr error
wantSize int64
}{
{
what: "Before",
at: firstObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterOne",
at: firstObjectTime.Add(time.Second),
want: append([]fstest.Item{item}, fstests.InternalTestFiles...),
wantSize: 100,
},
{
what: "AfterDelete",
at: secondObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterTwo",
at: secondObjectTime.Add(time.Second),
want: append([]fstest.Item{newItem}, fstests.InternalTestFiles...),
wantSize: 101,
},
} {
t.Run(test.what, func(t *testing.T) {
f.opt.VersionAt = fs.Time(test.at)
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
t.Run("NewObject", func(t *testing.T) {
gotObj, gotErr := f.NewObject(ctx, fileName)
assert.Equal(t, test.wantErr, gotErr)
if gotErr == nil {
assert.Equal(t, test.wantSize, gotObj.Size())
}
})
})
}
})
t.Run("Cleanup", func(t *testing.T) {
t.Run("DryRun", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should be unchanged after dry run
before := listAllFiles(ctx, t, f, dirName)
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, true, false, 0))
after := listAllFiles(ctx, t, f, dirName)
assert.Equal(t, before, after)
})
t.Run("RealThing", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should reflect current state after cleanup
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
})
})
// Purge gets tested later
}
func (f *Fs) InternalTestCleanupUnfinished(t *testing.T) {
ctx := context.Background()
// Tests cleaning up unfinished large file uploads
t.Run("CleanupUnfinished", func(t *testing.T) {
dirName := "unfinished"
fileCount := 5
expectedFiles := []string{}
for i := 1; i < fileCount; i++ {
fileName := fmt.Sprintf("%s/unfinished-%d", dirName, i)
expectedFiles = append(expectedFiles, fileName)
obj := &Object{
fs: f,
remote: fileName,
}
objInfo := object.NewStaticObjectInfo(fileName, fstest.Time("2002-02-03T04:05:06.499999999Z"), -1, true, nil, nil)
_, err := f.newLargeUpload(ctx, obj, nil, objInfo, f.opt.ChunkSize, false, nil)
require.NoError(t, err)
}
checkListing(ctx, t, f, dirName, expectedFiles)
t.Run("DryRun", func(t *testing.T) {
// Listing should not change after dry run
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, expectedFiles)
})
t.Run("RealThing", func(t *testing.T) {
// Listing should be empty after real cleanup
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, []string{})
})
})
}
func listAllFiles(ctx context.Context, t *testing.T, f *Fs, dirName string) []string {
bucket, directory := f.split(dirName)
foundFiles := []string{}
require.NoError(t, f.list(ctx, bucket, directory, "", false, true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
foundFiles = append(foundFiles, object.Name)
}
return nil
}))
sort.Strings(foundFiles)
return foundFiles
}
func checkListing(ctx context.Context, t *testing.T, f *Fs, dirName string, expectedFiles []string) {
foundFiles := listAllFiles(ctx, t, f, dirName)
sort.Strings(expectedFiles)
assert.Equal(t, expectedFiles, foundFiles)
}
func (f *Fs) InternalTestLifecycleRules(t *testing.T) {
ctx := context.Background()
opt := map[string]string{}
t.Run("InitState", func(t *testing.T) {
// There should be no lifecycle rules at the outset
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("DryRun", func(t *testing.T) {
// There should still be no lifecycle rules after each dry run operation
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 0, len(lifecycleRules))
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 0, len(lifecycleRules))
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("RealThing", func(t *testing.T) {
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
require.NoError(t, err)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
})
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
t.Run("CleanupUnfinished", f.InternalTestCleanupUnfinished)
t.Run("LifecycleRules", f.InternalTestLifecycleRules)
}
var _ fstests.InternalTester = (*Fs)(nil)
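The compile-time assertion above pins *Fs to the internal-test hook. Assuming fstests.InternalTester is the single-method interface its name suggests, the contract amounts to:

// The integration test framework calls this as a sub-test
// (see the -run comment above InternalTest).
type InternalTester interface {
	InternalTest(*testing.T)
}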

View File

@@ -28,7 +28,12 @@ func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)

View File

@@ -1,11 +1,10 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
// Docs - https://www.backblaze.com/docs/cloud-storage-large-files
package b2
import (
"bytes"
"context"
"crypto/sha1"
"encoding/hex"
@@ -21,6 +20,7 @@ import (
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pool"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
)
@@ -78,36 +78,31 @@ type largeUpload struct {
wrap accounting.WrapFn // account parts being transferred
id string // ID of the file being uploaded
size int64 // total size
parts int64 // calculated number of parts, if known
parts int // calculated number of parts, if known
sha1smu sync.Mutex // mutex to protect sha1s
sha1s []string // slice of SHA1s for each part
uploadMu sync.Mutex // lock for upload variable
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
chunkSize int64 // chunk size to use
src *Object // if copying, object we are reading from
info *api.FileInfo // final response with info about the object
}
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := int64(0)
sha1SliceSize := int64(maxParts)
parts := 0
chunkSize := defaultChunkSize
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else {
chunkSize = chunksize.Calculator(o, size, maxParts, defaultChunkSize)
parts = size / int64(chunkSize)
parts = int(size / int64(chunkSize))
if size%int64(chunkSize) != 0 {
parts++
}
sha1SliceSize = parts
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
@@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
@@ -150,7 +165,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
id: response.ID,
size: size,
parts: parts,
sha1s: make([]string, sha1SliceSize),
sha1s: make([]string, 0, 16),
chunkSize: int64(chunkSize),
}
// unwrap the accounting from the input, we use wrap to put it
@@ -169,24 +184,26 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
} else {
if len(up.uploads) > 0 {
upload, up.uploads = up.uploads[0], up.uploads[1:]
up.uploadMu.Unlock()
return upload, nil
}
up.uploadMu.Unlock()
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err = up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
return upload, nil
}
@@ -201,10 +218,39 @@ func (up *largeUpload) returnUploadURL(upload *api.GetUploadPartURLResponse) {
up.uploadMu.Unlock()
}
// Transfer a chunk
func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byte) error {
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Sending chunk %d length %d", part, len(body))
// Add a sha1 to the list of sha1s being built up
func (up *largeUpload) addSha1(chunkNumber int, sha1 string) {
up.sha1smu.Lock()
defer up.sha1smu.Unlock()
if len(up.sha1s) < chunkNumber+1 {
up.sha1s = append(up.sha1s, make([]string, chunkNumber+1-len(up.sha1s))...)
}
up.sha1s[chunkNumber] = sha1
}
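The function above grows the slice on demand so that chunks can report their SHA1s out of order; the same pattern in isolation:

// setAt grows s so index i is addressable, then stores v there.
func setAt(s []string, i int, v string) []string {
	if len(s) < i+1 {
		s = append(s, make([]string, i+1-len(s))...)
	}
	s[i] = v
	return s
}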
// WriteChunk will write chunk number chunkNumber with the bytes from reader, where chunkNumber >= 0
func (up *largeUpload) WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (size int64, err error) {
// Only account after the checksum reads have been done
if do, ok := reader.(pool.DelayAccountinger); ok {
// To figure out this number, do a transfer; if the accounted size is 0 or a
// multiple of what it should be, increase or decrease this number accordingly.
do.DelayAccounting(1)
}
err = up.f.pacer.Call(func() (bool, error) {
// Discover the size by seeking to the end
size, err = reader.Seek(0, io.SeekEnd)
if err != nil {
return false, err
}
// rewind the reader on retry and after reading size
_, err = reader.Seek(0, io.SeekStart)
if err != nil {
return false, err
}
fs.Debugf(up.o, "Sending chunk %d length %d", chunkNumber, size)
// Get upload URL
upload, err := up.getUploadURL(ctx)
@@ -212,8 +258,8 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
return false, err
}
in := newHashAppendingReader(bytes.NewReader(body), sha1.New())
size := int64(len(body)) + int64(in.AdditionalLength())
in := newHashAppendingReader(reader, sha1.New())
sizeWithHash := size + int64(in.AdditionalLength())
// Authorization
//
@@ -243,10 +289,10 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
Body: up.wrap(in),
ExtraHeaders: map[string]string{
"Authorization": upload.AuthorizationToken,
"X-Bz-Part-Number": fmt.Sprintf("%d", part),
"X-Bz-Part-Number": fmt.Sprintf("%d", chunkNumber+1),
sha1Header: "hex_digits_at_end",
},
ContentLength: &size,
ContentLength: &sizeWithHash,
}
var response api.UploadPartResponse
@@ -254,7 +300,7 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
retry, err := up.f.shouldRetry(ctx, resp, err)
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", part, retry, err, err)
fs.Debugf(up.o, "Error sending chunk %d (retry=%v): %v: %#v", chunkNumber, retry, err, err)
}
// On retryable error clear PartUploadURL
if retry {
@@ -262,30 +308,30 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
upload = nil
}
up.returnUploadURL(upload)
up.sha1s[part-1] = in.HexSum()
up.addSha1(chunkNumber, in.HexSum())
return retry, err
})
if err != nil {
fs.Debugf(up.o, "Error sending chunk %d: %v", part, err)
fs.Debugf(up.o, "Error sending chunk %d: %v", chunkNumber, err)
} else {
fs.Debugf(up.o, "Done sending chunk %d", part)
fs.Debugf(up.o, "Done sending chunk %d", chunkNumber)
}
return err
return size, err
}
// Copy a chunk
func (up *largeUpload) copyChunk(ctx context.Context, part int64, partSize int64) error {
func (up *largeUpload) copyChunk(ctx context.Context, part int, partSize int64) error {
err := up.f.pacer.Call(func() (bool, error) {
fs.Debugf(up.o, "Copying chunk %d length %d", part, partSize)
opts := rest.Opts{
Method: "POST",
Path: "/b2_copy_part",
}
offset := (part - 1) * up.chunkSize // where we are in the source file
offset := int64(part) * up.chunkSize // where we are in the source file
var request = api.CopyPartRequest{
SourceID: up.src.id,
LargeFileID: up.id,
PartNumber: part,
PartNumber: int64(part + 1),
Range: fmt.Sprintf("bytes=%d-%d", offset, offset+partSize-1),
}
var response api.UploadPartResponse
@@ -294,7 +340,7 @@ func (up *largeUpload) copyChunk(ctx context.Context, part int64, partSize int64
if err != nil {
fs.Debugf(up.o, "Error copying chunk %d (retry=%v): %v: %#v", part, retry, err, err)
}
up.sha1s[part-1] = response.SHA1
up.addSha1(part, response.SHA1)
return retry, err
})
if err != nil {
@@ -305,8 +351,8 @@ func (up *largeUpload) copyChunk(ctx context.Context, part int64, partSize int64
return err
}
// finish closes off the large upload
func (up *largeUpload) finish(ctx context.Context) error {
// Close closes off the large upload
func (up *largeUpload) Close(ctx context.Context) error {
fs.Debugf(up.o, "Finishing large file %s with %d parts", up.what, up.parts)
opts := rest.Opts{
Method: "POST",
@@ -324,11 +370,12 @@ func (up *largeUpload) finish(ctx context.Context) error {
if err != nil {
return err
}
return up.o.decodeMetaDataFileInfo(&response)
up.info = &response
return nil
}
// cancel aborts the large upload
func (up *largeUpload) cancel(ctx context.Context) error {
// Abort aborts the large upload
func (up *largeUpload) Abort(ctx context.Context) error {
fs.Debugf(up.o, "Cancelling large file %s", up.what)
opts := rest.Opts{
Method: "POST",
@@ -353,128 +400,102 @@ func (up *largeUpload) cancel(ctx context.Context) error {
// reaches EOF.
//
// Note that initialUploadBlock must be returned to f.putRW()
func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW) (err error) {
defer atexit.OnError(&err, func() { _ = up.Abort(ctx) })()
fs.Debugf(up.o, "Starting streaming of large file (id %q)", up.id)
var (
g, gCtx = errgroup.WithContext(ctx)
hasMoreParts = true
)
up.size = int64(len(initialUploadBlock))
g.Go(func() error {
for part := int64(1); hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var buf []byte
if part == 1 {
buf = initialUploadBlock
} else {
buf = up.f.getBuf(false)
}
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
up.f.putBuf(buf, false)
return nil
}
// Read the chunk
var n int
if part == 1 {
n = len(buf)
} else {
n, err = io.ReadFull(up.in, buf)
if err == io.ErrUnexpectedEOF {
fs.Debugf(up.o, "Read less than a full chunk, making this the last one.")
buf = buf[:n]
hasMoreParts = false
} else if err == io.EOF {
fs.Debugf(up.o, "Could not read any more bytes, previous chunk was the last.")
up.f.putBuf(buf, false)
return nil
} else if err != nil {
// other kinds of errors indicate failure
up.f.putBuf(buf, false)
return err
}
}
// Keep stats up to date
up.parts = part
up.size += int64(n)
if part > maxParts {
up.f.putBuf(buf, false)
return fmt.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts)
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putBuf(buf, false)
return up.transferChunk(gCtx, part, buf)
})
up.size = initialUploadBlock.Size()
up.parts = 0
for part := 0; hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var rw *pool.RW
if part == 0 {
rw = initialUploadBlock
} else {
rw = up.f.getRW(false)
}
return nil
})
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
up.f.putRW(rw)
break
}
// Read the chunk
var n int64
if part == 0 {
n = rw.Size()
} else {
n, err = io.CopyN(rw, up.in, up.chunkSize)
if err == io.EOF {
if n == 0 {
fs.Debugf(up.o, "Not sending empty chunk after EOF - ending.")
up.f.putRW(rw)
break
} else {
fs.Debugf(up.o, "Read less than a full chunk %d, making this the last one.", n)
}
hasMoreParts = false
} else if err != nil {
// other kinds of errors indicate failure
up.f.putRW(rw)
return err
}
}
// Keep stats up to date
up.parts += 1
up.size += n
if part > maxParts {
up.f.putRW(rw)
return fmt.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts)
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putRW(rw)
_, err = up.WriteChunk(gCtx, part, rw)
return err
})
}
err = g.Wait()
if err != nil {
return err
}
up.sha1s = up.sha1s[:up.parts]
return up.finish(ctx)
return up.Close(ctx)
}
// Upload uploads the chunks from the input
func (up *largeUpload) Upload(ctx context.Context) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
// Copy the chunks from the source to the destination
func (up *largeUpload) Copy(ctx context.Context) (err error) {
defer atexit.OnError(&err, func() { _ = up.Abort(ctx) })()
fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id)
var (
g, gCtx = errgroup.WithContext(ctx)
remaining = up.size
)
g.Go(func() error {
for part := int64(1); part <= up.parts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
buf := up.f.getBuf(up.doCopy)
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
up.f.putBuf(buf, up.doCopy)
return nil
}
reqSize := remaining
if reqSize >= up.chunkSize {
reqSize = up.chunkSize
}
if !up.doCopy {
// Read the chunk
buf = buf[:reqSize]
_, err = io.ReadFull(up.in, buf)
if err != nil {
up.f.putBuf(buf, up.doCopy)
return err
}
}
part := part // for the closure
g.Go(func() (err error) {
defer up.f.putBuf(buf, up.doCopy)
if !up.doCopy {
err = up.transferChunk(gCtx, part, buf)
} else {
err = up.copyChunk(gCtx, part, reqSize)
}
return err
})
remaining -= reqSize
g.SetLimit(up.f.opt.UploadConcurrency)
for part := range up.parts {
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in copying all the other parts.
if gCtx.Err() != nil {
break
}
return nil
})
reqSize := min(remaining, up.chunkSize)
part := part // for the closure
g.Go(func() (err error) {
return up.copyChunk(gCtx, part, reqSize)
})
remaining -= reqSize
}
err = g.Wait()
if err != nil {
return err
}
return up.finish(ctx)
return up.Close(ctx)
}
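To summarise the renumbering in this file: chunk numbers are now 0-based internally while the B2 API keeps 1-based part numbers, so the conversions used by WriteChunk and copyChunk reduce to the following sketch (names are illustrative):

// B2 expects X-Bz-Part-Number / PartNumber starting at 1.
func partNumber(chunkNumber int) int { return chunkNumber + 1 }

// Source offset when copying chunk chunkNumber.
func partOffset(chunkNumber int, chunkSize int64) int64 {
	return int64(chunkNumber) * chunkSize
}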

View File

@@ -52,7 +52,7 @@ func (e *Error) Error() string {
out += ": " + e.Message
}
if e.ContextInfo != nil {
out += fmt.Sprintf(" (%+v)", e.ContextInfo)
out += fmt.Sprintf(" (%s)", string(e.ContextInfo))
}
return out
}
@@ -63,7 +63,7 @@ var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link,owned_by"
// Types of things in Item
// Types of things in Item/ItemMini
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
@@ -72,20 +72,31 @@ const (
ItemStatusDeleted = "deleted"
)
// ItemMini is a subset of the elements in a full Item returned by some API calls
type ItemMini struct {
Type string `json:"type"`
ID string `json:"id"`
SequenceID int64 `json:"sequence_id,string"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
}
// Item describes a folder or a file as returned by Get Folder Items and others
type Item struct {
Type string `json:"type"`
ID string `json:"id"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
Size float64 `json:"size"` // box returns this in xEyy format for very large numbers - see #2261
CreatedAt Time `json:"created_at"`
ModifiedAt Time `json:"modified_at"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
Type string `json:"type"`
ID string `json:"id"`
SequenceID int64 `json:"sequence_id,string"`
Etag string `json:"etag"`
SHA1 string `json:"sha1"`
Name string `json:"name"`
Size float64 `json:"size"` // box returns this in xEyy format for very large numbers - see #2261
CreatedAt Time `json:"created_at"`
ModifiedAt Time `json:"modified_at"`
ContentCreatedAt Time `json:"content_created_at"`
ContentModifiedAt Time `json:"content_modified_at"`
ItemStatus string `json:"item_status"` // active, trashed if the file has been moved to the trash, and deleted if the file has been permanently deleted
Parent ItemMini `json:"parent"`
SharedLink struct {
URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"`
@@ -114,10 +125,21 @@ type FolderItems struct {
Offset int `json:"offset"`
Limit int `json:"limit"`
NextMarker *string `json:"next_marker,omitempty"`
Order []struct {
By string `json:"by"`
Direction string `json:"direction"`
} `json:"order"`
// There is some confusion about how this is actually
// returned. The []struct has worked for many years, but in
// https://github.com/rclone/rclone/issues/8776 box was
// returning it not as a list. We don't actually use
// this so comment it out.
//
// Order struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
//
// Order []struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
}
// Parent defined the ID of the parent directory
@@ -156,19 +178,7 @@ type PreUploadCheckResponse struct {
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
Conflicts ItemMini `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info
@@ -272,12 +282,39 @@ type User struct {
ModifiedAt time.Time `json:"modified_at"`
Language string `json:"language"`
Timezone string `json:"timezone"`
SpaceAmount int64 `json:"space_amount"`
SpaceUsed int64 `json:"space_used"`
MaxUploadSize int64 `json:"max_upload_size"`
SpaceAmount float64 `json:"space_amount"`
SpaceUsed float64 `json:"space_used"`
MaxUploadSize float64 `json:"max_upload_size"`
Status string `json:"status"`
JobTitle string `json:"job_title"`
Phone string `json:"phone"`
Address string `json:"address"`
AvatarURL string `json:"avatar_url"`
}
// FileTreeChangeEventTypes are the events that can require cache invalidation
var FileTreeChangeEventTypes = map[string]struct{}{
"ITEM_COPY": {},
"ITEM_CREATE": {},
"ITEM_MAKE_CURRENT_VERSION": {},
"ITEM_MODIFY": {},
"ITEM_MOVE": {},
"ITEM_RENAME": {},
"ITEM_TRASH": {},
"ITEM_UNDELETE_VIA_TRASH": {},
"ITEM_UPLOAD": {},
}
// Event is an array element in the response returned from /events
type Event struct {
EventType string `json:"event_type"`
EventID string `json:"event_id"`
Source Item `json:"source"`
}
// Events is returned from /events
type Events struct {
ChunkSize int64 `json:"chunk_size"`
Entries []Event `json:"entries"`
NextStreamPosition int64 `json:"next_stream_position"`
}
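Since SequenceID is now declared as int64 with the ,string struct tag, encoding/json parses Box's string-encoded sequence IDs directly; a self-contained sketch of that behaviour (with encoding/json imported):

var v struct {
	SequenceID int64 `json:"sequence_id,string"`
}
_ = json.Unmarshal([]byte(`{"sequence_id":"42"}`), &v)
// v.SequenceID == 42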

View File

@@ -17,9 +17,9 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"path"
"strconv"
"strings"
@@ -27,6 +27,7 @@ import (
"sync/atomic"
"time"
"github.com/golang-jwt/jwt/v4"
"github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
@@ -36,16 +37,16 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/jwtutil"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/rest"
"github.com/youmark/pkcs8"
"golang.org/x/oauth2"
"golang.org/x/oauth2/jws"
)
const (
@@ -64,18 +65,21 @@ const (
// Globals
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: nil,
Endpoint: oauth2.Endpoint{
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
},
oauthConfig = &oauthutil.Config{
Scopes: nil,
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
)
type boxCustomClaims struct {
jwt.StandardClaims
BoxSubType string `json:"box_sub_type,omitempty"`
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -102,16 +106,18 @@ func init() {
return nil, nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "root_folder_id",
Help: "Fill in for rclone to use a non root folder as its starting point.",
Default: "0",
Advanced: true,
Name: "root_folder_id",
Help: "Fill in for rclone to use a non root folder as its starting point.",
Default: "0",
Advanced: true,
Sensitive: true,
}, {
Name: "box_config_file",
Help: "Box App config.json location\n\nLeave blank normally." + env.ShellExpandHelp,
}, {
Name: "access_token",
Help: "Box App Primary Access Token\n\nLeave blank normally.",
Name: "access_token",
Help: "Box App Primary Access Token\n\nLeave blank normally.",
Sensitive: true,
}, {
Name: "box_sub_type",
Default: "user",
@@ -142,6 +148,23 @@ func init() {
Default: "",
Help: "Only show items owned by the login (email address) passed in.",
Advanced: true,
}, {
Name: "impersonate",
Default: "",
Help: `Impersonate this user ID when using a service account.
Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can be found for
any user via the GET /users endpoint, which is only available to
admins, or by calling the GET /users/me endpoint with an authenticated
user session.
See: https://developer.box.com/guides/authentication/jwt/as-user/
`,
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -178,12 +201,12 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
signingHeaders := getSigningHeaders(boxConfig)
queryParams := getQueryParams(boxConfig)
client := fshttp.NewClient(ctx)
err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client)
err = jwtutil.Config("box", name, tokenURL, *claims, signingHeaders, queryParams, privateKey, m, client)
return err
}
func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
file, err := ioutil.ReadFile(configFile)
file, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("box: failed to read Box config: %w", err)
}
@@ -194,34 +217,31 @@ func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
return boxConfig, nil
}
func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) {
func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *boxCustomClaims, err error) {
val, err := jwtutil.RandomHex(20)
if err != nil {
return nil, fmt.Errorf("box: failed to generate random string for jti: %w", err)
}
claims = &jws.ClaimSet{
Iss: boxConfig.BoxAppSettings.ClientID,
Sub: boxConfig.EnterpriseID,
Aud: tokenURL,
Exp: time.Now().Add(time.Second * 45).Unix(),
PrivateClaims: map[string]interface{}{
"box_sub_type": boxSubType,
"aud": tokenURL,
"jti": val,
claims = &boxCustomClaims{
//lint:ignore SA1019 since we need to use jwt.StandardClaims even if deprecated in jwt-go v4 until a more permanent solution is ready in time before jwt-go v5 where it is removed entirely
//nolint:staticcheck // Don't include staticcheck when running golangci-lint to avoid SA1019
StandardClaims: jwt.StandardClaims{
Id: val,
Issuer: boxConfig.BoxAppSettings.ClientID,
Subject: boxConfig.EnterpriseID,
Audience: tokenURL,
ExpiresAt: time.Now().Add(time.Second * 45).Unix(),
},
BoxSubType: boxSubType,
}
return claims, nil
}
func getSigningHeaders(boxConfig *api.ConfigJSON) *jws.Header {
signingHeaders := &jws.Header{
Algorithm: "RS256",
Typ: "JWT",
KeyID: boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
func getSigningHeaders(boxConfig *api.ConfigJSON) map[string]any {
signingHeaders := map[string]any{
"kid": boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
}
return signingHeaders
}
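For context, a hedged sketch of how these claims and headers would typically be signed with golang-jwt/jwt v4 (the real signing happens inside jwtutil.Config, so this is illustrative only):

token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims)
// getSigningHeaders supplies the key ID for the token header
token.Header["kid"] = boxConfig.BoxAppSettings.AppAuth.PublicKeyID
signed, err := token.SignedString(privateKey) // privateKey is the *rsa.PrivateKey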
@@ -235,8 +255,10 @@ func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
}
func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if block == nil {
return nil, errors.New("box: failed to PEM decode private key")
}
if len(rest) > 0 {
return nil, fmt.Errorf("box: extra data included in private key: %w", err)
}
@@ -258,19 +280,29 @@ type Options struct {
AccessToken string `config:"access_token"`
ListChunk int `config:"list_chunk"`
OwnedBy string `config:"owned_by"`
Impersonate string `config:"impersonate"`
}
// ItemMeta defines metadata we cache for each Item ID
type ItemMeta struct {
SequenceID int64 // the most recent event processed for this item
ParentID string // ID of the parent directory of this item
Name string // leaf name of this item
}
// Fs represents a remote box
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry
uploadToken *pacer.TokenDispenser // control concurrency
itemMetaCacheMu *sync.Mutex // protects itemMetaCache
itemMetaCache map[string]ItemMeta // map of Item ID to selected metadata
}
// Object describes a box object
@@ -349,7 +381,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
// defer log.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -358,20 +390,30 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(ctx, directoryID, false, true, true, func(item *api.Item) bool {
if strings.EqualFold(item.Name, leaf) {
info = item
return true
}
return false
// Use preupload to find the ID
itemMini, err := f.preUploadCheck(ctx, leaf, directoryID, -1)
if err != nil {
return nil, err
}
if itemMini == nil {
return nil, fs.ErrorObjectNotFound
}
// Now we have the ID we can look up the object proper
opts := rest.Opts{
Method: "GET",
Path: "/files/" + itemMini.ID,
Parameters: fieldsValue(),
}
var item api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
if !found {
return nil, fs.ErrorObjectNotFound
}
return info, nil
return &item, nil
}
// errorHandler parses a non 2xx error response into an error
@@ -418,12 +460,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(ci.Transfers),
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(ci.Transfers),
itemMetaCacheMu: new(sync.Mutex),
itemMetaCache: make(map[string]ItemMeta),
}
f.features = (&fs.Features{
CaseInsensitive: true,
@@ -436,6 +480,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.srv.SetHeader("Authorization", "Bearer "+f.opt.AccessToken)
}
// If using impersonate set an as-user header
if f.opt.Impersonate != "" {
f.srv.SetHeader("as-user", f.opt.Impersonate)
}
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
@@ -571,7 +620,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
// fmt.Printf("...Error %v\n", err)
return "", err
}
// fmt.Printf("...Id %q\n", *info.Id)
@@ -657,9 +706,27 @@ OUTER:
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
return err
}
var iErr error
_, err = f.listAll(ctx, directoryID, false, false, true, func(info *api.Item) bool {
@@ -669,24 +736,43 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, info.ModTime()).SetID(info.ID)
// FIXME more info from dir?
entries = append(entries, d)
err = list.Add(d)
if err != nil {
iErr = err
return true
}
} else if info.Type == api.ItemTypeFile {
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
}
entries = append(entries, o)
err = list.Add(o)
if err != nil {
iErr = err
return true
}
}
// Cache some metadata for this Item to help us process events later
// on. In particular, the box event API does not provide the old path
// of the Item when it is renamed/deleted/moved/etc.
f.itemMetaCacheMu.Lock()
cachedItemMeta, found := f.itemMetaCache[info.ID]
if !found || cachedItemMeta.SequenceID < info.SequenceID {
f.itemMetaCache[info.ID] = ItemMeta{SequenceID: info.SequenceID, ParentID: directoryID, Name: info.Name}
}
f.itemMetaCacheMu.Unlock()
return false
})
if err != nil {
return nil, err
return err
}
if iErr != nil {
return nil, iErr
return iErr
}
return entries, nil
return list.Flush()
}
// Creates from the parameters passed in a half finished Object which
@@ -713,7 +799,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (item *api.ItemMini, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
@@ -738,16 +824,16 @@ func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
return nil, fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", fmt.Errorf("pre-upload check: can't overwrite non file with file: %w", err)
return nil, fs.ErrorIsDir
}
return conflict.Conflicts.ID, nil
return &conflict.Conflicts, nil
}
return "", fmt.Errorf("pre-upload check: %w", err)
return nil, fmt.Errorf("pre-upload check: %w", err)
}
return "", nil
return nil, nil
}
// Put the object
@@ -768,11 +854,11 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
if item == nil {
return f.PutUnchecked(ctx, in, src, options...)
}
@@ -780,7 +866,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
o := &Object{
fs: f,
remote: remote,
id: ID,
id: item.ID,
}
return o, o.Update(ctx, in, src, options...)
}
@@ -907,6 +993,26 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// check if dest already exists
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if item != nil { // dest already exists, need to copy to temp name and then move
tempSuffix := "-rclone-copy-" + random.String(8)
fs.Debugf(remote, "dst already exists, copying to temp name %v", remote+tempSuffix)
tempObj, err := f.Copy(ctx, src, remote+tempSuffix)
if err != nil {
return nil, err
}
fs.Debugf(remote+tempSuffix, "moving to real name %v", remote)
err = f.deleteObject(ctx, item.ID)
if err != nil {
return nil, err
}
return f.Move(ctx, tempObj, remote)
}
// Copy the object
opts := rest.Opts{
Method: "POST",
@@ -1117,7 +1223,7 @@ func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error {
// CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) (err error) {
var (
deleteErrors = int64(0)
deleteErrors atomic.Uint64
concurrencyControl = make(chan struct{}, fs.GetConfig(ctx).Checkers)
wg sync.WaitGroup
)
@@ -1133,7 +1239,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
err := f.deletePermanently(ctx, item.Type, item.ID)
if err != nil {
fs.Errorf(f, "failed to delete trash item %q (%q): %v", item.Name, item.ID, err)
atomic.AddInt64(&deleteErrors, 1)
deleteErrors.Add(1)
}
}()
} else {
@@ -1142,12 +1248,279 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return false
})
wg.Wait()
if deleteErrors != 0 {
return fmt.Errorf("failed to delete %d trash items", deleteErrors)
if deleteErrors.Load() != 0 {
return fmt.Errorf("failed to delete %d trash items", deleteErrors.Load())
}
return err
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
// Automatically restarts itself in case of unexpected behavior of the remote.
//
// Close the returned channel to stop being notified.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
go func() {
// get the `stream_position` early so all changes from now on get processed
streamPosition, err := f.changeNotifyStreamPosition(ctx)
if err != nil {
fs.Infof(f, "Failed to get StreamPosition: %s", err)
}
// box can send duplicate Event IDs. Use this map to track and filter
// the ones we've already processed.
processedEventIDs := make(map[string]time.Time)
var ticker *time.Ticker
var tickerC <-chan time.Time
for {
select {
case pollInterval, ok := <-pollIntervalChan:
if !ok {
if ticker != nil {
ticker.Stop()
}
return
}
if ticker != nil {
ticker.Stop()
ticker, tickerC = nil, nil
}
if pollInterval != 0 {
ticker = time.NewTicker(pollInterval)
tickerC = ticker.C
}
case <-tickerC:
if streamPosition == "" {
streamPosition, err = f.changeNotifyStreamPosition(ctx)
if err != nil {
fs.Infof(f, "Failed to get StreamPosition: %s", err)
continue
}
}
// Garbage collect EventIDs older than 1 minute
for eventID, timestamp := range processedEventIDs {
if time.Since(timestamp) > time.Minute {
delete(processedEventIDs, eventID)
}
}
streamPosition, err = f.changeNotifyRunner(ctx, notifyFunc, streamPosition, processedEventIDs)
if err != nil {
fs.Infof(f, "Change notify listener failure: %s", err)
}
}
}
}()
}
func (f *Fs) changeNotifyStreamPosition(ctx context.Context) (streamPosition string, err error) {
opts := rest.Opts{
Method: "GET",
Path: "/events",
Parameters: fieldsValue(),
}
opts.Parameters.Set("stream_position", "now")
opts.Parameters.Set("stream_type", "changes")
var result api.Events
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
}
return strconv.FormatInt(result.NextStreamPosition, 10), nil
}
// Attempts to construct the full path for an object, given the ID of its
// parent directory and the name of the object.
//
// Can return "" if the parentID is not currently in the directory cache.
func (f *Fs) getFullPath(parentID string, childName string) (fullPath string) {
fullPath = ""
name := f.opt.Enc.ToStandardName(childName)
if parentID != "" {
if parentDir, ok := f.dirCache.GetInv(parentID); ok {
if len(parentDir) > 0 {
fullPath = parentDir + "/" + name
} else {
fullPath = name
}
}
} else {
// No parent, this object is at the root
fullPath = name
}
return fullPath
}
func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.EntryType), streamPosition string, processedEventIDs map[string]time.Time) (nextStreamPosition string, err error) {
nextStreamPosition = streamPosition
for {
// box only allows a max of 500 events
limit := min(f.opt.ListChunk, 500)
opts := rest.Opts{
Method: "GET",
Path: "/events",
Parameters: fieldsValue(),
}
opts.Parameters.Set("stream_position", nextStreamPosition)
opts.Parameters.Set("stream_type", "changes")
opts.Parameters.Set("limit", strconv.Itoa(limit))
var result api.Events
var resp *http.Response
fs.Debugf(f, "Checking for changes on remote (next_stream_position: %q)", nextStreamPosition)
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
}
if result.ChunkSize != int64(len(result.Entries)) {
return "", fmt.Errorf("invalid response to event request, chunk_size (%v) not equal to number of entries (%v)", result.ChunkSize, len(result.Entries))
}
nextStreamPosition = strconv.FormatInt(result.NextStreamPosition, 10)
if result.ChunkSize == 0 {
return nextStreamPosition, nil
}
type pathToClear struct {
path string
entryType fs.EntryType
}
var pathsToClear []pathToClear
newEventIDs := 0
for _, entry := range result.Entries {
eventDetails := fmt.Sprintf("[%q(%d)|%s|%s|%s|%s]", entry.Source.Name, entry.Source.SequenceID,
entry.Source.Type, entry.EventType, entry.Source.ID, entry.EventID)
if entry.EventID == "" {
fs.Debugf(f, "%s ignored due to missing EventID", eventDetails)
continue
}
if _, ok := processedEventIDs[entry.EventID]; ok {
fs.Debugf(f, "%s ignored due to duplicate EventID", eventDetails)
continue
}
processedEventIDs[entry.EventID] = time.Now()
newEventIDs++
if entry.Source.ID == "" { // missing File or Folder ID
fs.Debugf(f, "%s ignored due to missing SourceID", eventDetails)
continue
}
if entry.Source.Type != api.ItemTypeFile && entry.Source.Type != api.ItemTypeFolder { // event is not for a file or folder
fs.Debugf(f, "%s ignored due to unsupported SourceType", eventDetails)
continue
}
// Only interested in event types that result in a file tree change
if _, found := api.FileTreeChangeEventTypes[entry.EventType]; !found {
fs.Debugf(f, "%s ignored due to unsupported EventType", eventDetails)
continue
}
f.itemMetaCacheMu.Lock()
itemMeta, cachedItemMetaFound := f.itemMetaCache[entry.Source.ID]
if cachedItemMetaFound {
if itemMeta.SequenceID >= entry.Source.SequenceID {
// Item in the cache has the same or newer SequenceID than
// this event. Ignore this event, it must be old.
f.itemMetaCacheMu.Unlock()
fs.Debugf(f, "%s ignored due to old SequenceID (%q)", eventDetails, itemMeta.SequenceID)
continue
}
// This event is newer. Delete its entry from the cache,
// we'll notify about its change below, then it's up to a
// future list operation to repopulate the cache.
delete(f.itemMetaCache, entry.Source.ID)
}
f.itemMetaCacheMu.Unlock()
entryType := fs.EntryDirectory
if entry.Source.Type == api.ItemTypeFile {
entryType = fs.EntryObject
}
// The box event only includes the new path for the object (e.g.
// the path after the object was moved). If there was an old path
// saved in our cache, it must be cleared.
if cachedItemMetaFound {
path := f.getFullPath(itemMeta.ParentID, itemMeta.Name)
if path != "" {
fs.Debugf(f, "%s added old path (%q) for notify", eventDetails, path)
pathsToClear = append(pathsToClear, pathToClear{path: path, entryType: entryType})
} else {
fs.Debugf(f, "%s old parent not cached", eventDetails)
}
// If this is a directory, also delete it from the dir cache.
// This will effectively invalidate the item metadata cache
// entries for all descendants of this directory, since we
// will no longer be able to construct a full path for them.
// This is exactly what we want, since we don't want to notify
// on the paths of these descendants if one of their ancestors
// has been renamed/deleted.
if entry.Source.Type == api.ItemTypeFolder {
f.dirCache.FlushDir(path)
}
}
// If the item is "active", then it is not trashed or deleted, so
// it potentially has a valid parent.
//
// Construct the new path of the object, based on the Parent ID
// and its name. If we get an empty result, it means we don't
// currently know about this object so notification is unnecessary.
if entry.Source.ItemStatus == api.ItemStatusActive {
path := f.getFullPath(entry.Source.Parent.ID, entry.Source.Name)
if path != "" {
fs.Debugf(f, "%s added new path (%q) for notify", eventDetails, path)
pathsToClear = append(pathsToClear, pathToClear{path: path, entryType: entryType})
} else {
fs.Debugf(f, "%s new parent not found", eventDetails)
}
}
}
// box can sometimes repeatedly return the same Event IDs within a
// short period of time. If it stops giving us new ones, treat it
// the same as if it returned us none at all.
if newEventIDs == 0 {
return nextStreamPosition, nil
}
notifiedPaths := make(map[string]bool)
for _, p := range pathsToClear {
if _, ok := notifiedPaths[p.path]; ok {
continue
}
notifiedPaths[p.path] = true
notifyFunc(p.path, p.entryType)
}
fs.Debugf(f, "Received %v events, resulting in %v paths and %v notifications", len(result.Entries), len(pathsToClear), len(notifiedPaths))
}
}
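Aside: the loop above collects paths first and only notifies once per unique path. A minimal standalone sketch of that dedupe-then-notify pattern, with illustrative (non-rclone) names:
package main

import "fmt"

type entryType int

type pathToClear struct {
	path      string
	entryType entryType
}

// notifyOnce mirrors the notifiedPaths guard above: at most one
// callback per distinct path.
func notifyOnce(paths []pathToClear, notify func(string, entryType)) {
	seen := make(map[string]bool)
	for _, p := range paths {
		if seen[p.path] {
			continue
		}
		seen[p.path] = true
		notify(p.path, p.entryType)
	}
}

func main() {
	paths := []pathToClear{{"a/b", 0}, {"a/b", 0}, {"c", 1}}
	notifyOnce(paths, func(p string, t entryType) { fmt.Println(p, t) })
}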
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
@@ -1395,6 +1768,8 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)
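Aside: the var block above is Go's compile-time interface check idiom: assigning a typed nil to a blank interface variable fails the build if the type stops satisfying the interface. A minimal sketch with made-up types:
package main

import "fmt"

type Purger interface{ Purge() error }

type Fs struct{}

func (f *Fs) Purge() error { return nil }

// Compilation fails here if *Fs ever loses the Purge method.
var _ Purger = (*Fs)(nil)

func main() { fmt.Println("ok") }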


@@ -105,7 +105,7 @@ func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api
const defaultDelay = 10
var tries int
outer:
for tries = 0; tries < maxTries; tries++ {
for tries = range maxTries {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
@@ -203,7 +203,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, direct
errs := make(chan error, 1)
var wg sync.WaitGroup
outer:
for part := 0; part < session.TotalParts; part++ {
for part := range session.TotalParts {
// Check any errors
select {
case err = <-errs:
@@ -211,10 +211,7 @@ outer:
default:
}
reqSize := remaining
if reqSize >= chunkSize {
reqSize = chunkSize
}
reqSize := min(remaining, chunkSize)
// Make a block of memory
buf := make([]byte, reqSize)
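Aside: the hunks above are straight modernizations: Go 1.22 allows ranging over an int, and the Go 1.21 min builtin replaces the manual clamp. A minimal sketch of both:
package main

import "fmt"

func main() {
	remaining, chunkSize := int64(300), int64(128)
	for part := range 3 { // iterates 0, 1, 2
		reqSize := min(remaining, chunkSize) // clamp without an if block
		fmt.Println(part, reqSize)
		remaining -= reqSize
	}
}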


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache
@@ -30,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
@@ -76,17 +76,19 @@ func init() {
Name: "plex_url",
Help: "The URL of the Plex server.",
}, {
Name: "plex_username",
Help: "The username of the Plex user.",
Name: "plex_username",
Help: "The username of the Plex user.",
Sensitive: true,
}, {
Name: "plex_password",
Help: "The password of the Plex user.",
IsPassword: true,
}, {
Name: "plex_token",
Help: "The plex token for authentication - auto set normally.",
Hide: fs.OptionHideBoth,
Advanced: true,
Name: "plex_token",
Help: "The plex token for authentication - auto set normally.",
Hide: fs.OptionHideBoth,
Advanced: true,
Sensitive: true,
}, {
Name: "plex_insecure",
Help: "Skip all certificate verification when connecting to the Plex server.",
@@ -408,18 +410,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}
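Aside: obscure.Reveal decodes rclone's obfuscated config passwords, and the code above deliberately falls back to the raw value when decoding fails. A hedged round-trip sketch (obscure.Obscure is assumed to be the matching encoder, as used elsewhere in rclone):
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs/config/obscure"
)

func main() {
	enc, err := obscure.Obscure("plex-secret")
	if err != nil {
		panic(err)
	}
	dec, err := obscure.Reveal(enc)
	if err != nil {
		dec = enc // same fallback as the cache backend above
	}
	fmt.Println(dec == "plex-secret") // true
}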
@@ -684,7 +684,7 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
start, end int64
}
parseChunks := func(ranges string) (crs []chunkRange, err error) {
for _, part := range strings.Split(ranges, ",") {
for part := range strings.SplitSeq(ranges, ",") {
var start, end int64 = 0, math.MaxInt64
switch ints := strings.Split(part, ":"); len(ints) {
case 1:
@@ -1038,7 +1038,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
fs.Debugf(dir, "list: remove entry: %v", entryRemote)
}
entries = nil
entries = nil //nolint:ineffassign
// and then iterate over the ones from source (temp Objects will override source ones)
var batchDirectories []*Directory
@@ -1087,13 +1087,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return cachedEntries, nil
}
func (f *Fs) recurse(ctx context.Context, dir string, list *walk.ListRHelper) error {
func (f *Fs) recurse(ctx context.Context, dir string, list *list.Helper) error {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
for i := 0; i < len(entries); i++ {
for i := range entries {
innerDir, ok := entries[i].(fs.Directory)
if ok {
err := f.recurse(ctx, innerDir.Remote(), list)
@@ -1139,7 +1139,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
// if we're here, we're gonna do a standard recursive traversal and cache everything
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
err = f.recurse(ctx, dir, list)
if err != nil {
return err
@@ -1429,7 +1429,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
}()
// wait until both are done
for c := 0; c < 2; c++ {
for range 2 {
<-done
}
}
@@ -1754,7 +1754,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
// Stats returns stats about the cache storage
func (f *Fs) Stats() (map[string]map[string]interface{}, error) {
func (f *Fs) Stats() (map[string]map[string]any, error) {
return f.cache.Stats()
}
@@ -1787,7 +1787,7 @@ func (f *Fs) CleanUpCache(ignoreLastTs bool) {
}
}
// StopBackgroundRunners will signall all the runners to stop their work
// StopBackgroundRunners will signal all the runners to stop their work
// can be triggered from a terminate signal or from testing between runs
func (f *Fs) StopBackgroundRunners() {
f.cleanupChan <- false
@@ -1934,7 +1934,7 @@ var commandHelp = []fs.CommandHelp{
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (any, error) {
switch name {
case "stats":
return f.Stats()
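Aside: the parseChunks hunk above switches to strings.SplitSeq (new in Go 1.24), which yields an iterator instead of allocating a slice, so the loop can range over it directly. A minimal sketch parsing the same start:end,start:end shape (error handling elided):
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	for part := range strings.SplitSeq("0:5,10:20", ",") {
		bounds := strings.Split(part, ":")
		start, _ := strconv.ParseInt(bounds[0], 10, 64)
		end, _ := strconv.ParseInt(bounds[1], 10, 64)
		fmt.Println(start, end)
	}
}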


@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -11,8 +10,6 @@ import (
goflag "flag"
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"os"
"path"
@@ -31,10 +28,11 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@@ -94,7 +92,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName)
fs.Logf(nil, "Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@@ -102,14 +100,12 @@ func TestMain(m *testing.M) {
func TestInternalListRootAndInnerRemotes(t *testing.T) {
id := fmt.Sprintf("tilrair%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
// Instantiate inner fs
innerFolder := "inner"
runInstance.mkdir(t, rootFs, innerFolder)
rootFs2, boltDb2 := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs2, boltDb2)
rootFs2, _ := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil)
runInstance.writeObjectString(t, rootFs2, "one", "content")
listRoot, err := runInstance.list(t, rootFs, "")
@@ -126,10 +122,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@@ -167,7 +163,7 @@ func TestInternalVfsCache(t *testing.T) {
li2 := [2]string{path.Join("test", "one"), path.Join("test", "second")}
for _, r := range li2 {
var err error
ci, err := ioutil.ReadDir(path.Join(runInstance.chunkPath, runInstance.encryptRemoteIfNeeded(t, path.Join(id, r))))
ci, err := os.ReadDir(path.Join(runInstance.chunkPath, runInstance.encryptRemoteIfNeeded(t, path.Join(id, r))))
if err != nil || len(ci) == 0 {
log.Printf("========== '%v' not in cache", r)
} else {
@@ -226,8 +222,7 @@ func TestInternalVfsCache(t *testing.T) {
func TestInternalObjWrapFsFound(t *testing.T) {
id := fmt.Sprintf("tiowff%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -259,8 +254,7 @@ func TestInternalObjWrapFsFound(t *testing.T) {
func TestInternalObjNotFound(t *testing.T) {
id := fmt.Sprintf("tionf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
obj, err := rootFs.NewObject(context.Background(), "404")
require.Error(t, err)
@@ -270,8 +264,7 @@ func TestInternalObjNotFound(t *testing.T) {
func TestInternalCachedWrittenContentMatches(t *testing.T) {
testy.SkipUnreliable(t)
id := fmt.Sprintf("ticwcm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -298,8 +291,7 @@ func TestInternalDoubleWrittenContentMatches(t *testing.T) {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("tidwcm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
// write the object
runInstance.writeRemoteString(t, rootFs, "one", "one content")
@@ -317,8 +309,7 @@ func TestInternalDoubleWrittenContentMatches(t *testing.T) {
func TestInternalCachedUpdatedContentMatches(t *testing.T) {
testy.SkipUnreliable(t)
id := fmt.Sprintf("ticucm%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
var err error
// create some rand test data
@@ -346,9 +337,8 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
}
@@ -370,16 +360,15 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size())
for i := 0; i < len(checkSample); i++ {
for i := range checkSample {
require.Equal(t, testData[i], checkSample[i])
}
}
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
}
@@ -398,15 +387,14 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
readData, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err)
for i := 0; i < len(readData); i++ {
for i := range readData {
require.Equalf(t, testData[i], readData[i], "at byte %v", i)
}
}
func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
id := fmt.Sprintf("tiwfcns%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -419,7 +407,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
// update in the wrapped fs
originalSize, err := runInstance.size(t, rootFs, "data.bin")
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
fs.Logf(nil, "original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@@ -428,7 +416,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@@ -436,7 +424,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
fs.Logf(nil, "updated size: %v", len(data2))
// get a new instance from the cache
if runInstance.wrappedIsExternal {
@@ -460,8 +448,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
func TestInternalMoveWithNotify(t *testing.T) {
id := fmt.Sprintf("timwn%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
if !runInstance.wrappedIsExternal {
t.Skipf("Not external")
}
@@ -497,49 +484,49 @@ func TestInternalMoveWithNotify(t *testing.T) {
err = runInstance.retryBlock(func() error {
li, err := runInstance.list(t, rootFs, "test")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 2 {
log.Printf("not expected listing /test: %v", li)
fs.Logf(nil, "not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/second")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li)
fs.Logf(nil, "not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing: %v", li)
fs.Logf(nil, "complete listing: %v", li)
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -547,8 +534,7 @@ func TestInternalMoveWithNotify(t *testing.T) {
func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
id := fmt.Sprintf("tincep%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
if !runInstance.wrappedIsExternal {
t.Skipf("Not external")
}
@@ -590,43 +576,43 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
fs.Logf(nil, "not found /test")
return fmt.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
fs.Logf(nil, "not found /test/one")
return fmt.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
fs.Logf(nil, "not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
fs.Logf(nil, "complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -634,8 +620,7 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
id := fmt.Sprintf("ticsadcf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -667,8 +652,7 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"writes": "true"})
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -689,8 +673,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"workers": "1"})
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -705,7 +688,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
co, ok := o.(*cache.Object)
require.True(t, ok)
for i := 0; i < 4; i++ { // read first 4
for i := range 4 { // read first 4
_ = runInstance.readDataFromObj(t, co, chunkSize*int64(i), chunkSize*int64(i+1), false)
}
cfs.CleanUpCache(true)
@@ -724,9 +707,8 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, map[string]string{"info_age": "5s"}, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -760,12 +742,10 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
if runInstance.rootIsCrypt {
t.Skipf("skipping crypt")
@@ -790,24 +770,24 @@ func TestInternalBug2117(t *testing.T) {
di, err := runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
time.Sleep(time.Second * 30)
di, err = runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
di, err = runInstance.list(t, rootFs, "test/dir1")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
di, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
}
@@ -841,14 +821,14 @@ func newRun() *run {
}
if uploadDir == "" {
r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
r.tmpUploadDir, err = os.MkdirTemp("", "rclonecache-tmp")
if err != nil {
panic(fmt.Sprintf("Failed to create temp dir: %v", err))
}
} else {
r.tmpUploadDir = uploadDir
}
log.Printf("Temp Upload Dir: %v", r.tmpUploadDir)
fs.Logf(nil, "Temp Upload Dir: %v", r.tmpUploadDir)
return r
}
@@ -866,11 +846,11 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
return enc
}
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, cfg map[string]string, flags map[string]string) (fs.Fs, *cache.Persistent) {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@@ -894,12 +874,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -910,14 +890,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -954,16 +934,20 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = f.Features().Purge(context.Background(), "")
require.NoError(t, err)
_ = operations.Purge(context.Background(), f, "")
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
t.Cleanup(func() {
runInstance.cleanupFs(t, f)
})
return f, boltDb
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
err := f.Features().Purge(context.Background(), "")
func (r *run) cleanupFs(t *testing.T, f fs.Fs) {
err := operations.Purge(context.Background(), f, "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@@ -984,10 +968,10 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
chunk := int64(1024)
cnt := size / chunk
left := size % chunk
f, err := ioutil.TempFile("", "rclonecache-tempfile")
f, err := os.CreateTemp("", "rclonecache-tempfile")
require.NoError(t, err)
for i := 0; i < int(cnt); i++ {
for range int(cnt) {
data := randStringBytes(int(chunk))
_, _ = f.Write(data)
}
@@ -1101,9 +1085,9 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
return err
}
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) {
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]any, error) {
var err error
var l []interface{}
var l []any
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
@@ -1112,27 +1096,6 @@ func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error)
return l, err
}
func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error {
in, err := os.Open(src)
if err != nil {
return err
}
defer func() {
_ = in.Close()
}()
out, err := os.Create(dst)
if err != nil {
return err
}
defer func() {
_ = out.Close()
}()
_, err = io.Copy(out, in)
return err
}
func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error
@@ -1228,7 +1191,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)
@@ -1252,7 +1215,7 @@ func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) ch
var err error
var state cache.BackgroundUploadState
for i := 0; i < 2; i++ {
for range 2 {
select {
case state = <-buCh:
// continue
@@ -1330,7 +1293,7 @@ func (r *run) completeAllBackgroundUploads(t *testing.T, f fs.Fs, lastRemote str
func (r *run) retryBlock(block func() error, maxRetries int, rate time.Duration) error {
var err error
for i := 0; i < maxRetries; i++ {
for range maxRetries {
err = block()
if err == nil {
return nil
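Aside: retryBlock above is a plain fixed-interval retry; with Go 1.22 the unused loop counter disappears entirely. A minimal sketch of the same shape:
package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs block up to maxRetries times, sleeping rate between attempts,
// and returns the last error if every attempt fails.
func retry(block func() error, maxRetries int, rate time.Duration) error {
	var err error
	for range maxRetries {
		if err = block(); err == nil {
			return nil
		}
		time.Sleep(rate)
	}
	return err
}

func main() {
	n := 0
	err := retry(func() error {
		n++
		if n < 3 {
			return errors.New("not yet")
		}
		return nil
	}, 5, time.Millisecond)
	fmt.Println(n, err) // 3 <nil>
}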


@@ -1,7 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -16,10 +15,11 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata", "ListP"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}


@@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@@ -21,10 +20,8 @@ import (
func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
runInstance.newCacheFs(t, remoteName, id, false, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err)
@@ -63,9 +60,7 @@ func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltD
func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
@@ -73,19 +68,15 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
}
func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
@@ -119,10 +110,8 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err)
@@ -162,21 +151,19 @@ func TestInternalUploadTempPathCleaned(t *testing.T) {
func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err)
minSize := 5242880
maxSize := 10485760
totalFiles := 10
rand.Seed(time.Now().Unix())
randInstance := rand.New(rand.NewSource(time.Now().Unix()))
lastFile := ""
for i := 0; i < totalFiles; i++ {
size := int64(rand.Intn(maxSize-minSize) + minSize)
for i := range totalFiles {
size := int64(randInstance.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"
runInstance.writeRemoteReader(t, rootFs, remote, testReader)
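Aside: the hunk above replaces the deprecated global rand.Seed with a private *rand.Rand, so each test run gets its own deterministic source without touching global state. A minimal sketch:
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	minSize, maxSize := 5242880, 10485760
	r := rand.New(rand.NewSource(42)) // local source, no global seeding
	size := int64(r.Intn(maxSize-minSize) + minSize)
	fmt.Println(size)
}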
@@ -213,9 +200,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()
@@ -343,9 +328,7 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads()


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -119,7 +118,7 @@ func (r *Handle) startReadWorkers() {
r.scaleWorkers(totalWorkers)
}
// scaleOutWorkers will increase the worker pool count by the provided amount
// scaleWorkers scales the worker pool up or down to the desired count
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {
@@ -183,7 +182,7 @@ func (r *Handle) queueOffset(offset int64) {
}
}
for i := 0; i < r.workers; i++ {
for i := range r.workers {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
@@ -209,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@@ -223,7 +222,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're going to give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
for i := range r.cacheFs().opt.ReadRetries * 8 {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
@@ -328,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -416,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)
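Aside: both hunks above do the same arithmetic: round an offset down to the boundary of the chunk that contains it. A minimal sketch:
package main

import "fmt"

// alignDown returns the start offset of the chunk containing offset.
func alignDown(offset, chunkSize int64) int64 {
	return offset - offset%chunkSize
}

func main() {
	fmt.Println(alignDown(300, 128)) // 256
	fmt.Println(alignDown(256, 128)) // 256
}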


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache

backend/cache/plex.go

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -8,7 +7,7 @@ import (
"crypto/tls"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"net/http"
"net/url"
"strings"
@@ -167,7 +166,7 @@ func (p *plexConnector) listenWebsocket() {
continue
}
var data []byte
data, err = ioutil.ReadAll(resp.Body)
data, err = io.ReadAll(resp.Body)
if err != nil {
continue
}
@@ -210,7 +209,7 @@ func (p *plexConnector) authenticate() error {
if err != nil {
return err
}
var data map[string]interface{}
var data map[string]any
err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil {
return fmt.Errorf("failed to obtain token: %w", err)
@@ -274,11 +273,11 @@ func (p *plexConnector) isPlaying(co *Object) bool {
}
// adapted from: https://stackoverflow.com/a/28878037 (credit)
func get(m interface{}, path ...interface{}) (interface{}, bool) {
func get(m any, path ...any) (any, bool) {
for _, p := range path {
switch idx := p.(type) {
case string:
if mm, ok := m.(map[string]interface{}); ok {
if mm, ok := m.(map[string]any); ok {
if val, found := mm[idx]; found {
m = val
continue
@@ -286,7 +285,7 @@ func get(m interface{}, path ...interface{}) (interface{}, bool) {
}
return nil, false
case int:
if mm, ok := m.([]interface{}); ok {
if mm, ok := m.([]any); ok {
if len(mm) > idx {
m = mm[idx]
continue

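Aside: get above walks decoded JSON (map[string]any and []any) by a mixed string/int path. A compact standalone version of the same idea:
package main

import (
	"encoding/json"
	"fmt"
)

func get(m any, path ...any) (any, bool) {
	for _, p := range path {
		switch idx := p.(type) {
		case string:
			mm, ok := m.(map[string]any)
			if !ok {
				return nil, false
			}
			if m, ok = mm[idx]; !ok {
				return nil, false
			}
		case int:
			mm, ok := m.([]any)
			if !ok || idx < 0 || idx >= len(mm) {
				return nil, false
			}
			m = mm[idx]
		default:
			return nil, false
		}
	}
	return m, true
}

func main() {
	var doc any
	_ = json.Unmarshal([]byte(`{"user":{"tokens":["abc"]}}`), &doc)
	v, ok := get(doc, "user", "tokens", 0)
	fmt.Println(v, ok) // abc true
}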

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@@ -9,7 +8,6 @@ import (
"encoding/binary"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
@@ -20,6 +18,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
bolt "go.etcd.io/bbolt"
"go.etcd.io/bbolt/errors"
)
// Constants
@@ -473,7 +472,7 @@ func (b *Persistent) GetChunk(cachedObject *Object, offset int64) ([]byte, error
var data []byte
fp := path.Join(b.dataPath, cachedObject.abs(), strconv.FormatInt(offset, 10))
data, err := ioutil.ReadFile(fp)
data, err := os.ReadFile(fp)
if err != nil {
return nil, err
}
@@ -486,7 +485,7 @@ func (b *Persistent) AddChunk(fp string, data []byte, offset int64) error {
_ = os.MkdirAll(path.Join(b.dataPath, fp), os.ModePerm)
filePath := path.Join(b.dataPath, fp, strconv.FormatInt(offset, 10))
err := ioutil.WriteFile(filePath, data, os.ModePerm)
err := os.WriteFile(filePath, data, os.ModePerm)
if err != nil {
return err
}
@@ -599,7 +598,7 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
})
if err != nil {
if err == bolt.ErrDatabaseNotOpen {
if err == errors.ErrDatabaseNotOpen {
// we're likely a late janitor and we need to end quietly as there's no guarantee of what exists anymore
return
}
@@ -608,16 +607,16 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
}
// Stats returns a go map with the stats key values
func (b *Persistent) Stats() (map[string]map[string]interface{}, error) {
r := make(map[string]map[string]interface{})
r["data"] = make(map[string]interface{})
func (b *Persistent) Stats() (map[string]map[string]any, error) {
r := make(map[string]map[string]any)
r["data"] = make(map[string]any)
r["data"]["oldest-ts"] = time.Now()
r["data"]["oldest-file"] = ""
r["data"]["newest-ts"] = time.Now()
r["data"]["newest-file"] = ""
r["data"]["total-chunks"] = 0
r["data"]["total-size"] = int64(0)
r["files"] = make(map[string]interface{})
r["files"] = make(map[string]any)
r["files"]["oldest-ts"] = time.Now()
r["files"]["oldest-name"] = ""
r["files"]["newest-ts"] = time.Now()

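Aside: the ioutil migration above is mechanical (os.ReadFile/os.WriteFile are the canonical calls since Go 1.16). A minimal sketch of the offset-named chunk file layout the persistence layer uses, under a throwaway temp dir:
package main

import (
	"fmt"
	"os"
	"path"
	"strconv"
)

func main() {
	dir, err := os.MkdirTemp("", "chunks")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	offset := int64(1048576)
	fp := path.Join(dir, strconv.FormatInt(offset, 10)) // chunk file named after its offset
	if err := os.WriteFile(fp, []byte("chunk data"), 0o644); err != nil {
		panic(err)
	}
	data, err := os.ReadFile(fp)
	fmt.Println(string(data), err)
}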

@@ -1,3 +1,5 @@
//go:build !plan9 && !js
package cache
import bolt "go.etcd.io/bbolt"


@@ -12,7 +12,6 @@ import (
"fmt"
gohash "hash"
"io"
"io/ioutil"
"math/rand"
"path"
"regexp"
@@ -30,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
)
// Chunker's composite files have one or more chunks
@@ -102,8 +102,10 @@ var (
//
// And still chunker's primary function is to chunk large files
// rather than serve as a generic metadata container.
const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
const (
maxMetadataSize = 1023
maxMetadataSizeWritten = 255
)
// Current/highest supported metadata format.
const metadataVersion = 2
@@ -306,7 +308,6 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
root: rpath,
opt: *opt,
}
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
@@ -318,29 +319,45 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// i.e. `rpath` does not exist in the wrapped remote, but chunker
// detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
if err == nil && !f.useMeta {
firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
_, testErr := cache.Get(ctx, baseName+firstChunkPath)
newBase, testErr := cache.Get(ctx, baseName+firstChunkPath)
if testErr == fs.ErrorIsFile {
f.base = newBase
err = testErr
}
}
cache.PinUntilFinalized(f.base, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// Note 1: the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs.
// Note 2: features.Fill() points features.PutStream to our PutStream,
// but features.Mask() will nullify it if wrappedFs does not have it.
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker skip files
f.features.ListR = nil // Recursive listing may cause chunker skip files
f.features.ListP = nil // ListP not supported yet
return f, err
}
@@ -616,7 +633,7 @@ func (f *Fs) parseChunkName(filePath string) (parentPath string, chunkNo int, ct
// forbidChunk prints error message or raises error if file is chunk.
// First argument sets log prefix, use `false` to suppress message.
func (f *Fs) forbidChunk(o interface{}, filePath string) error {
func (f *Fs) forbidChunk(o any, filePath string) error {
if parentPath, _, _, _ := f.parseChunkName(filePath); parentPath != "" {
if f.opt.FailHard {
return fmt.Errorf("chunk overlap with %q", parentPath)
@@ -664,7 +681,7 @@ func (f *Fs) newXactID(ctx context.Context, filePath string) (xactID string, err
circleSec := unixSec % closestPrimeZzzzSeconds
first4chars := strconv.FormatInt(circleSec, 36)
for tries := 0; tries < maxTransactionProbes; tries++ {
for range maxTransactionProbes {
f.xactIDMutex.Lock()
randomness := f.xactIDRand.Int63n(maxTwoBase36Digits + 1)
f.xactIDMutex.Unlock()
@@ -814,8 +831,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
}
case fs.Directory:
isSubdir[entry.Remote()] = true
wrapDir := fs.NewDirCopy(ctx, entry)
wrapDir.SetRemote(entry.Remote())
wrapDir := fs.NewDirWrapper(entry.Remote(), entry)
tempEntries = append(tempEntries, wrapDir)
default:
if f.opt.FailHard {
@@ -948,6 +964,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
if caseInsensitive {
sameMain = strings.EqualFold(mainRemote, remote)
if sameMain && f.base.Features().IsLocal {
// on local, make sure the EqualFold still holds true when accounting for encoding.
// sometimes paths with special characters will only normalize the same way in Standard Encoding.
sameMain = strings.EqualFold(encoder.OS.FromStandardPath(mainRemote), encoder.OS.FromStandardPath(remote))
}
} else {
sameMain = mainRemote == remote
}
@@ -961,13 +982,13 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
continue
}
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
// fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
if err := o.addChunk(entry, chunkNo); err != nil {
return nil, err
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.
@@ -1038,7 +1059,7 @@ func (o *Object) readMetadata(ctx context.Context) error {
if err != nil {
return err
}
metadata, err := ioutil.ReadAll(reader)
metadata, err := io.ReadAll(reader)
_ = reader.Close() // ensure file handle is freed on windows
if err != nil {
return err
@@ -1097,7 +1118,7 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
if err != nil {
return "", err
}
data, err := ioutil.ReadAll(reader)
data, err := io.ReadAll(reader)
_ = reader.Close() // ensure file handle is freed on windows
if err != nil {
return "", err
@@ -1123,8 +1144,8 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
basePut putFn, action string, target fs.Object) (obj fs.Object, err error) {
basePut putFn, action string, target fs.Object,
) (obj fs.Object, err error) {
// Perform consistency checks
if err := f.forbidChunk(src, remote); err != nil {
return nil, fmt.Errorf("%s refused: %w", action, err)
@@ -1169,10 +1190,7 @@ func (f *Fs) put(
}
tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactID)
size := c.sizeLeft
if size > c.chunkSize {
size = c.chunkSize
}
size := min(c.sizeLeft, c.chunkSize)
savedReadCount := c.readCount
// If a single chunk is expected, avoid the extra rename operation
@@ -1457,10 +1475,7 @@ func (c *chunkingReader) dummyRead(in io.Reader, size int64) error {
const bufLen = 1048576 // 1 MiB
buf := make([]byte, bufLen)
for size > 0 {
n := size
if n > bufLen {
n = bufLen
}
n := min(size, bufLen)
if _, err := io.ReadFull(in, buf[0:n]); err != nil {
return err
}
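Aside: dummyRead above drains exactly size bytes through a fixed-size buffer, and io.ReadFull fails fast if the reader comes up short. A minimal sketch of that drain loop:
package main

import (
	"fmt"
	"io"
	"strings"
)

// drain reads exactly size bytes from in, reusing one small buffer.
func drain(in io.Reader, size int64) error {
	const bufLen = 1024
	buf := make([]byte, bufLen)
	for size > 0 {
		n := min(size, bufLen)
		if _, err := io.ReadFull(in, buf[0:n]); err != nil {
			return err
		}
		size -= n
	}
	return nil
}

func main() {
	err := drain(strings.NewReader(strings.Repeat("x", 4096)), 4096)
	fmt.Println(err) // <nil>
}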
@@ -1564,6 +1579,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.base.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.base.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -1838,6 +1861,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// baseMove chains to the wrapped Move or simulates it by Copy+Delete
func (f *Fs) baseMove(ctx context.Context, src fs.Object, remote string, delMode int) (fs.Object, error) {
ctx, ci := fs.AddConfig(ctx)
ci.NameTransform = nil // ensure operations.Move does not double-transform here
var (
dest fs.Object
err error
@@ -1881,6 +1906,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.base, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.base.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -1929,7 +1962,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
metaXactID := ""
@@ -2444,7 +2477,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
if len(data) > maxMetadataSizeWritten {
return nil, false, ErrMetaTooBig
}
if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
if len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
return nil, false, errors.New("invalid json")
}
var metadata metaSimpleJSON
@@ -2541,6 +2574,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)


@@ -5,7 +5,7 @@ import (
"context"
"flag"
"fmt"
"io/ioutil"
"io"
"path"
"regexp"
"strings"
@@ -40,7 +40,7 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
})
}
type settings map[string]interface{}
type settings map[string]any
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, path string, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash
@@ -413,7 +413,7 @@ func testSmallFileInternals(t *testing.T, f *Fs) {
if r == nil {
return
}
data, err := ioutil.ReadAll(r)
data, err := io.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
@@ -538,7 +538,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
assert.NoError(t, err)
var chunkContents []byte
assert.NotPanics(t, func() {
chunkContents, err = ioutil.ReadAll(r)
chunkContents, err = io.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
@@ -573,7 +573,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
r, err = willyChunk.Open(ctx)
assert.NoError(t, err)
assert.NotPanics(t, func() {
_, err = ioutil.ReadAll(r)
_, err = io.ReadAll(r)
_ = r.Close()
})
assert.NoError(t, err)
@@ -672,7 +672,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
assert.NoError(t, err, "open "+description)
assert.NotNil(t, r, "open stream of "+description)
if err == nil && r != nil {
data, err := ioutil.ReadAll(r)
data, err := io.ReadAll(r)
assert.NoError(t, err, "read all of "+description)
assert.Equal(t, contents, string(data), description+" contents is ok")
_ = r.Close()
@@ -758,8 +758,8 @@ func testFutureProof(t *testing.T, f *Fs) {
assert.Error(t, err)
// Rcat must fail
in := ioutil.NopCloser(bytes.NewBufferString("abc"))
robj, err := operations.Rcat(ctx, f, file, in, modTime)
in := io.NopCloser(bytes.NewBufferString("abc"))
robj, err := operations.Rcat(ctx, f, file, in, modTime, nil)
assert.Nil(t, robj)
assert.NotNil(t, err)
if err != nil {
@@ -854,7 +854,7 @@ func testChunkerServerSideMove(t *testing.T, f *Fs) {
r, err := dstFile.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
data, err := ioutil.ReadAll(r)
data, err := io.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()


@@ -36,14 +36,17 @@ func TestIntegration(t *testing.T) {
"GetTier",
"SetTier",
"Metadata",
"SetMetadata",
},
UnimplementableFsMethods: []string{
"PublicLink",
"OpenWriterAt",
"OpenChunkWriter",
"MergeDirs",
"DirCacheFlush",
"UserInfo",
"Disconnect",
"ListP",
},
}
if *fstest.RemoteName == "" {


@@ -0,0 +1,48 @@
// Package api has type definitions for cloudinary
package api
import (
"fmt"
)
// CloudinaryEncoder extends the built-in encoder
type CloudinaryEncoder interface {
// FromStandardPath takes a / separated path in Standard encoding
// and converts it to a / separated path in this encoding.
FromStandardPath(string) string
// FromStandardName takes name in Standard encoding and converts
// it in this encoding.
FromStandardName(string) string
// ToStandardPath takes a / separated path in this encoding
// and converts it to a / separated path in Standard encoding.
ToStandardPath(string) string
// ToStandardName takes name in this encoding and converts
// it in Standard encoding.
ToStandardName(string, string) string
// Encoded root of the remote (as passed into NewFs)
FromStandardFullPath(string) string
}
// UpdateOptions was created to pass options from Update to Put
type UpdateOptions struct {
PublicID string
ResourceType string
DeliveryType string
AssetFolder string
DisplayName string
}
// Header formats the option as a string
func (o *UpdateOptions) Header() (string, string) {
return "UpdateOption", fmt.Sprintf("%s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
// Mandatory returns whether the option must be parsed or can be ignored
func (o *UpdateOptions) Mandatory() bool {
return false
}
// String formats the option into human-readable form
func (o *UpdateOptions) String() string {
return fmt.Sprintf("Fully qualified Public ID: %s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
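Aside: UpdateOptions carries the Header/Mandatory/String method set that rclone's per-call options use, letting the backend thread upload settings through the generic options slice. A hedged sketch of the pattern with an invented TagOption type:
package main

import "fmt"

// Option mirrors the method set rclone option values implement.
type Option interface {
	Header() (string, string)
	Mandatory() bool
	String() string
}

// TagOption is a hypothetical per-upload setting, not a real rclone option.
type TagOption struct{ Tag string }

func (o *TagOption) Header() (string, string) { return "Tag", o.Tag }
func (o *TagOption) Mandatory() bool          { return false }
func (o *TagOption) String() string           { return "tag=" + o.Tag }

func main() {
	opts := []Option{&TagOption{Tag: "archive"}}
	for _, o := range opts {
		k, v := o.Header()
		fmt.Println(k, v, o.Mandatory())
	}
}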


@@ -0,0 +1,754 @@
// Package cloudinary provides an interface to the Cloudinary DAM
package cloudinary
import (
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"slices"
"strconv"
"strings"
"time"
"github.com/cloudinary/cloudinary-go/v2"
SDKApi "github.com/cloudinary/cloudinary-go/v2/api"
"github.com/cloudinary/cloudinary-go/v2/api/admin"
"github.com/cloudinary/cloudinary-go/v2/api/admin/search"
"github.com/cloudinary/cloudinary-go/v2/api/uploader"
"github.com/rclone/rclone/backend/cloudinary/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"github.com/zeebo/blake3"
)
// Cloudinary shouldn't have a trailing dot if there is no path
func cldPathDir(somePath string) string {
if somePath == "" || somePath == "." {
return somePath
}
dir := path.Dir(somePath)
if dir == "." {
return ""
}
return dir
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "cloudinary",
Description: "Cloudinary",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "cloud_name",
Help: "Cloudinary Environment Name",
Required: true,
Sensitive: true,
},
{
Name: "api_key",
Help: "Cloudinary API Key",
Required: true,
Sensitive: true,
},
{
Name: "api_secret",
Help: "Cloudinary API Secret",
Required: true,
Sensitive: true,
},
{
Name: "upload_prefix",
Help: "Specify the API endpoint for environments out of the US",
},
{
Name: "upload_preset",
Help: "Upload Preset to select asset manipulation on upload",
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base | // Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
encoder.EncodeSlash |
encoder.EncodeLtGt |
encoder.EncodeDoubleQuote |
encoder.EncodeQuestion |
encoder.EncodeAsterisk |
encoder.EncodePipe |
encoder.EncodeHash |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeDel |
encoder.EncodeCtl |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
},
{
Name: "eventually_consistent_delay",
Default: fs.Duration(0),
Advanced: true,
Help: "Wait N seconds for eventual consistency of the databases that support the backend operation",
},
{
Name: "adjust_media_files_extensions",
Default: true,
Advanced: true,
Help: "Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems",
},
{
Name: "media_extensions",
Default: []string{
"3ds", "3g2", "3gp", "ai", "arw", "avi", "avif", "bmp", "bw",
"cr2", "cr3", "djvu", "dng", "eps3", "fbx", "flif", "flv", "gif",
"glb", "gltf", "hdp", "heic", "heif", "ico", "indd", "jp2", "jpe",
"jpeg", "jpg", "jxl", "jxr", "m2ts", "mov", "mp4", "mpeg", "mts",
"mxf", "obj", "ogv", "pdf", "ply", "png", "psd", "svg", "tga",
"tif", "tiff", "ts", "u3ma", "usdz", "wdp", "webm", "webp", "wmv"},
Advanced: true,
Help: "Cloudinary supported media extensions",
},
},
})
}
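// A minimal remote definition using the options registered above
// (illustrative sketch; all credential values are placeholders):
//
//	[cloudinary]
//	type = cloudinary
//	cloud_name = my-cloud
//	api_key = 123456789012345
//	api_secret = some-secret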
// Options defines the configuration for this backend
type Options struct {
CloudName string `config:"cloud_name"`
APIKey string `config:"api_key"`
APISecret string `config:"api_secret"`
UploadPrefix string `config:"upload_prefix"`
UploadPreset string `config:"upload_preset"`
Enc encoder.MultiEncoder `config:"encoding"`
EventuallyConsistentDelay fs.Duration `config:"eventually_consistent_delay"`
MediaExtensions []string `config:"media_extensions"`
AdjustMediaFilesExtensions bool `config:"adjust_media_files_extensions"`
}
// Fs represents a remote cloudinary server
type Fs struct {
name string
root string
opt Options
features *fs.Features
pacer *fs.Pacer
srv *rest.Client // For downloading assets via the Cloudinary CDN
cld *cloudinary.Cloudinary // API calls are going through the Cloudinary SDK
lastCRUD time.Time
}
// Object describes a cloudinary object
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
url string
md5sum string
publicID string
resourceType string
deliveryType string
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Initialize the Cloudinary client
cld, err := cloudinary.NewFromParams(opt.CloudName, opt.APIKey, opt.APISecret)
if err != nil {
return nil, fmt.Errorf("failed to create Cloudinary client: %w", err)
}
cld.Admin.Client = *fshttp.NewClient(ctx)
cld.Upload.Client = *fshttp.NewClient(ctx)
if opt.UploadPrefix != "" {
cld.Config.API.UploadPrefix = opt.UploadPrefix
}
client := fshttp.NewClient(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
cld: cld,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(1000), pacer.MaxSleep(10000), pacer.DecayConstant(2))),
srv: rest.NewClient(client),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
if root != "" {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = cldPathDir(root)
_, err := f.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound || errors.Is(err, fs.ErrorNotAFile) {
// File doesn't exist so return the previous root
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// ------------------------------------------------------------
// FromStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.FromStandardPath(s), "&", "\uFF06")
}
// FromStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardName(s string) string {
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(s)
ext := ""
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
s = strings.TrimSuffix(parsedURL.Path, ext)
}
}
}
return strings.ReplaceAll(f.opt.Enc.FromStandardName(s), "&", "\uFF06")
}
// ToStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.ToStandardPath(s), "\uFF06", "&")
}
// ToStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardName(s string, assetURL string) string {
ext := ""
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(assetURL)
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if !slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
ext = ""
}
}
}
return strings.ReplaceAll(f.opt.Enc.ToStandardName(s), "\uFF06", "&") + ext
}
// FromStandardFullPath encodes a full path to Cloudinary standard
func (f *Fs) FromStandardFullPath(dir string) string {
return path.Join(api.CloudinaryEncoder.FromStandardPath(f, f.root), api.CloudinaryEncoder.FromStandardPath(f, dir))
}
// ToAssetFolderAPI encodes folders as expected by the Cloudinary SDK
func (f *Fs) ToAssetFolderAPI(dir string) string {
return strings.ReplaceAll(dir, "%", "%25")
}
// ToDisplayNameElastic encodes a special case of elasticsearch
func (f *Fs) ToDisplayNameElastic(dir string) string {
return strings.ReplaceAll(dir, "!", "\\!")
}
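// Illustrative effect of the two escaping helpers above:
//
//	ToAssetFolderAPI("50%_off")      => "50%25_off"   // '%' double-escaped for the folders API
//	ToDisplayNameElastic("hi!there") => `hi\!there`   // '!' escaped for the elasticsearch query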
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// WaitEventuallyConsistent waits till the FS is eventually consistent
func (f *Fs) WaitEventuallyConsistent() {
if f.opt.EventuallyConsistentDelay == fs.Duration(0) {
return
}
delay := time.Duration(f.opt.EventuallyConsistentDelay)
timeSinceLastCRUD := time.Since(f.lastCRUD)
if timeSinceLastCRUD < delay {
time.Sleep(delay - timeSinceLastCRUD)
}
}
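// For example (illustrative): with eventually_consistent_delay = 7s, a
// listing issued 2s after the last write sleeps the remaining 5s before
// querying, while one issued 10s after the last write does not sleep.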
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Cloudinary root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// List the objects and directories in dir into entries
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
remotePrefix := f.FromStandardFullPath(dir)
if remotePrefix != "" && !strings.HasSuffix(remotePrefix, "/") {
remotePrefix += "/"
}
var entries fs.DirEntries
dirs := make(map[string]struct{})
nextCursor := ""
f.WaitEventuallyConsistent()
for {
// Use the Folders API to list folders.
folderParams := admin.SubFoldersParams{
Folder: f.ToAssetFolderAPI(remotePrefix),
MaxResults: 500,
}
if nextCursor != "" {
folderParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return nil, fmt.Errorf("failed to list sub-folders: %w", err)
}
if results.Error.Message != "" {
if strings.HasPrefix(results.Error.Message, "Can't find folder with path") {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("failed to list sub-folders: %s", results.Error.Message)
}
for _, folder := range results.Folders {
relativePath := api.CloudinaryEncoder.ToStandardPath(f, strings.TrimPrefix(folder.Path, remotePrefix))
parts := strings.Split(relativePath, "/")
// It's a directory
dirName := parts[len(parts)-1]
if _, found := dirs[dirName]; !found {
d := fs.NewDir(path.Join(dir, dirName), time.Time{})
entries = append(entries, d)
dirs[dirName] = struct{}{}
}
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
for {
// Use the assets.AssetsByAssetFolder API to list assets
assetsParams := admin.AssetsByAssetFolderParams{
AssetFolder: remotePrefix,
MaxResults: 500,
}
if nextCursor != "" {
assetsParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.AssetsByAssetFolder(ctx, assetsParams)
if err != nil {
return nil, fmt.Errorf("failed to list assets: %w", err)
}
for _, asset := range results.Assets {
remote := path.Join(dir, api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName, asset.SecureURL))
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.CreatedAt,
url: asset.SecureURL,
publicID: asset.PublicID,
resourceType: asset.AssetType,
deliveryType: asset.Type,
}
entries = append(entries, o)
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
return entries, nil
}
// NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
searchParams := search.Query{
Expression: fmt.Sprintf("asset_folder:\"%s\" AND display_name:\"%s\"",
f.FromStandardFullPath(cldPathDir(remote)),
f.ToDisplayNameElastic(api.CloudinaryEncoder.FromStandardName(f, path.Base(remote)))),
SortBy: []search.SortByField{{"uploaded_at": "desc"}},
MaxResults: 2,
}
var results *admin.SearchResult
f.WaitEventuallyConsistent()
err := f.pacer.Call(func() (bool, error) {
var err1 error
results, err1 = f.cld.Admin.Search(ctx, searchParams)
if err1 == nil && results.TotalCount != len(results.Assets) {
err1 = errors.New("partial response so waiting for eventual consistency")
}
return shouldRetry(ctx, nil, err1)
})
if err != nil {
return nil, fs.ErrorObjectNotFound
}
if results.TotalCount == 0 || len(results.Assets) == 0 {
return nil, fs.ErrorObjectNotFound
}
asset := results.Assets[0]
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.UploadedAt,
url: asset.SecureURL,
md5sum: asset.Etag,
publicID: asset.PublicID,
resourceType: asset.ResourceType,
deliveryType: asset.Type,
}
return o, nil
}
func (f *Fs) getSuggestedPublicID(assetFolder string, displayName string, modTime time.Time) string {
payload := []byte(path.Join(assetFolder, displayName))
hash := blake3.Sum256(payload)
return hex.EncodeToString(hash[:])
}
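// A minimal sketch of the derivation above (values are hypothetical).
// Note that modTime is accepted but not currently folded into the hash,
// so the suggested ID is stable for a given folder/name pair:
//
//	sum := blake3.Sum256([]byte(path.Join("photos/2024", "cat.jpg")))
//	id := hex.EncodeToString(sum[:]) // 64 lowercase hex characters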
// Put uploads content to Cloudinary
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
params := uploader.UploadParams{
UploadPreset: f.opt.UploadPreset,
}
updateObject := false
var modTime time.Time
for _, option := range options {
if updateOptions, ok := option.(*api.UpdateOptions); ok {
if updateOptions.PublicID != "" {
updateObject = true
params.Overwrite = SDKApi.Bool(true)
params.Invalidate = SDKApi.Bool(true)
params.PublicID = updateOptions.PublicID
params.ResourceType = updateOptions.ResourceType
params.Type = SDKApi.DeliveryType(updateOptions.DeliveryType)
params.AssetFolder = updateOptions.AssetFolder
params.DisplayName = updateOptions.DisplayName
modTime = src.ModTime(ctx)
}
}
}
if !updateObject {
params.AssetFolder = f.FromStandardFullPath(cldPathDir(src.Remote()))
params.DisplayName = api.CloudinaryEncoder.FromStandardName(f, path.Base(src.Remote()))
// We want to conform to the unique asset ID of rclone, which is (asset_folder,display_name,last_modified).
// We also want to enable customers to choose their own public_id, in case duplicate names are not a crucial use case.
// Upload_presets that apply randomness to the public ID would not work well with rclone duplicate assets support.
params.FilenameOverride = f.getSuggestedPublicID(params.AssetFolder, params.DisplayName, src.ModTime(ctx))
}
uploadResult, err := f.cld.Upload.Upload(ctx, in, params)
f.lastCRUD = time.Now()
if err != nil {
return nil, fmt.Errorf("failed to upload to Cloudinary: %w", err)
}
if !updateObject {
modTime = uploadResult.CreatedAt
}
if uploadResult.Error.Message != "" {
return nil, errors.New(uploadResult.Error.Message)
}
o := &Object{
fs: f,
remote: src.Remote(),
size: int64(uploadResult.Bytes),
modTime: modTime,
url: uploadResult.SecureURL,
md5sum: uploadResult.Etag,
publicID: uploadResult.PublicID,
resourceType: uploadResult.ResourceType,
deliveryType: uploadResult.Type,
}
return o, nil
}
// Precision of the remote
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash sets
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// Mkdir creates empty folders
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
params := admin.CreateFolderParams{Folder: f.ToAssetFolderAPI(f.FromStandardFullPath(dir))}
res, err := f.cld.Admin.CreateFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
return nil
}
// Rmdir deletes empty folders
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Additional check because Cloudinary will delete folders that contain
// no assets, even if they still have empty sub-folders
folder := f.ToAssetFolderAPI(f.FromStandardFullPath(dir))
folderParams := admin.SubFoldersParams{
Folder: folder,
MaxResults: 1,
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return err
}
if results.TotalCount > 0 {
return fs.ErrorDirectoryNotEmpty
}
params := admin.DeleteFolderParams{Folder: folder}
res, err := f.cld.Admin.DeleteFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
if strings.HasPrefix(res.Error.Message, "Can't find folder with path") {
return fs.ErrorDirNotFound
}
return errors.New(res.Error.Message)
}
return nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
420, // Too Many Requests (legacy)
429, // Too Many Requests
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
tryAgain := "Try again on "
if idx := strings.Index(err.Error(), tryAgain); idx != -1 {
layout := "2006-01-02 15:04:05 UTC"
dateStr := err.Error()[idx+len(tryAgain) : idx+len(tryAgain)+len(layout)]
timestamp, err2 := time.Parse(layout, dateStr)
if err2 == nil {
return true, fserrors.NewErrorRetryAfter(time.Until(timestamp))
}
}
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
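// For example (illustrative), a Cloudinary rate-limit error such as
//
//	"Rate Limit Exceeded. Try again on 2006-01-02 15:04:05 UTC"
//
// is converted into fserrors.NewErrorRetryAfter with the remaining wait,
// so the pacer sleeps until the stated time instead of retrying blindly.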
// ------------------------------------------------------------
// Hash returns the MD5 of an object
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
if ty != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5sum, nil
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// Size of object in bytes
func (o *Object) Size() int64 {
return o.size
}
// Storable returns if this object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: o.url,
Options: options,
}
var offset int64
var count int64
var key string
var value string
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
key, value = option.Header()
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
key, value = option.Header()
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if key != "" && value != "" {
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders[key] = value
}
// Make sure that the asset is fully available
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if err == nil {
cl, clErr := strconv.Atoi(resp.Header.Get("content-length"))
if clErr == nil && count == int64(cl) {
return false, nil
}
}
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed download of \"%s\": %w", o.url, err)
}
return resp.Body, err
}
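// Illustrative: a caller asking for bytes 100-199 passes
// &fs.RangeOption{Start: 100, End: 199}; Decode yields offset=100 and
// count=100, and option.Header() becomes the "Range: bytes=100-199"
// header on the CDN GET, which the content-length check above verifies.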
// Update the object with the contents of the io.Reader
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
options = append(options, &api.UpdateOptions{
PublicID: o.publicID,
ResourceType: o.resourceType,
DeliveryType: o.deliveryType,
DisplayName: api.CloudinaryEncoder.FromStandardName(o.fs, path.Base(o.Remote())),
AssetFolder: o.fs.FromStandardFullPath(cldPathDir(o.Remote())),
})
updatedObj, err := o.fs.Put(ctx, in, src, options...)
if err != nil {
return err
}
if uo, ok := updatedObj.(*Object); ok {
o.size = uo.size
o.modTime = time.Now() // Skipping uo.modTime because the API returns the create time
o.url = uo.url
o.md5sum = uo.md5sum
o.publicID = uo.publicID
o.resourceType = uo.resourceType
o.deliveryType = uo.deliveryType
}
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := uploader.DestroyParams{
PublicID: o.publicID,
ResourceType: o.resourceType,
Type: o.deliveryType,
}
res, dErr := o.fs.cld.Upload.Destroy(ctx, params)
o.fs.lastCRUD = time.Now()
if dErr != nil {
return dErr
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
if res.Result != "ok" {
return errors.New(res.Result)
}
return nil
}

View File

@@ -0,0 +1,23 @@
// Test Cloudinary filesystem interface
package cloudinary_test
import (
"testing"
"github.com/rclone/rclone/backend/cloudinary"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestCloudinary"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*cloudinary.Object)(nil),
SkipInvalidUTF8: true,
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "eventually_consistent_delay", Value: "7"},
},
})
}

View File

@@ -1,4 +1,4 @@
// Package combine implents a backend to combine multiple remotes in a directory tree
// Package combine implements a backend to combine multiple remotes in a directory tree
package combine
/*
@@ -20,6 +20,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"golang.org/x/sync/errgroup"
@@ -186,7 +187,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
g, gCtx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
@@ -222,30 +222,40 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
canMove, slowHash := true, false
for _, u := range f.upstreams {
features = features.Mask(ctx, u.f) // Mask all upstream fs
if !operations.CanServerSideMove(u.f) {
canMove = false
}
slowHash = slowHash || u.f.Features().SlowHash
}
// We can move if all remotes support Move or Copy
if canMove {
features.Move = f.Move
}
// If any of upstreams are SlowHash, propagate it
features.SlowHash = slowHash
// Enable ListR when upstreams either support ListR or are local
// But not when all upstreams are local
if features.ListR == nil {
@@ -259,6 +269,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// Enable ListP always
features.ListP = f.ListP
// Enable Purge when any upstreams support it
if features.Purge == nil {
for _, u := range f.upstreams {
@@ -289,6 +302,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// Enable CleanUp when any upstreams support it
if features.CleanUp == nil {
for _, u := range f.upstreams {
if u.f.Features().CleanUp != nil {
features.CleanUp = f.CleanUp
break
}
}
}
// Enable ChangeNotify when any upstreams support it
if features.ChangeNotify == nil {
for _, u := range f.upstreams {
@@ -299,6 +322,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// show that we wrap other backends
features.Overlay = true
f.features = features
// Get common intersection of hashes
@@ -343,7 +369,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
@@ -351,7 +376,7 @@ func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream
return g.Wait()
}
// join the elements together but unline path.Join return empty string
// join the elements together but unlike path.Join return empty string
func join(elem ...string) string {
result := path.Join(elem...)
if result == "." {
@@ -426,6 +451,32 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return u.f.Mkdir(ctx, uRemote)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
do := u.f.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, uRemote, metadata)
if err != nil {
return nil, err
}
entries := fs.DirEntries{newDir}
entries, err = u.wrapEntries(ctx, entries)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// purge the upstream or fallback to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
@@ -584,7 +635,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var uChans []chan time.Duration
for _, u := range f.upstreams {
u := u
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)
@@ -631,7 +681,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bo
if err != nil {
return nil, err
}
uSrc := operations.NewOverrideRemote(src, uRemote)
uSrc := fs.NewOverrideRemote(src, uRemote)
var o fs.Object
if stream {
o, err = u.f.Features().PutStream(ctx, in, uSrc, options...)
@@ -741,12 +791,11 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newDir := fs.NewDirCopy(ctx, x)
newPath, err := u.pathAdjustment.do(newDir.Remote())
newPath, err := u.pathAdjustment.do(x.Remote())
if err != nil {
return nil, err
}
newDir.SetRemote(newPath)
newDir := fs.NewDirWrapper(newPath, x)
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
@@ -765,24 +814,52 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
entries := make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewDir(combineDir, f.when)
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
entries = append(entries, d)
}
return entries, nil
return callback(entries)
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
return err
}
entries, err = u.f.List(ctx, uRemote)
if err != nil {
return nil, err
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := u.wrapEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return u.wrapEntries(ctx, entries)
listP := u.f.Features().ListP
if listP == nil {
entries, err := u.f.List(ctx, uRemote)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, uRemote, wrappedCallback)
}
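// Illustrative caller-side use of ListP (hypothetical helper code):
//
//	err := f.ListP(ctx, "dir", func(entries fs.DirEntries) error {
//		for _, entry := range entries {
//			fmt.Println(entry.Remote())
//		}
//		return nil // returning an error stops the listing immediately
//	})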
// ListR lists the objects and directories of the Fs starting
@@ -887,6 +964,116 @@ func (f *Fs) Shutdown(ctx context.Context) error {
})
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return "", err
}
do := u.f.Features().PublicLink
if do == nil {
return "", fs.ErrorNotImplemented
}
return do(ctx, uRemote, expire, unlink)
}
// PutUnchecked in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
//
// May create duplicates or return errors if src already
// exists.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
srcPath := src.Remote()
u, uRemote, err := f.findUpstream(srcPath)
if err != nil {
return nil, err
}
do := u.f.Features().PutUnchecked
if do == nil {
return nil, fs.ErrorNotImplemented
}
uSrc := fs.NewOverrideRemote(src, uRemote)
return do(ctx, in, uSrc, options...)
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
if len(dirs) == 0 {
return nil
}
var (
u *upstream
uDirs []fs.Directory
)
for _, dir := range dirs {
uNew, uDir, err := f.findUpstream(dir.Remote())
if err != nil {
return err
}
if u == nil {
u = uNew
} else if u != uNew {
return fmt.Errorf("can't merge directories from different upstreams")
}
uDirs = append(uDirs, fs.NewOverrideDirectory(dir, uDir))
}
do := u.f.Features().MergeDirs
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, uDirs)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
u, uDir, err := f.findUpstream(dir)
if err != nil {
return err
}
if uDir == "" {
fs.Debugf(dir, "Can't set modtime on upstream root. skipping.")
return nil
}
if do := u.f.Features().DirSetModTime; do != nil {
return do(ctx, uDir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().CleanUp; do != nil {
return do(ctx)
}
return nil
})
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := u.f.Features().OpenWriterAt
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx, uRemote, size)
}
// Object describes a wrapped Object
//
// This is a wrapped Object which knows its path prefix
@@ -916,7 +1103,7 @@ func (o *Object) String() string {
func (o *Object) Remote() string {
newPath, err := o.u.pathAdjustment.do(o.Object.String())
if err != nil {
fs.Errorf(o, "Bad object: %v", err)
fs.Errorf(o.Object, "Bad object: %v", err)
return err.Error()
}
return newPath
@@ -965,6 +1152,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
@@ -988,5 +1186,12 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)
)

View File

@@ -10,6 +10,11 @@ import (
"github.com/rclone/rclone/fstest/fstests"
)
var (
unimplementableFsMethods = []string{"UnWrap", "WrapFs", "SetWrapper", "UserInfo", "Disconnect", "OpenChunkWriter"}
unimplementableObjectMethods = []string{}
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
@@ -17,8 +22,8 @@ func TestIntegration(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -35,7 +40,9 @@ func TestLocal(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -51,7 +58,9 @@ func TestMemory(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -68,6 +77,8 @@ func TestMixed(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}

View File

@@ -13,8 +13,8 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"path"
"regexp"
"strings"
"time"
@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
@@ -38,6 +39,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@@ -173,20 +175,33 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
opt: *opt,
mode: compressionModeFromName(opt.CompressionMode),
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// We support reading MIME types no matter the wrapped fs
f.features.ReadMimeType = true
@@ -194,6 +209,8 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
if !operations.CanServerSideMove(wrappedFs) {
f.features.Disable("PutStream")
}
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -257,6 +274,16 @@ func isMetadataFile(filename string) bool {
return strings.HasSuffix(filename, metaFileExt)
}
// Checks whether a file is a metadata file and returns the original
// file name and a flag indicating whether it was a metadata file or
// not.
func unwrapMetadataFile(filename string) (string, bool) {
if !isMetadataFile(filename) {
return "", false
}
return filename[:len(filename)-len(metaFileExt)], true
}
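// Illustrative behaviour, assuming metaFileExt is ".json":
//
//	unwrapMetadataFile("a/b.txt.json") => ("a/b.txt", true)
//	unwrapMetadataFile("a/b.txt")      => ("", false)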
// makeDataName generates the file name for a data file with specified compression mode
func makeDataName(remote string, size int64, mode int) (newRemote string) {
if mode != Uncompressed {
@@ -328,11 +355,39 @@ func (f *Fs) processEntries(entries fs.DirEntries) (newEntries fs.DirEntries, er
// found.
// List entries and process them
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, dir)
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.processEntries(entries)
if err != nil {
return err
}
return callback(entries)
}
return f.processEntries(entries)
listP := f.Fs.Features().ListP
if listP == nil {
entries, err := f.Fs.List(ctx, dir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting
@@ -432,7 +487,7 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ %q vs %q", ht, srcHash, dstHash)
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
return nil
}
@@ -468,7 +523,7 @@ func (f *Fs) rcat(ctx context.Context, dstFileName string, in io.ReadCloser, mod
}
fs.Debugf(f, "Target remote doesn't support streaming uploads, creating temporary local file")
tempFile, err := ioutil.TempFile("", "rclone-press-")
tempFile, err := os.CreateTemp("", "rclone-press-")
defer func() {
// these errors should be relatively uncritical and the upload should've succeeded so it's okay-ish
// to ignore them
@@ -546,8 +601,8 @@ func (f *Fs) putCompress(ctx context.Context, in io.Reader, src fs.ObjectInfo, o
}
// Transfer the data
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), ioutil.NopCloser(wrappedIn), src.ModTime(ctx), options)
//o, err := operations.Rcat(ctx, f.Fs, makeDataName(src.Remote(), src.Size(), f.mode), ioutil.NopCloser(wrappedIn), src.ModTime(ctx))
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
//o, err := operations.Rcat(ctx, f.Fs, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx))
if err != nil {
if o != nil {
removeErr := o.Remove(ctx)
@@ -766,6 +821,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -909,6 +972,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.Fs, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@@ -979,7 +1050,8 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
fs.Logf(f, "path %q entryType %d", path, entryType)
var (
wrappedPath string
wrappedPath string
isMetadataFile bool
)
switch entryType {
case fs.EntryDirectory:
@@ -987,7 +1059,10 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
case fs.EntryObject:
// Note: All we really need to do to monitor the object is to check whether the metadata changed,
// as the metadata contains the hash. This will work unless there's a hash collision and the sizes stay the same.
wrappedPath = makeMetadataName(path)
wrappedPath, isMetadataFile = unwrapMetadataFile(path)
if !isMetadataFile {
return
}
default:
fs.Errorf(path, "press ChangeNotify: ignoring unknown EntryType %d", entryType)
return
@@ -1243,6 +1318,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
@@ -1308,7 +1394,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {
@@ -1475,6 +1561,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)

View File

@@ -14,23 +14,26 @@ import (
"github.com/rclone/rclone/fstest/fstests"
)
var defaultOpt = fstests.Opt{
RemoteName: "TestCompress:",
NilObject: (*Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
"OpenChunkWriter",
"MergeDirs",
"DirCacheFlush",
"PutUnchecked",
"PutStream",
"UserInfo",
"Disconnect",
},
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
UnimplementableObjectMethods: []string{},
}
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
opt := fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"PutUnchecked",
"PutStream",
"UserInfo",
"Disconnect",
},
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
UnimplementableObjectMethods: []string{}}
fstests.Run(t, &opt)
fstests.Run(t, &defaultOpt)
}
// TestRemoteGzip tests GZIP compression
@@ -40,27 +43,13 @@ func TestRemoteGzip(t *testing.T) {
}
tempdir := filepath.Join(os.TempDir(), "rclone-compress-test-gzip")
name := "TestCompressGzip"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"PutUnchecked",
"PutStream",
"UserInfo",
"Disconnect",
},
UnimplementableObjectMethods: []string{
"GetTier",
"SetTier",
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "compression_mode", Value: "gzip"},
},
QuickTestOK: true,
})
opt := defaultOpt
opt.RemoteName = name + ":"
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "compression_mode", Value: "gzip"},
}
opt.QuickTestOK = true
fstests.Run(t, &opt)
}

View File

@@ -21,6 +21,7 @@ import (
"github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/version"
"github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox"
@@ -37,7 +38,6 @@ const (
blockHeaderSize = secretbox.Overhead
blockDataSize = 64 * 1024
blockSize = blockHeaderSize + blockDataSize
encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
)
// Errors returned by cipher
@@ -53,8 +53,9 @@ var (
ErrorEncryptedBadBlock = errors.New("failed to authenticate decrypted block - bad password?")
ErrorBadBase32Encoding = errors.New("bad base32 filename encoding")
ErrorFileClosed = errors.New("file already closed")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - does not match suffix")
ErrorBadSeek = errors.New("Seek beyond end of file")
ErrorSuffixMissingDot = errors.New("suffix config setting should include a '.'")
defaultSalt = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
obfuscQuoteRune = '!'
)
@@ -169,27 +170,30 @@ func NewNameEncoding(s string) (enc fileNameEncoding, err error) {
// Cipher defines an encoding and decoding cipher for the crypt backend
type Cipher struct {
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
fileNameEnc fileNameEncoding
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
fileNameEnc fileNameEncoding
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool
passBadBlocks bool // if set, pass bad blocks through as zeroed blocks
encryptedSuffix string
}
// newCipher initialises the cipher. If salt is "" then it uses a built-in salt value
func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool, enc fileNameEncoding) (*Cipher, error) {
c := &Cipher{
mode: mode,
fileNameEnc: enc,
cryptoRand: rand.Reader,
dirNameEncrypt: dirNameEncrypt,
mode: mode,
fileNameEnc: enc,
cryptoRand: rand.Reader,
dirNameEncrypt: dirNameEncrypt,
encryptedSuffix: ".bin",
}
c.buffers.New = func() interface{} {
return make([]byte, blockSize)
c.buffers.New = func() any {
return new([blockSize]byte)
}
err := c.Key(password, salt)
if err != nil {
@@ -198,11 +202,29 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
return c, nil
}
// setEncryptedSuffix sets the suffix, or an empty string if "none" is given
func (c *Cipher) setEncryptedSuffix(suffix string) {
if strings.EqualFold(suffix, "none") {
c.encryptedSuffix = ""
return
}
if !strings.HasPrefix(suffix, ".") {
fs.Errorf(nil, "crypt: bad suffix: %v", ErrorSuffixMissingDot)
suffix = "." + suffix
}
c.encryptedSuffix = suffix
}
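// Illustrative behaviour of setEncryptedSuffix:
//
//	setEncryptedSuffix(".jpg") => encrypted names get a ".jpg" suffix
//	setEncryptedSuffix("jpg")  => logs ErrorSuffixMissingDot, then uses ".jpg"
//	setEncryptedSuffix("none") => no suffix is appended (case-insensitive)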
// setPassBadBlocks sets bad block pass-through
func (c *Cipher) setPassBadBlocks(passBadBlocks bool) {
c.passBadBlocks = passBadBlocks
}
// Key creates all the internal keys from the password passed in using
// scrypt.
//
// If salt is "" we use a fixed salt just to make attackers lives
// slighty harder than using no salt.
// slightly harder than using no salt.
//
// Note that empty password makes all 0x00 keys which is used in the
// tests.
@@ -230,15 +252,12 @@ func (c *Cipher) Key(password, salt string) (err error) {
}
// getBlock gets a block from the pool of size blockSize
func (c *Cipher) getBlock() []byte {
return c.buffers.Get().([]byte)
func (c *Cipher) getBlock() *[blockSize]byte {
return c.buffers.Get().(*[blockSize]byte)
}
// putBlock returns a block to the pool of size blockSize
func (c *Cipher) putBlock(buf []byte) {
if len(buf) != blockSize {
panic("bad blocksize returned to pool")
}
func (c *Cipher) putBlock(buf *[blockSize]byte) {
c.buffers.Put(buf)
}
@@ -310,14 +329,14 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
_, _ = result.WriteString(strconv.Itoa(dir) + ".")
// but we'll augment it with the nameKey for real calculation
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -399,7 +418,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
}
// add the nameKey to get the real rotate distance
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -431,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@@ -508,7 +527,7 @@ func (c *Cipher) encryptFileName(in string) string {
// EncryptFileName encrypts a file path
func (c *Cipher) EncryptFileName(in string) string {
if c.mode == NameEncryptionOff {
return in + encryptedSuffix
return in + c.encryptedSuffix
}
return c.encryptFileName(in)
}
@@ -568,8 +587,8 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
// DecryptFileName decrypts a file path
func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
remainingLength := len(in) - len(c.encryptedSuffix)
if remainingLength == 0 || !strings.HasSuffix(in, c.encryptedSuffix) {
return "", ErrorNotAnEncryptedFile
}
decrypted := in[:remainingLength]
@@ -609,7 +628,7 @@ func (n *nonce) pointer() *[fileNonceSize]byte {
// fromReader fills the nonce from an io.Reader - normally the OSes
// crypto random number generator
func (n *nonce) fromReader(in io.Reader) error {
read, err := io.ReadFull(in, (*n)[:])
read, err := readers.ReadFill(in, (*n)[:])
if read != fileNonceSize {
return fmt.Errorf("short read of nonce: %w", err)
}
@@ -645,7 +664,7 @@ func (n *nonce) increment() {
// add a uint64 to the nonce
func (n *nonce) add(x uint64) {
carry := uint16(0)
for i := 0; i < 8; i++ {
for i := range 8 {
digit := (*n)[i]
xDigit := byte(x)
x >>= 8
@@ -664,8 +683,8 @@ type encrypter struct {
in io.Reader
c *Cipher
nonce nonce
buf []byte
readBuf []byte
buf *[blockSize]byte
readBuf *[blockSize]byte
bufIndex int
bufSize int
err error
@@ -690,9 +709,9 @@ func (c *Cipher) newEncrypter(in io.Reader, nonce *nonce) (*encrypter, error) {
}
}
// Copy magic into buffer
copy(fh.buf, fileMagicBytes)
copy((*fh.buf)[:], fileMagicBytes)
// Copy nonce into buffer
copy(fh.buf[fileMagicSize:], fh.nonce[:])
copy((*fh.buf)[fileMagicSize:], fh.nonce[:])
return fh, nil
}
@@ -707,22 +726,20 @@ func (fh *encrypter) Read(p []byte) (n int, err error) {
if fh.bufIndex >= fh.bufSize {
// Read data
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf[:blockDataSize]
n, err = io.ReadFull(fh.in, readBuf)
readBuf := (*fh.readBuf)[:blockDataSize]
n, err = readers.ReadFill(fh.in, readBuf)
if n == 0 {
// err can't be nil since:
// n == len(buf) if and only if err == nil.
return fh.finish(err)
}
// possibly err != nil here, but we will process the
// data and the next call to ReadFull will return 0, err
// data and the next call to ReadFill will return 0, err
// Encrypt the block using the nonce
secretbox.Seal(fh.buf[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
secretbox.Seal((*fh.buf)[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
fh.bufIndex = 0
fh.bufSize = blockHeaderSize + n
fh.nonce.increment()
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufSize])
n = copy(p, (*fh.buf)[fh.bufIndex:fh.bufSize])
fh.bufIndex += n
return n, nil
}
@@ -763,8 +780,8 @@ type decrypter struct {
nonce nonce
initialNonce nonce
c *Cipher
buf []byte
readBuf []byte
buf *[blockSize]byte
readBuf *[blockSize]byte
bufIndex int
bufSize int
err error
@@ -782,12 +799,12 @@ func (c *Cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
limit: -1,
}
// Read file header (magic + nonce)
readBuf := fh.readBuf[:fileHeaderSize]
_, err := io.ReadFull(fh.rc, readBuf)
if err == io.EOF || err == io.ErrUnexpectedEOF {
readBuf := (*fh.readBuf)[:fileHeaderSize]
n, err := readers.ReadFill(fh.rc, readBuf)
if n < fileHeaderSize && err == io.EOF {
// This read from 0..fileHeaderSize-1 bytes
return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
} else if err != nil {
} else if err != io.EOF && err != nil {
return nil, fh.finishAndClose(err)
}
// check the magic
@@ -845,10 +862,8 @@ func (c *Cipher) newDecrypterSeek(ctx context.Context, open OpenRangeSeek, offse
func (fh *decrypter) fillBuffer() (err error) {
// FIXME should overlap the reads with a go-routine and 2 buffers?
readBuf := fh.readBuf
n, err := io.ReadFull(fh.rc, readBuf)
n, err := readers.ReadFill(fh.rc, (*readBuf)[:])
if n == 0 {
// err can't be nil since:
// n == len(buf) if and only if err == nil.
return err
}
// possibly err != nil here, but we will process the data and
@@ -856,18 +871,25 @@ func (fh *decrypter) fillBuffer() (err error) {
// Check header + 1 byte exists
if n <= blockHeaderSize {
if err != nil {
if err != nil && err != io.EOF {
return err // return pending error as it is likely more accurate
}
return ErrorEncryptedFileBadHeader
}
// Decrypt the block using the nonce
_, ok := secretbox.Open(fh.buf[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
_, ok := secretbox.Open((*fh.buf)[:0], (*readBuf)[:n], fh.nonce.pointer(), &fh.c.dataKey)
if !ok {
if err != nil {
if err != nil && err != io.EOF {
return err // return pending error as it is likely more accurate
}
return ErrorEncryptedBadBlock
if !fh.c.passBadBlocks {
return ErrorEncryptedBadBlock
}
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
fh.buf[i] = 0
}
}
fh.bufIndex = 0
fh.bufSize = n - blockHeaderSize
@@ -893,7 +915,7 @@ func (fh *decrypter) Read(p []byte) (n int, err error) {
if fh.limit >= 0 && fh.limit < int64(toCopy) {
toCopy = int(fh.limit)
}
n = copy(p, fh.buf[fh.bufIndex:fh.bufIndex+toCopy])
n = copy(p, (*fh.buf)[fh.bufIndex:fh.bufIndex+toCopy])
fh.bufIndex += n
if fh.limit >= 0 {
fh.limit -= int64(n)
@@ -904,9 +926,8 @@ func (fh *decrypter) Read(p []byte) (n int, err error) {
return n, nil
}
// calculateUnderlying converts an (offset, limit) in a crypted file
// into an (underlyingOffset, underlyingLimit) for the underlying
// file.
// calculateUnderlying converts an (offset, limit) in an encrypted file
// into an (underlyingOffset, underlyingLimit) for the underlying file.
//
// It also returns number of bytes to discard after reading the first
// block and number of blocks this is from the start so the nonce can

View File

@@ -8,7 +8,6 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"strings"
"testing"
@@ -28,14 +27,14 @@ func TestNewNameEncryptionMode(t *testing.T) {
{"off", NameEncryptionOff, ""},
{"standard", NameEncryptionStandard, ""},
{"obfuscate", NameEncryptionObfuscated, ""},
{"potato", NameEncryptionOff, "Unknown file name encryption mode \"potato\""},
{"potato", NameEncryptionOff, "unknown file name encryption mode \"potato\""},
} {
actual, actualErr := NewNameEncryptionMode(test.in)
assert.Equal(t, actual, test.expected)
if test.expectedErr == "" {
assert.NoError(t, actualErr)
} else {
assert.Error(t, actualErr, test.expectedErr)
assert.EqualError(t, actualErr, test.expectedErr)
}
}
}
@@ -406,6 +405,13 @@ func TestNonStandardEncryptFileName(t *testing.T) {
// Off mode
c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Off mode with custom suffix
c, _ = newCipher(NameEncryptionOff, "", "", true, nil)
c.setEncryptedSuffix(".jpg")
assert.Equal(t, "1/12/123.jpg", c.EncryptFileName("1/12/123"))
// Off mode with empty suffix
c.setEncryptedSuffix("none")
assert.Equal(t, "1/12/123", c.EncryptFileName("1/12/123"))
// Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true, nil)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
@@ -484,21 +490,27 @@ func TestNonStandardDecryptFileName(t *testing.T) {
in string
expected string
expectedErr error
customSuffix string
}{
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil, ""},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil, ""},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil, ""},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil, ""},
{NameEncryptionOff, true, "1/12/123.jpg", "1/12/123", nil, ".jpg"},
{NameEncryptionOff, true, "1/12/123", "1/12/123", nil, "none"},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil, ""},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil, ""},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil, ""},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil, ""},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil, ""},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, enc)
if test.customSuffix != "" {
c.setEncryptedSuffix(test.customSuffix)
}
actual, actualErr := c.DecryptFileName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
@@ -727,7 +739,7 @@ func TestNonceFromReader(t *testing.T) {
assert.Equal(t, nonce{'1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o'}, x)
buf = bytes.NewBufferString("123456789abcdefghijklmn")
err = x.fromReader(buf)
assert.Error(t, err, "short read of nonce")
assert.EqualError(t, err, "short read of nonce: EOF")
}
func TestNonceFromBuf(t *testing.T) {
@@ -1051,7 +1063,7 @@ func TestRandomSource(t *testing.T) {
_, _ = source.Read(buf)
sink = newRandomSource(1e8)
_, err = io.Copy(sink, source)
assert.Error(t, err, "Error in stream")
assert.EqualError(t, err, "Error in stream at 1")
}
type zeroes struct{}
@@ -1073,7 +1085,7 @@ func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
source := newRandomSource(copySize)
encrypted, err := c.newEncrypter(source, nil)
assert.NoError(t, err)
decrypted, err := c.newDecrypter(ioutil.NopCloser(encrypted))
decrypted, err := c.newDecrypter(io.NopCloser(encrypted))
assert.NoError(t, err)
sink := newRandomSource(copySize)
n, err := io.CopyBuffer(sink, decrypted, buf)
@@ -1144,15 +1156,15 @@ func TestEncryptData(t *testing.T) {
buf := bytes.NewBuffer(test.in)
encrypted, err := c.EncryptData(buf)
assert.NoError(t, err)
out, err := ioutil.ReadAll(encrypted)
out, err := io.ReadAll(encrypted)
assert.NoError(t, err)
assert.Equal(t, test.expected, out)
// Check we can decode the data properly too...
buf = bytes.NewBuffer(out)
decrypted, err := c.DecryptData(ioutil.NopCloser(buf))
decrypted, err := c.DecryptData(io.NopCloser(buf))
assert.NoError(t, err)
out, err = ioutil.ReadAll(decrypted)
out, err = io.ReadAll(decrypted)
assert.NoError(t, err)
assert.Equal(t, test.in, out)
}
@@ -1168,13 +1180,13 @@ func TestNewEncrypter(t *testing.T) {
fh, err := c.newEncrypter(z, nil)
assert.NoError(t, err)
assert.Equal(t, nonce{0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.nonce)
assert.Equal(t, []byte{'R', 'C', 'L', 'O', 'N', 'E', 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, fh.buf[:32])
assert.Equal(t, []byte{'R', 'C', 'L', 'O', 'N', 'E', 0x00, 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18}, (*fh.buf)[:32])
// Test error path
c.cryptoRand = bytes.NewBufferString("123456789abcdefghijklmn")
fh, err = c.newEncrypter(z, nil)
assert.Nil(t, fh)
assert.Error(t, err, "short read of nonce")
assert.EqualError(t, err, "short read of nonce: EOF")
}
// Test the stream returning 0, io.ErrUnexpectedEOF - this used to
@@ -1187,7 +1199,7 @@ func TestNewEncrypterErrUnexpectedEOF(t *testing.T) {
fh, err := c.newEncrypter(in, nil)
assert.NoError(t, err)
n, err := io.CopyN(ioutil.Discard, fh, 1e6)
n, err := io.CopyN(io.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(32), n)
}
@@ -1225,7 +1237,7 @@ func TestNewDecrypter(t *testing.T) {
cd := newCloseDetector(bytes.NewBuffer(file0[:i]))
fh, err = c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, ErrorEncryptedFileTooShort.Error())
assert.EqualError(t, err, ErrorEncryptedFileTooShort.Error())
assert.Equal(t, 1, cd.closed)
}
@@ -1233,7 +1245,7 @@ func TestNewDecrypter(t *testing.T) {
cd = newCloseDetector(er)
fh, err = c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, "potato")
assert.EqualError(t, err, "potato")
assert.Equal(t, 1, cd.closed)
// bad magic
@@ -1244,7 +1256,7 @@ func TestNewDecrypter(t *testing.T) {
cd := newCloseDetector(bytes.NewBuffer(file0copy))
fh, err := c.newDecrypter(cd)
assert.Nil(t, fh)
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
assert.EqualError(t, err, ErrorEncryptedBadMagic.Error())
file0copy[i] ^= 0x1
assert.Equal(t, 1, cd.closed)
}
@@ -1257,12 +1269,12 @@ func TestNewDecrypterErrUnexpectedEOF(t *testing.T) {
in2 := &readers.ErrorReader{Err: io.ErrUnexpectedEOF}
in1 := bytes.NewBuffer(file16)
in := ioutil.NopCloser(io.MultiReader(in1, in2))
in := io.NopCloser(io.MultiReader(in1, in2))
fh, err := c.newDecrypter(in)
assert.NoError(t, err)
n, err := io.CopyN(ioutil.Discard, fh, 1e6)
n, err := io.CopyN(io.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(16), n)
}
@@ -1274,14 +1286,14 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
// Make random data
const dataSize = 150000
plaintext, err := ioutil.ReadAll(newRandomSource(dataSize))
plaintext, err := io.ReadAll(newRandomSource(dataSize))
assert.NoError(t, err)
// Encrypt the data
buf := bytes.NewBuffer(plaintext)
encrypted, err := c.EncryptData(buf)
assert.NoError(t, err)
ciphertext, err := ioutil.ReadAll(encrypted)
ciphertext, err := io.ReadAll(encrypted)
assert.NoError(t, err)
trials := []int{0, 1, 2, 3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33, 63, 64, 65,
@@ -1295,12 +1307,9 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
open := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
end := len(ciphertext)
if underlyingLimit >= 0 {
end = int(underlyingOffset + underlyingLimit)
if end > len(ciphertext) {
end = len(ciphertext)
}
end = min(int(underlyingOffset+underlyingLimit), len(ciphertext))
}
reader = ioutil.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end]))
reader = io.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end]))
return reader, nil
}
@@ -1478,7 +1487,7 @@ func TestDecrypterRead(t *testing.T) {
assert.NoError(t, err)
// Test truncating the file at each possible point
for i := 0; i < len(file16)-1; i++ {
for i := range len(file16) - 1 {
what := fmt.Sprintf("truncating to %d/%d", i, len(file16))
cd := newCloseDetector(bytes.NewBuffer(file16[:i]))
fh, err := c.newDecrypter(cd)
@@ -1490,14 +1499,16 @@ func TestDecrypterRead(t *testing.T) {
assert.NoError(t, err, what)
continue
}
_, err = ioutil.ReadAll(fh)
_, err = io.ReadAll(fh)
var expectedErr error
switch {
case i == fileHeaderSize:
// This would normally produce an error *except* on the first block
expectedErr = nil
case i <= fileHeaderSize+blockHeaderSize:
expectedErr = ErrorEncryptedFileBadHeader
default:
expectedErr = io.ErrUnexpectedEOF
expectedErr = ErrorEncryptedBadBlock
}
if expectedErr != nil {
assert.EqualError(t, err, expectedErr.Error(), what)
@@ -1514,8 +1525,8 @@ func TestDecrypterRead(t *testing.T) {
cd := newCloseDetector(in)
fh, err := c.newDecrypter(cd)
assert.NoError(t, err)
_, err = ioutil.ReadAll(fh)
assert.Error(t, err, "potato")
_, err = io.ReadAll(fh)
assert.EqualError(t, err, "potato")
assert.Equal(t, 0, cd.closed)
// Test corrupting the input
@@ -1524,17 +1535,28 @@ func TestDecrypterRead(t *testing.T) {
copy(file16copy, file16)
for i := range file16copy {
file16copy[i] ^= 0xFF
fh, err := c.newDecrypter(ioutil.NopCloser(bytes.NewBuffer(file16copy)))
fh, err := c.newDecrypter(io.NopCloser(bytes.NewBuffer(file16copy)))
if i < fileMagicSize {
assert.Error(t, err, ErrorEncryptedBadMagic.Error())
assert.EqualError(t, err, ErrorEncryptedBadMagic.Error())
assert.Nil(t, fh)
} else {
assert.NoError(t, err)
_, err = ioutil.ReadAll(fh)
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
_, err = io.ReadAll(fh)
assert.EqualError(t, err, ErrorEncryptedBadBlock.Error())
}
file16copy[i] ^= 0xFF
}
// Test that we can corrupt a byte and read zeroes if
// passBadBlocks is set
copy(file16copy, file16)
file16copy[len(file16copy)-1] ^= 0xFF
c.passBadBlocks = true
fh, err = c.newDecrypter(io.NopCloser(bytes.NewBuffer(file16copy)))
assert.NoError(t, err)
buf, err := io.ReadAll(fh)
assert.NoError(t, err)
assert.Equal(t, make([]byte, 16), buf)
}
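The recovery path exercised above is opt-in via the new cipher setter. A minimal sketch, with constructor arguments as used in these tests:

c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
c.setPassBadBlocks(true)
// A corrupted block in the ciphertext now decrypts to zero bytes
// instead of returning ErrorEncryptedBadBlock, so the undamaged
// remainder of the file can still be recovered.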
func TestDecrypterClose(t *testing.T) {
@@ -1555,7 +1577,7 @@ func TestDecrypterClose(t *testing.T) {
// double close
err = fh.Close()
assert.Error(t, err, ErrorFileClosed.Error())
assert.EqualError(t, err, ErrorFileClosed.Error())
assert.Equal(t, 1, cd.closed)
// try again reading the file this time
@@ -1565,7 +1587,7 @@ func TestDecrypterClose(t *testing.T) {
assert.Equal(t, 0, cd.closed)
// close after reading
out, err := ioutil.ReadAll(fh)
out, err := io.ReadAll(fh)
assert.NoError(t, err)
assert.Equal(t, []byte{1}, out)
assert.Equal(t, io.EOF, fh.err)
@@ -1582,8 +1604,6 @@ func TestPutGetBlock(t *testing.T) {
block := c.getBlock()
c.putBlock(block)
c.putBlock(block)
assert.Panics(t, func() { c.putBlock(block[:len(block)-1]) })
}
func TestKey(t *testing.T) {


@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
)
// Globals
@@ -48,7 +49,7 @@ func init() {
Help: "Very simple filename obfuscation.",
}, {
Value: "off",
Help: "Don't encrypt the file names.\nAdds a \".bin\" extension only.",
Help: "Don't encrypt the file names.\nAdds a \".bin\", or \"suffix\" extension only.",
},
},
}, {
@@ -79,7 +80,9 @@ NB If filename_encryption is "off" then this option will do nothing.`,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different crypt configs.
Help: `Deprecated: use --server-side-across-configs instead.
Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.
@@ -119,6 +122,25 @@ names, or for debugging purposes.`,
Help: "Encrypt file data.",
},
},
}, {
Name: "pass_bad_blocks",
Help: `If set this will pass bad blocks through as all 0.
This should not be set in normal operation, it should only be set if
trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default: false,
Advanced: true,
}, {
Name: "strict_names",
Help: `If set, this will raise an error when crypt comes across a filename that can't be decrypted.
(By default, rclone will just log a NOTICE and continue as normal.)
This can happen if encrypted and unencrypted files are stored in the same
directory (which is not recommended.) It may also indicate a more serious
problem that should be investigated.`,
Default: false,
Advanced: true,
}, {
Name: "filename_encoding",
Help: `How to encode the encrypted filename to text string.
@@ -138,10 +160,18 @@ length and if it's case sensitive.`,
},
{
Value: "base32768",
Help: "Encode using base32768. Suitable if your remote counts UTF-16 or\nUnicode codepoint instead of UTF-8 byte length. (Eg. Onedrive)",
Help: "Encode using base32768. Suitable if your remote counts UTF-16 or\nUnicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox)",
},
},
Advanced: true,
}, {
Name: "suffix",
Help: `If this is set it will override the default suffix of ".bin".
Setting suffix to "none" will result in an empty suffix. This may be useful
when the path length is critical.`,
Default: ".bin",
Advanced: true,
}},
})
}
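A minimal sketch of the new suffix option at the cipher level, mirroring the name-transform tests earlier in this diff (constructor arguments as used there):

c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
c.setEncryptedSuffix(".jpg")      // override the default ".bin"
_ = c.EncryptFileName("1/12/123") // -> "1/12/123.jpg"
c.setEncryptedSuffix("none")      // "none" selects an empty suffix
_ = c.EncryptFileName("1/12/123") // -> "1/12/123"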
@@ -174,6 +204,8 @@ func newCipherForConfig(opt *Options) (*Cipher, error) {
if err != nil {
return nil, fmt.Errorf("failed to make cipher: %w", err)
}
cipher.setEncryptedSuffix(opt.Suffix)
cipher.setPassBadBlocks(opt.PassBadBlocks)
return cipher, nil
}
@@ -232,23 +264,39 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
cipher: cipher,
}
cache.PinUntilFinalized(f.Fs, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -262,7 +310,10 @@ type Options struct {
Password2 string `config:"password2"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ShowMapping bool `config:"show_mapping"`
PassBadBlocks bool `config:"pass_bad_blocks"`
FilenameEncoding string `config:"filename_encoding"`
Suffix string `config:"suffix"`
StrictNames bool `config:"strict_names"`
}
// Fs represents a wrapped fs.Fs
@@ -297,45 +348,64 @@ func (f *Fs) String() string {
}
// Encrypt an object file name to entries.
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) error {
remote := obj.Remote()
decryptedRemote, err := f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable file name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable file name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
return nil
}
// Encrypt a directory file name to entries.
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) {
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) error {
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable dir name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable dir name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(ctx, dir))
return nil
}
// Encrypt some directory entries. This alters entries returning it as newEntries.
func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
newEntries = entries[:0] // in place filter
errors := 0
var firsterr error
for _, entry := range entries {
switch x := entry.(type) {
case fs.Object:
f.add(&newEntries, x)
err = f.add(&newEntries, x)
case fs.Directory:
f.addDir(ctx, &newEntries, x)
err = f.addDir(ctx, &newEntries, x)
default:
return nil, fmt.Errorf("unknown object type %T", entry)
}
if err != nil {
errors++
if firsterr == nil {
firsterr = err
}
}
}
if firsterr != nil {
return nil, fmt.Errorf("there were %v undecryptable name errors. first error: %v", errors, firsterr)
}
return newEntries, nil
}
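The aggregation pattern above is easy to lift out on its own. A self-contained sketch; aggregateNames and decrypt are hypothetical names used for illustration, not part of this change:

// aggregateNames counts every failure but reports only the first in
// detail, matching the error produced by encryptEntries above.
func aggregateNames(names []string, decrypt func(string) error) error {
	errors := 0
	var firsterr error
	for _, name := range names {
		if err := decrypt(name); err != nil {
			errors++
			if firsterr == nil {
				firsterr = err
			}
		}
	}
	if firsterr != nil {
		return fmt.Errorf("there were %v undecryptable name errors. first error: %v", errors, firsterr)
	}
	return nil
}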
@@ -350,11 +420,40 @@ func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntr
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, f.cipher.EncryptDirName(dir))
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.encryptEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return f.encryptEntries(ctx, entries)
listP := f.Fs.Features().ListP
encryptedDir := f.cipher.EncryptDirName(dir)
if listP == nil {
entries, err := f.Fs.List(ctx, encryptedDir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, encryptedDir, wrappedCallback)
}
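The ListP contract is easiest to see from the caller's side. A minimal sketch, assuming f is the crypt Fs above and noting that fs.ListRCallback is func(fs.DirEntries) error:

err := f.ListP(ctx, "", func(entries fs.DirEntries) error {
	for _, entry := range entries {
		fmt.Println(entry.Remote()) // names arrive already decrypted
	}
	return nil // a non-nil error here stops the listing immediately
})
fmt.Println("listing finished:", err)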
// ListR lists the objects and directories of the Fs starting
@@ -396,6 +495,8 @@ type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ..
// put implements Put or PutStream
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
ci := fs.GetConfig(ctx)
if f.opt.NoDataEncryption {
o, err := put(ctx, in, f.newObjectInfo(src, nonce{}), options...)
if err == nil && o != nil {
@@ -413,6 +514,9 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
// Find a hash the destination supports to compute a hash of
// the encrypted data
ht := f.Fs.Hashes().GetOne()
if ci.IgnoreChecksum {
ht = hash.None
}
var hasher *hash.MultiHasher
if ht != hash.None {
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
@@ -449,7 +553,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v crypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
@@ -484,6 +588,37 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir))
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
do := f.Fs.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, f.cipher.EncryptDirName(dir), metadata)
if err != nil {
return nil, err
}
var entries = make(fs.DirEntries, 0, 1)
err = f.addDir(ctx, &entries, newDir)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
do := f.Fs.Features().DirSetModTime
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, f.cipher.EncryptDirName(dir), modTime)
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@@ -725,7 +860,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
out[i] = fs.NewDirWrapper(f.cipher.EncryptDirName(dir.Remote()), dir)
}
return do(ctx, out)
}
@@ -822,7 +957,7 @@ Usage Example:
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "decode":
out := make([]string, 0, len(arg))
@@ -961,14 +1096,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(ctx, dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
newDir.SetRemote(decryptedRemote)
remote = decryptedRemote
}
newDir := fs.NewDirWrapper(remote, dir)
return newDir
}
@@ -1047,10 +1182,11 @@ func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) {
// Get the underlying object if there is one
if srcObj, ok = o.ObjectInfo.(fs.Object); ok {
// Prefer direct interface assertion
} else if do, ok := o.ObjectInfo.(fs.ObjectUnWrapper); ok {
// Otherwise likely is an operations.OverrideRemote
} else if do, ok := o.ObjectInfo.(*fs.OverrideRemote); ok {
// Unwrap if it is an operations.OverrideRemote
srcObj = do.UnWrap()
} else {
// Otherwise don't unwrap any further
return "", nil
}
// if this is wrapping a local object then we work out the hash
@@ -1145,6 +1281,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
@@ -1170,6 +1317,8 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)


@@ -17,41 +17,28 @@ import (
"github.com/stretchr/testify/require"
)
type testWrapper struct {
fs.ObjectInfo
}
// UnWrap returns the Object that this Object is wrapping or nil if it
// isn't wrapping anything
func (o testWrapper) UnWrap() fs.Object {
if o, ok := o.ObjectInfo.(fs.Object); ok {
return o
}
return nil
}
// Create a temporary local fs to upload things from
func makeTempLocalFs(t *testing.T) (localFs fs.Fs, cleanup func()) {
func makeTempLocalFs(t *testing.T) (localFs fs.Fs) {
localFs, err := fs.TemporaryLocalFs(context.Background())
require.NoError(t, err)
cleanup = func() {
t.Cleanup(func() {
require.NoError(t, localFs.Rmdir(context.Background(), ""))
}
return localFs, cleanup
})
return localFs
}
// Upload a file to a remote
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object, cleanup func()) {
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object) {
inBuf := bytes.NewBufferString(contents)
t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC)
upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil)
obj, err := f.Put(context.Background(), inBuf, upSrc)
require.NoError(t, err)
cleanup = func() {
t.Cleanup(func() {
require.NoError(t, obj.Remove(context.Background()))
}
return obj, cleanup
})
return obj
}
// Test the ObjectInfo
@@ -65,11 +52,9 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
path = "_wrap"
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
localFs := makeTempLocalFs(t)
obj, cleanupObj := uploadFile(t, localFs, path, contents)
defer cleanupObj()
obj := uploadFile(t, localFs, path, contents)
// encrypt the data
inBuf := bytes.NewBufferString(contents)
@@ -83,7 +68,7 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
var oi fs.ObjectInfo = obj
if wrap {
// wrap the object in an fs.ObjectUnwrapper if required
oi = testWrapper{oi}
oi = fs.NewOverrideRemote(oi, "new_remote")
}
// wrap the object in a crypt for upload using the nonce we
@@ -116,16 +101,13 @@ func testComputeHash(t *testing.T, f *Fs) {
t.Skipf("%v: does not support hashes", f.Fs)
}
localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
localFs := makeTempLocalFs(t)
// Upload a file to localFs as a test object
localObj, cleanupLocalObj := uploadFile(t, localFs, path, contents)
defer cleanupLocalObj()
localObj := uploadFile(t, localFs, path, contents)
// Upload the same data to the remote Fs also
remoteObj, cleanupRemoteObj := uploadFile(t, f, path, contents)
defer cleanupRemoteObj()
remoteObj := uploadFile(t, f, path, contents)
// Calculate the expected Hash of the remote object
computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType)


@@ -24,7 +24,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*crypt.Object)(nil),
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
@@ -45,7 +45,7 @@ func TestStandardBase32(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
@@ -67,7 +67,7 @@ func TestStandardBase64(t *testing.T) {
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name, Key: "filename_encoding", Value: "base64"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
@@ -89,7 +89,7 @@ func TestStandardBase32768(t *testing.T) {
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name, Key: "filename_encoding", Value: "base32768"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
@@ -111,7 +111,7 @@ func TestOff(t *testing.T) {
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "off"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
@@ -137,7 +137,7 @@ func TestObfuscate(t *testing.T) {
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
@@ -164,7 +164,7 @@ func TestNoDataObfuscate(t *testing.T) {
{Name: name, Key: "no_data_encryption", Value: "true"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableFsMethods: []string{"OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})


@@ -25,7 +25,7 @@ func Pad(n int, buf []byte) []byte {
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
for range padding {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
@@ -54,7 +54,7 @@ func Unpad(n int, buf []byte) ([]byte, error) {
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
for i := range padding {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}


@@ -0,0 +1,38 @@
// Type definitions specific to Dataverse
package api
// DataverseDatasetResponse is returned by the Dataverse dataset API
type DataverseDatasetResponse struct {
Status string `json:"status"`
Data DataverseDataset `json:"data"`
}
// DataverseDataset is the representation of a dataset
type DataverseDataset struct {
LatestVersion DataverseDatasetVersion `json:"latestVersion"`
}
// DataverseDatasetVersion is the representation of a dataset version
type DataverseDatasetVersion struct {
LastUpdateTime string `json:"lastUpdateTime"`
Files []DataverseFile `json:"files"`
}
// DataverseFile is the representation of a file found in a dataset
type DataverseFile struct {
DirectoryLabel string `json:"directoryLabel"`
DataFile DataverseDataFile `json:"dataFile"`
}
// DataverseDataFile represents file metadata details
type DataverseDataFile struct {
ID int64 `json:"id"`
Filename string `json:"filename"`
ContentType string `json:"contentType"`
FileSize int64 `json:"filesize"`
OriginalFileFormat string `json:"originalFileFormat"`
OriginalFileSize int64 `json:"originalFileSize"`
OriginalFileName string `json:"originalFileName"`
MD5 string `json:"md5"`
}


@@ -0,0 +1,33 @@
// Type definitions specific to InvenioRDM
package api
// InvenioRecordResponse is the representation of a record stored in InvenioRDM
type InvenioRecordResponse struct {
Links InvenioRecordResponseLinks `json:"links"`
}
// InvenioRecordResponseLinks represents a record's links
type InvenioRecordResponseLinks struct {
Self string `json:"self"`
}
// InvenioFilesResponse is the representation of a record's files
type InvenioFilesResponse struct {
Entries []InvenioFilesResponseEntry `json:"entries"`
}
// InvenioFilesResponseEntry is the representation of a file entry
type InvenioFilesResponseEntry struct {
Key string `json:"key"`
Checksum string `json:"checksum"`
Size int64 `json:"size"`
Updated string `json:"updated"`
MimeType string `json:"mimetype"`
Links InvenioFilesResponseEntryLinks `json:"links"`
}
// InvenioFilesResponseEntryLinks represents file links details
type InvenioFilesResponseEntryLinks struct {
Content string `json:"content"`
}

backend/doi/api/types.go (new file)

@@ -0,0 +1,26 @@
// Package api has general type definitions for doi
package api
// DoiResolverResponse is returned by the DOI resolver API
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
type DoiResolverResponse struct {
ResponseCode int `json:"responseCode"`
Handle string `json:"handle"`
Values []DoiResolverResponseValue `json:"values"`
}
// DoiResolverResponseValue is a single handle record value
type DoiResolverResponseValue struct {
Index int `json:"index"`
Type string `json:"type"`
Data DoiResolverResponseValueData `json:"data"`
TTL int `json:"ttl"`
Timestamp string `json:"timestamp"`
}
// DoiResolverResponseValueData is the data held in a handle value
type DoiResolverResponseValueData struct {
Format string `json:"format"`
Value any `json:"value"`
}

backend/doi/dataverse.go (new file)

@@ -0,0 +1,112 @@
// Implementation for Dataverse
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// Returns true if resolvedURL is likely a DOI hosted on a Dataverse installation
func activateDataverse(resolvedURL *url.URL) (isActive bool) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
return persistentID != ""
}
// Resolve the main API endpoint for a DOI hosted on a Dataverse installation
func resolveDataverseEndpoint(resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
query := url.Values{}
query.Add("persistentId", persistentID)
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/datasets/:persistentId/", RawQuery: query.Encode()})
return Dataverse, endpointURL, nil
}
// dataverseProvider implements the doiProvider interface for Dataverse installations
type dataverseProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (dp *dataverseProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := dp.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := dp.f.endpoint
var res *http.Response
var result api.DataverseDatasetResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
Parameters: filesURL.Query(),
}
err = dp.f.pacer.Call(func() (bool, error) {
res, err = dp.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
modTime, modTimeErr := time.Parse(time.RFC3339, result.Data.LatestVersion.LastUpdateTime)
if modTimeErr != nil {
fs.Logf(dp.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
for _, file := range result.Data.LatestVersion.Files {
contentURLPath := fmt.Sprintf("/api/access/datafile/%d", file.DataFile.ID)
query := url.Values{}
query.Add("format", "original")
contentURL := dp.f.endpoint.ResolveReference(&url.URL{Path: contentURLPath, RawQuery: query.Encode()})
entry := &Object{
fs: dp.f,
remote: path.Join(file.DirectoryLabel, file.DataFile.Filename),
contentURL: contentURL.String(),
size: file.DataFile.FileSize,
modTime: modTime,
md5: file.DataFile.MD5,
contentType: file.DataFile.ContentType,
}
if file.DataFile.OriginalFileName != "" {
entry.remote = path.Join(file.DirectoryLabel, file.DataFile.OriginalFileName)
entry.size = file.DataFile.OriginalFileSize
entry.contentType = file.DataFile.OriginalFileFormat
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
dp.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newDataverseProvider(f *Fs) doiProvider {
return &dataverseProvider{
f: f,
}
}

backend/doi/doi.go (new file)

@@ -0,0 +1,649 @@
// Package doi provides a filesystem interface for digital objects identified by DOIs.
//
// See: https://www.doi.org/the-identifier/what-is-a-doi/
package doi
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/cache"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
// the URL of the DOI resolver
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
doiResolverAPIURL = "https://doi.org/api"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
errorReadOnly = errors.New("doi remotes are read only")
timeUnset = time.Unix(0, 0)
)
func init() {
fsi := &fs.RegInfo{
Name: "doi",
Description: "DOI datasets",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "doi",
Help: "The DOI or the doi.org URL.",
Required: true,
}, {
Name: fs.ConfigProvider,
Help: `DOI provider.
The DOI provider can be set when rclone does not automatically recognize a supported DOI provider.`,
Examples: []fs.OptionExample{
{
Value: "auto",
Help: "Auto-detect provider",
},
{
Value: string(Zenodo),
Help: "Zenodo",
}, {
Value: string(Dataverse),
Help: "Dataverse",
}, {
Value: string(Invenio),
Help: "Invenio",
}},
Required: false,
Advanced: true,
}, {
Name: "doi_resolver_api_url",
Help: `The URL of the DOI resolver API to use.
The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".`,
Required: false,
Advanced: true,
}},
}
fs.Register(fsi)
}
// Provider defines the type of provider hosting the DOI
type Provider string
const (
// Zenodo provider, see https://zenodo.org
Zenodo Provider = "zenodo"
// Dataverse provider, see https://dataverse.harvard.edu
Dataverse Provider = "dataverse"
// Invenio provider, see https://inveniordm.docs.cern.ch
Invenio Provider = "invenio"
)
// Options defines the configuration for this backend
type Options struct {
Doi string `config:"doi"` // The DOI, a digital identifier of an object, usually a dataset
Provider string `config:"provider"` // The DOI provider
DoiResolverAPIURL string `config:"doi_resolver_api_url"` // The URL of the DOI resolver API to use.
}
// Fs stores the interface to the remote HTTP files
type Fs struct {
name string // name of this remote
root string // the path we are working on
provider Provider // the DOI provider
doiProvider doiProvider // the interface used to interact with the DOI provider
features *fs.Features // optional features
opt Options // options for this backend
ci *fs.ConfigInfo // global config
endpoint *url.URL // the main API endpoint for this remote
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the server
pacer *fs.Pacer // pacer for API calls
cache *cache.Cache // a cache for the remote metadata
}
// Object is a remote object that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs // what this object is part of
remote string // the remote path
contentURL string // the URL where the contents of the file can be downloaded
size int64 // size of the object
modTime time.Time // modification time of the object
contentType string // content type of the object
md5 string // MD5 hash of the object content
}
// doiProvider is the interface used to list objects in a DOI
type doiProvider interface {
// ListEntries returns the full list of entries found at the remote, regardless of root
ListEntries(ctx context.Context) (entries []*Object, err error)
}
// Parse the input string as a DOI
// Examples:
// 10.1000/182 -> 10.1000/182
// https://doi.org/10.1000/182 -> 10.1000/182
// doi:10.1000/182 -> 10.1000/182
func parseDoi(doi string) string {
doiURL, err := url.Parse(doi)
if err != nil {
return doi
}
if doiURL.Scheme == "doi" {
return strings.TrimLeft(strings.TrimPrefix(doi, "doi:"), "/")
}
if strings.HasSuffix(doiURL.Hostname(), "doi.org") {
return strings.TrimLeft(doiURL.Path, "/")
}
return doi
}
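For reference, the accepted inputs normalize through parseDoi like this (mirroring TestParseDoi further down in this diff):

for _, in := range []string{
	"10.1000/182",
	"https://doi.org/10.1000/182",
	"doi:10.1000/182",
	"doi://10.1000/182",
} {
	fmt.Println(parseDoi(in)) // each prints "10.1000/182"
}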
// Resolve a DOI to a URL
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
func resolveDoiURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (doiURL *url.URL, err error) {
resolverURL := opt.DoiResolverAPIURL
if resolverURL == "" {
resolverURL = doiResolverAPIURL
}
var result api.DoiResolverResponse
params := url.Values{}
params.Add("index", "1")
opts := rest.Opts{
Method: "GET",
RootURL: resolverURL,
Path: "/handles/" + opt.Doi,
Parameters: params,
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.ResponseCode != 1 {
return nil, fmt.Errorf("could not resolve DOI (error code %d)", result.ResponseCode)
}
resolvedURLStr := ""
for _, value := range result.Values {
if value.Type == "URL" && value.Data.Format == "string" {
valueStr, ok := value.Data.Value.(string)
if !ok {
return nil, fmt.Errorf("could not resolve DOI (incorrect response format)")
}
resolvedURLStr = valueStr
}
}
resolvedURL, err := url.Parse(resolvedURLStr)
if err != nil {
return nil, err
}
return resolvedURL, nil
}
// Resolve the passed configuration into a provider and endpoint
func resolveEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (provider Provider, endpoint *url.URL, err error) {
resolvedURL, err := resolveDoiURL(ctx, srv, pacer, opt)
if err != nil {
return "", nil, err
}
switch opt.Provider {
case string(Dataverse):
return resolveDataverseEndpoint(resolvedURL)
case string(Invenio):
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
case string(Zenodo):
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
hostname := strings.ToLower(resolvedURL.Hostname())
if hostname == "dataverse.harvard.edu" || activateDataverse(resolvedURL) {
return resolveDataverseEndpoint(resolvedURL)
}
if hostname == "zenodo.org" || strings.HasSuffix(hostname, ".zenodo.org") {
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
if activateInvenio(ctx, srv, pacer, resolvedURL) {
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
}
return "", nil, fmt.Errorf("provider '%s' is not supported", resolvedURL.Hostname())
}
// Make the http connection from the passed options
func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err error) {
provider, endpoint, err := resolveEndpoint(ctx, f.srv, f.pacer, opt)
if err != nil {
return false, err
}
// Update f with the new parameters
f.srv.SetRoot(endpoint.ResolveReference(&url.URL{Path: "/"}).String())
f.endpoint = endpoint
f.endpointURL = endpoint.String()
f.provider = provider
f.opt.Provider = string(provider)
switch f.provider {
case Dataverse:
f.doiProvider = newDataverseProvider(f)
case Invenio, Zenodo:
f.doiProvider = newInvenioProvider(f)
default:
return false, fmt.Errorf("provider type '%s' not supported", f.provider)
}
// Determine if the root is a file
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return false, err
}
for _, entry := range entries {
if entry.remote == f.root {
isFile = true
break
}
}
return isFile, nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this res and err
// deserve to be retried. It returns the err as a convenience.
func shouldRetry(ctx context.Context, res *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(res, retryErrorCodes), err
}
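Every request in this backend runs through the same pacer/retry wrapper. A condensed sketch of the pattern; getHandle is a hypothetical helper, not part of this change:

func getHandle(ctx context.Context, f *Fs, doi string) (result api.DoiResolverResponse, err error) {
	opts := rest.Opts{Method: "GET", Path: "/handles/" + doi}
	err = f.pacer.Call(func() (bool, error) {
		res, err := f.srv.CallJSON(ctx, &opts, nil, &result)
		// shouldRetry consults retryErrorCodes and context cancellation
		return shouldRetry(ctx, res, err)
	})
	return result, err
}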
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
root = strings.Trim(root, "/")
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
opt.Doi = parseDoi(opt.Doi)
client := fshttp.NewClient(ctx)
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: ci,
srv: rest.NewClient(client),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
cache: cache.New(),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
isFile, err := f.httpConnection(ctx, opt)
if err != nil {
return nil, err
}
if isFile {
// return an error with an fs which points to the parent
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.root = newRoot
return f, fs.ErrorIsFile
}
return f, nil
}
// Name returns the configured name of the file system
func (f *Fs) Name() string {
return f.name
}
// Root returns the root for the filesystem
func (f *Fs) Root() string {
return f.root
}
// String returns the URL for the filesystem
func (f *Fs) String() string {
return fmt.Sprintf("DOI %s", f.opt.Doi)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision is the remote http file system's modtime precision, which we have no way of knowing. We estimate at 1s
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Hashes returns hash.MD5 since DOI providers report MD5 checksums for files
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// Remove a remote http file object
func (o *Object) Remove(ctx context.Context) error {
return errorReadOnly
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// NewObject creates a new remote http file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, err
}
remoteFullPath := remote
if f.root != "" {
remoteFullPath = path.Join(f.root, remote)
}
for _, entry := range entries {
if entry.Remote() == remoteFullPath {
return entry, nil
}
}
return nil, fs.ErrorObjectNotFound
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
fileEntries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, fmt.Errorf("error listing %q: %w", dir, err)
}
fullDir := path.Join(f.root, dir)
if fullDir != "" {
fullDir += "/"
}
dirPaths := map[string]bool{}
for _, entry := range fileEntries {
// First, filter out files not in `fullDir`
if !strings.HasPrefix(entry.remote, fullDir) {
continue
}
// Then, find entries in subfolders
remotePath := entry.remote
if fullDir != "" {
remotePath = strings.TrimLeft(strings.TrimPrefix(remotePath, fullDir), "/")
}
parts := strings.SplitN(remotePath, "/", 2)
if len(parts) == 1 {
newEntry := *entry
newEntry.remote = path.Join(dir, remotePath)
entries = append(entries, &newEntry)
} else {
dirPaths[path.Join(dir, parts[0])] = true
}
}
for dirPath := range dirPaths {
entry := fs.NewDir(dirPath, time.Time{})
entries = append(entries, entry)
}
return entries, nil
}
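Since DOI providers return a flat list of file paths, List synthesizes directory entries from path prefixes. A self-contained sketch of that prefix-split logic under assumed sample data:

package main

import (
	"fmt"
	"strings"
)

func main() {
	files := []string{"README.md", "data/raw/x.csv", "data/raw/y.csv"}
	fullDir := "" // listing the root; for a subdirectory this would be "sub/"
	dirs := map[string]bool{}
	for _, remote := range files {
		if !strings.HasPrefix(remote, fullDir) {
			continue // not under the directory being listed
		}
		rel := strings.TrimPrefix(remote, fullDir)
		parts := strings.SplitN(rel, "/", 2)
		if len(parts) == 1 {
			fmt.Println("file:", rel) // direct child: a file entry
		} else {
			dirs[parts[0]] = true // nested: record its top-level dir once
		}
	}
	for d := range dirs {
		fmt.Println("dir:", d) // prints: dir: data
	}
}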
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// Fs is the filesystem this remote http file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
}
// String returns the remote path of the object
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the name of the remote HTTP file, relative to the fs root
func (o *Object) Remote() string {
return o.remote
}
// Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size in bytes of the remote http file
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the remote http file
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification and access time to the specified time
//
// it also updates the info field
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return errorReadOnly
}
// Storable returns whether the remote http file is a regular file (not a directory, symbolic link, block device, character device, named pipe, etc.)
func (o *Object) Storable() bool {
return true
}
// Open a remote http file object for reading. Seek is supported
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
opts := rest.Opts{
Method: "GET",
RootURL: o.contentURL,
Options: options,
}
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
// Handle non-compliant redirects
if res.Header.Get("Location") != "" {
newURL, err := res.Location()
if err == nil {
opts.RootURL = newURL.String()
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
}
}
return res.Body, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return errorReadOnly
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}
var commandHelp = []fs.CommandHelp{{
Name: "metadata",
Short: "Show metadata about the DOI.",
Long: `This command returns a JSON object with some information about the DOI.
rclone backend metadata doi:
It returns a JSON object representing metadata about the DOI.
`,
}, {
Name: "set",
Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters
for a running doi backend.
Usage Examples:
rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
The option keys are named as they are in the config file.
This rebuilds the connection to the doi backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
`,
}}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "metadata":
return f.ShowMetadata(ctx)
case "set":
newOpt := f.opt
err := configstruct.Set(configmap.Simple(opt), &newOpt)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
_, err = f.httpConnection(ctx, &newOpt)
if err != nil {
return nil, fmt.Errorf("updating session: %w", err)
}
f.opt = newOpt
keys := []string{}
for k := range opt {
keys = append(keys, k)
}
fs.Logf(f, "Updated config values: %s", strings.Join(keys, ", "))
return nil, nil
default:
return nil, fs.ErrorCommandNotFound
}
}
// ShowMetadata returns some metadata about the corresponding DOI
func (f *Fs) ShowMetadata(ctx context.Context) (metadata any, err error) {
doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
if err != nil {
return nil, err
}
info := map[string]any{}
info["DOI"] = f.opt.Doi
info["URL"] = doiURL.String()
info["metadataURL"] = f.endpointURL
info["provider"] = f.provider
return info, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Commander = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)


@@ -0,0 +1,260 @@
package doi
import (
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"net/url"
"sort"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/hash"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var remoteName = "TestDoi"
func TestParseDoi(t *testing.T) {
// 10.1000/182 -> 10.1000/182
doi := "10.1000/182"
parsed := parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://doi.org/10.1000/182 -> 10.1000/182
doi = "https://doi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://dx.doi.org/10.1000/182 -> 10.1000/182
doi = "https://dxdoi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi:10.1000/182 -> 10.1000/182
doi = "doi:10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi://10.1000/182 -> 10.1000/182
doi = "doi://10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
}
// prepareMockDoiResolverServer prepares a test server to resolve DOIs
func prepareMockDoiResolverServer(t *testing.T, resolvedURL string) (doiResolverAPIURL string) {
mux := http.NewServeMux()
// Handle requests for resolving DOIs
mux.HandleFunc("GET /api/handles/{handle...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are resolving a DOI
handle := strings.TrimPrefix(r.URL.Path, "/api/handles/")
assert.NotEmpty(t, handle)
index := r.URL.Query().Get("index")
assert.Equal(t, "1", index)
// Return the most basic response
result := api.DoiResolverResponse{
ResponseCode: 1,
Handle: handle,
Values: []api.DoiResolverResponseValue{
{
Index: 1,
Type: "URL",
Data: api.DoiResolverResponseValueData{
Format: "string",
Value: resolvedURL,
},
},
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts.URL + "/api"
}
func md5Sum(text string) string {
hash := md5.Sum([]byte(text))
return hex.EncodeToString(hash[:])
}
// prepareMockZenodoServer prepares a test server that mocks Zenodo.org
func prepareMockZenodoServer(t *testing.T, files map[string]string) *httptest.Server {
mux := http.NewServeMux()
// Handle requests for a single record
mux.HandleFunc("GET /api/records/{recordID...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning data about a single record
recordID := strings.TrimPrefix(r.URL.Path, "/api/records/")
assert.NotEmpty(t, recordID)
// Return the most basic response
selfURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
selfURL = selfURL.JoinPath(r.URL.String())
result := api.InvenioRecordResponse{
Links: api.InvenioRecordResponseLinks{
Self: selfURL.String(),
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for listing files in a record
mux.HandleFunc("GET /api/records/{record}/files", func(w http.ResponseWriter, r *http.Request) {
// Return the most basic response
filesBaseURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
filesBaseURL = filesBaseURL.JoinPath("/api/files/")
entries := []api.InvenioFilesResponseEntry{}
for filename, contents := range files {
entries = append(entries,
api.InvenioFilesResponseEntry{
Key: filename,
Checksum: md5Sum(contents),
Size: int64(len(contents)),
Updated: time.Now().UTC().Format(time.RFC3339),
MimeType: "text/plain; charset=utf-8",
Links: api.InvenioFilesResponseEntryLinks{
Content: filesBaseURL.JoinPath(filename).String(),
},
},
)
}
result := api.InvenioFilesResponse{
Entries: entries,
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for file contents
mux.HandleFunc("/api/files/{file}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning the contents of a file
filename := strings.TrimPrefix(r.URL.Path, "/api/files/")
assert.NotEmpty(t, filename)
contents, found := files[filename]
if !found {
w.WriteHeader(404)
return
}
// Return the most basic response
_, err := w.Write([]byte(contents))
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts
}
func TestZenodoRemote(t *testing.T) {
recordID := "2600782"
doi := "10.5281/zenodo.2600782"
// The files in the dataset
files := map[string]string{
"README.md": "This is a dataset.",
"data.txt": "Some data",
}
ts := prepareMockZenodoServer(t, files)
resolvedURL := ts.URL + "/record/" + recordID
doiResolverAPIURL := prepareMockDoiResolverServer(t, resolvedURL)
testConfig := configmap.Simple{
"type": "doi",
"doi": doi,
"provider": "zenodo",
"doi_resolver_api_url": doiResolverAPIURL,
}
f, err := NewFs(context.Background(), remoteName, "", testConfig)
require.NoError(t, err)
// Test listing the DOI files
entries, err := f.List(context.Background(), "")
require.NoError(t, err)
sort.Sort(entries)
require.Equal(t, len(files), len(entries))
e := entries[0]
assert.Equal(t, "README.md", e.Remote())
assert.Equal(t, int64(18), e.Size())
_, ok := e.(*Object)
assert.True(t, ok)
e = entries[1]
assert.Equal(t, "data.txt", e.Remote())
assert.Equal(t, int64(9), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
// Test reading the DOI files
o, err := f.NewObject(context.Background(), "README.md")
require.NoError(t, err)
assert.Equal(t, int64(18), o.Size())
md5Hash, err := o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "464352b1cab5240e44528a56fda33d9d", md5Hash)
fd, err := o.Open(context.Background())
require.NoError(t, err)
data, err := io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["README.md"]), data)
do, ok := o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
o, err = f.NewObject(context.Background(), "data.txt")
require.NoError(t, err)
assert.Equal(t, int64(9), o.Size())
md5Hash, err = o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "5b82f8bf4df2bfb0e66ccaa7306fd024", md5Hash)
fd, err = o.Open(context.Background())
require.NoError(t, err)
data, err = io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["data.txt"]), data)
do, ok = o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
}

backend/doi/doi_test.go Normal file

@@ -0,0 +1,16 @@
// Test DOI filesystem interface
package doi
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDoi:",
NilObject: (*Object)(nil),
})
}

backend/doi/invenio.go Normal file

@@ -0,0 +1,164 @@
// Implementation for InvenioRDM
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"regexp"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var invenioRecordRegex = regexp.MustCompile(`\/records?\/(.+)`)
// Returns true if resolvedURL is likely a DOI hosted on an InvenioRDM installation
func activateInvenio(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (isActive bool) {
_, _, err := resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
return err == nil
}
// Resolve the main API endpoint for a DOI hosted on an InvenioRDM installation
func resolveInvenioEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
var res *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err = srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
// First, attempt to grab the API URL from the headers
var linksetURL *url.URL
links := parseLinkHeader(res.Header.Get("Link"))
for _, link := range links {
if link.Rel == "linkset" && link.Type == "application/linkset+json" {
parsed, err := url.Parse(link.Href)
if err == nil {
linksetURL = parsed
break
}
}
}
if linksetURL != nil {
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, linksetURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "using linkset URL failed: %s", err.Error())
}
// If there is no linkset header, try to grab the record ID from the URL
recordID := ""
resURL := res.Request.URL
match := invenioRecordRegex.FindStringSubmatch(resURL.EscapedPath())
if match != nil {
recordID = match[1]
guessedURL := res.Request.URL.ResolveReference(&url.URL{
Path: "/api/records/" + recordID,
})
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, guessedURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "guessing the URL failed: %s", err.Error())
}
return "", nil, fmt.Errorf("could not resolve the Invenio API endpoint for '%s'", resolvedURL.String())
}
func checkInvenioAPIURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (endpoint *url.URL, err error) {
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.Links.Self == "" {
return nil, fmt.Errorf("could not parse API response from '%s'", resolvedURL.String())
}
return url.Parse(result.Links.Self)
}
// invenioProvider implements the doiProvider interface for InvenioRDM installations
type invenioProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (ip *invenioProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := ip.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := ip.f.endpoint.JoinPath("files")
var result api.InvenioFilesResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
}
err = ip.f.pacer.Call(func() (bool, error) {
res, err := ip.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
for _, file := range result.Entries {
modTime, modTimeErr := time.Parse(time.RFC3339, file.Updated)
if modTimeErr != nil {
fs.Logf(ip.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
entry := &Object{
fs: ip.f,
remote: file.Key,
contentURL: file.Links.Content,
size: file.Size,
modTime: modTime,
contentType: file.MimeType,
md5: strings.TrimPrefix(file.Checksum, "md5:"),
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
ip.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newInvenioProvider(f *Fs) doiProvider {
return &invenioProvider{
f: f,
}
}
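For illustration, the record-ID fallback used in resolveInvenioEndpoint above, isolated into a standalone sketch. The regexp is the one defined in this file; the landing-page path is made up.
package main

import (
	"fmt"
	"regexp"
)

var invenioRecordRegex = regexp.MustCompile(`\/records?\/(.+)`)

func main() {
	// A resolved landing-page path like Invenio produces (illustrative value)
	path := "/record/15063252"
	if match := invenioRecordRegex.FindStringSubmatch(path); match != nil {
		fmt.Println("record ID:", match[1])                       // record ID: 15063252
		fmt.Println("guessed endpoint: /api/records/" + match[1]) // guessed endpoint: /api/records/15063252
	}
}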


@@ -0,0 +1,75 @@
package doi
import (
"regexp"
"strings"
)
var linkRegex = regexp.MustCompile(`^<(.+)>$`)
var valueRegex = regexp.MustCompile(`^"(.+)"$`)
// headerLink represents a link as presented in HTTP headers
// MDN Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Link
type headerLink struct {
Href string
Rel string
Type string
Extras map[string]string
}
func parseLinkHeader(header string) (links []headerLink) {
for link := range strings.SplitSeq(header, ",") {
link = strings.TrimSpace(link)
parsed := parseLink(link)
if parsed != nil {
links = append(links, *parsed)
}
}
return links
}
func parseLink(link string) (parsedLink *headerLink) {
var parts []string
for part := range strings.SplitSeq(link, ";") {
parts = append(parts, strings.TrimSpace(part))
}
match := linkRegex.FindStringSubmatch(parts[0])
if match == nil {
return nil
}
result := &headerLink{
Href: match[1],
Extras: map[string]string{},
}
for _, keyValue := range parts[1:] {
parsed := parseKeyValue(keyValue)
if parsed != nil {
key, value := parsed[0], parsed[1]
switch strings.ToLower(key) {
case "rel":
result.Rel = value
case "type":
result.Type = value
default:
result.Extras[key] = value
}
}
}
return result
}
func parseKeyValue(keyValue string) []string {
parts := strings.SplitN(keyValue, "=", 2)
if parts[0] == "" || len(parts) < 2 {
return nil
}
match := valueRegex.FindStringSubmatch(parts[1])
if match != nil {
parts[1] = match[1]
return parts
}
return parts
}
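Illustrative use of the parser above, as a sketch within package doi (assumes fmt is imported; the header value is made up):
func exampleParseLinkHeader() {
	header := `<https://example.org/api/records/1>; rel="linkset"; type="application/linkset+json"`
	for _, l := range parseLinkHeader(header) {
		// prints: href=https://example.org/api/records/1 rel=linkset type=application/linkset+json
		fmt.Printf("href=%s rel=%s type=%s\n", l.Href, l.Rel, l.Type)
	}
}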


@@ -0,0 +1,44 @@
package doi
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestParseLinkHeader(t *testing.T) {
header := "<https://zenodo.org/api/records/15063252> ; rel=\"linkset\" ; type=\"application/linkset+json\""
links := parseLinkHeader(header)
expected := headerLink{
Href: "https://zenodo.org/api/records/15063252",
Rel: "linkset",
Type: "application/linkset+json",
Extras: map[string]string{},
}
assert.Contains(t, links, expected)
header = "<https://api.example.com/issues?page=2>; rel=\"prev\", <https://api.example.com/issues?page=4>; rel=\"next\", <https://api.example.com/issues?page=10>; rel=\"last\", <https://api.example.com/issues?page=1>; rel=\"first\""
links = parseLinkHeader(header)
expectedList := []headerLink{{
Href: "https://api.example.com/issues?page=2",
Rel: "prev",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=4",
Rel: "next",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=10",
Rel: "last",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=1",
Rel: "first",
Type: "",
Extras: map[string]string{},
}}
assert.Equal(t, expectedList, links)
}

backend/doi/zenodo.go Normal file

@@ -0,0 +1,47 @@
// Implementation for Zenodo
package doi
import (
"context"
"fmt"
"net/url"
"regexp"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var zenodoRecordRegex = regexp.MustCompile(`zenodo[.](.+)`)
// Resolve the main API endpoint for a DOI hosted on Zenodo
func resolveZenodoEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL, doi string) (provider Provider, endpoint *url.URL, err error) {
match := zenodoRecordRegex.FindStringSubmatch(doi)
if match == nil {
return "", nil, fmt.Errorf("could not derive API endpoint URL from '%s'", resolvedURL.String())
}
recordID := match[1]
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/records/" + recordID})
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: endpointURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
endpointURL, err = url.Parse(result.Links.Self)
if err != nil {
return "", nil, err
}
return Zenodo, endpointURL, nil
}
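The DOI-to-endpoint derivation above, reduced to a standalone sketch using the same regexp (the DOI value is the one from the Zenodo test earlier; everything else is illustrative):
package main

import (
	"fmt"
	"regexp"
)

var zenodoRecordRegex = regexp.MustCompile(`zenodo[.](.+)`)

func main() {
	doi := "10.5281/zenodo.2600782"
	if match := zenodoRecordRegex.FindStringSubmatch(doi); match != nil {
		fmt.Println("record ID:", match[1]) // record ID: 2600782
		fmt.Println("endpoint path: /api/records/" + match[1])
	}
}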

File diff suppressed because it is too large


@@ -7,7 +7,6 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"mime"
"os"
"path"
@@ -78,7 +77,7 @@ var additionalMimeTypes = map[string]string{
// Load the example export formats into exportFormats for testing
func TestInternalLoadExampleFormats(t *testing.T) {
fetchFormatsOnce.Do(func() {})
buf, err := ioutil.ReadFile(filepath.FromSlash("test/about.json"))
buf, err := os.ReadFile(filepath.FromSlash("test/about.json"))
var about struct {
ExportFormats map[string][]string `json:"exportFormats,omitempty"`
ImportFormats map[string][]string `json:"importFormats,omitempty"`
@@ -96,7 +95,7 @@ func TestInternalParseExtensions(t *testing.T) {
wantErr error
}{
{"doc", []string{".doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil},
{" docx ,XLSX, pptx,svg,md", []string{".docx", ".xlsx", ".pptx", ".svg", ".md"}, nil},
{"docx,svg,Docx", []string{".docx", ".svg"}, nil},
{"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)},
} {
@@ -244,6 +243,15 @@ func (f *Fs) InternalTestShouldRetry(t *testing.T) {
quotaExceededRetry, quotaExceededError := f.shouldRetry(ctx, &generic403)
assert.False(t, quotaExceededRetry)
assert.Equal(t, quotaExceededError, expectedQuotaError)
sqEItem := googleapi.ErrorItem{
Reason: "storageQuotaExceeded",
}
generic403.Errors[0] = sqEItem
expectedStorageQuotaError := fserrors.FatalError(&generic403)
storageQuotaExceededRetry, storageQuotaExceededError := f.shouldRetry(ctx, &generic403)
assert.False(t, storageQuotaExceededRetry)
assert.Equal(t, storageQuotaExceededError, expectedStorageQuotaError)
}
func (f *Fs) InternalTestDocumentImport(t *testing.T) {
@@ -471,8 +479,8 @@ func (f *Fs) InternalTestUnTrash(t *testing.T) {
require.NoError(t, f.Purge(ctx, "trashDir"))
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyID
func (f *Fs) InternalTestCopyID(t *testing.T) {
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyOrMoveID
func (f *Fs) InternalTestCopyOrMoveID(t *testing.T) {
ctx := context.Background()
obj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
@@ -490,7 +498,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
}
t.Run("BadID", func(t *testing.T) {
err = f.copyID(ctx, "ID-NOT-FOUND", dir+"/")
err = f.copyOrMoveID(ctx, "moveid", "ID-NOT-FOUND", dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "couldn't find id")
})
@@ -498,22 +506,71 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
t.Run("Directory", func(t *testing.T) {
rootID, err := f.dirCache.RootID(ctx, false)
require.NoError(t, err)
err = f.copyID(ctx, rootID, dir+"/")
err = f.copyOrMoveID(ctx, "moveid", rootID, dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "can't copy directory")
assert.Contains(t, err.Error(), "can't moveid directory")
})
t.Run("WithoutDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/")
t.Run("MoveWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("WithDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/potato.txt")
t.Run("CopyWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("MoveWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
t.Run("CopyWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Query
func (f *Fs) InternalTestQuery(t *testing.T) {
ctx := context.Background()
var err error
t.Run("BadQuery", func(t *testing.T) {
_, err = f.query(ctx, "this is a bad query")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to execute query")
})
t.Run("NoMatch", func(t *testing.T) {
results, err := f.query(ctx, fmt.Sprintf("name='%s' and name!='%s'", existingSubDir, existingSubDir))
require.NoError(t, err)
assert.Len(t, results, 0)
})
t.Run("GoodQuery", func(t *testing.T) {
pathSegments := strings.Split(existingFile, "/")
var parent string
for _, item := range pathSegments {
// the file name contains ' characters which must be escaped
escapedItem := f.opt.Enc.FromStandardName(item)
escapedItem = strings.ReplaceAll(escapedItem, `\`, `\\`)
escapedItem = strings.ReplaceAll(escapedItem, `'`, `\'`)
results, err := f.query(ctx, fmt.Sprintf("%strashed=false and name='%s'", parent, escapedItem))
require.NoError(t, err)
require.True(t, len(results) > 0)
for _, result := range results {
assert.True(t, len(result.Id) > 0)
assert.Equal(t, result.Name, item)
}
parent = fmt.Sprintf("'%s' in parents and ", results[0].Id)
}
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
@@ -521,7 +578,7 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
@@ -602,7 +659,8 @@ func (f *Fs) InternalTest(t *testing.T) {
})
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("CopyOrMoveID", f.InternalTestCopyOrMoveID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}
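A minimal sketch of the query-name escaping exercised by InternalTestQuery above. Backslashes are escaped first so that the backslashes introduced when escaping single quotes are not themselves doubled. The helper name is hypothetical:
package main

import (
	"fmt"
	"strings"
)

// escapeQueryName mirrors the escaping in InternalTestQuery:
// escape backslashes first, then single quotes.
func escapeQueryName(name string) string {
	name = strings.ReplaceAll(name, `\`, `\\`)
	return strings.ReplaceAll(name, `'`, `\'`)
}

func main() {
	fmt.Println(escapeQueryName(`it's a file\name`)) // it\'s a file\\name
}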

backend/drive/metadata.go Normal file

@@ -0,0 +1,637 @@
package drive
import (
"context"
"encoding/json"
"fmt"
"maps"
"strconv"
"strings"
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"content-type": {
Help: "The MIME type of the file.",
Type: "string",
Example: "text/plain",
},
"mtime": {
Help: "Time of last modification with mS accuracy.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"btime": {
Help: "Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"copy-requires-writer-permission": {
Help: "Whether the options to copy, print, or download this file, should be disabled for readers and commenters.",
Type: "boolean",
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives.",
Type: "boolean",
Example: "false",
},
"viewed-by-me": {
Help: "Whether the file has been viewed by this user.",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"owner": {
Help: "The owner of the file. Usually an email address. Enable with --drive-metadata-owner.",
Type: "string",
Example: "user@example.com",
},
"permissions": {
Help: "Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions.",
Type: "JSON",
Example: "{}",
},
"folder-color-rgb": {
Help: "The color for a folder or a shortcut to a folder as an RGB hex string.",
Type: "string",
Example: "881133",
},
"description": {
Help: "A short description of the file.",
Type: "string",
Example: "Contract for signing",
},
"starred": {
Help: "Whether the user has starred the file.",
Type: "boolean",
Example: "false",
},
"labels": {
Help: "Labels attached to this file in a JSON dump of Googled drive format. Enable with --drive-metadata-labels.",
Type: "JSON",
Example: "[]",
},
}
// Extra fields we need to fetch to implement the system metadata above
var metadataFields = googleapi.Field(strings.Join([]string{
"copyRequiresWriterPermission",
"description",
"folderColorRgb",
"hasAugmentedPermissions",
"owners",
"permissionIds",
"permissions",
"properties",
"starred",
"viewedByMe",
"viewedByMeTime",
"writersCanShare",
}, ","))
// Fields we need to read from permissions
var permissionsFields = googleapi.Field(strings.Join([]string{
"*",
"permissionDetails/*",
}, ","))
// getPermission returns permissions for the fileID and permissionID passed in
func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, useCache bool) (perm *drive.Permission, inherited bool, err error) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
if useCache {
perm = f.permissions[permissionID]
if perm != nil {
return perm, false, nil
}
}
fs.Debugf(f, "Fetching permission %q", permissionID)
err = f.pacer.Call(func() (bool, error) {
perm, err = f.svc.Permissions.Get(fileID, permissionID).
Fields(permissionsFields).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, false, err
}
inherited = len(perm.PermissionDetails) > 0 && perm.PermissionDetails[0].Inherited
cleanPermission(perm)
// cache the permission
f.permissions[permissionID] = perm
return perm, inherited, err
}
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
errs := errcount.New()
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
SendNotificationEmail(false).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(f, "Failed to set permission %s for %q: %v", perm.Role, perm.EmailAddress, err)
errs.Add(err)
}
}
err = errs.Err("failed to set permission")
if err != nil {
err = fserrors.NoRetryError(err)
}
return err
}
// Clean attributes from permissions which we can't write
func cleanPermissionForWrite(perm *drive.Permission) {
perm.Deleted = false
perm.DisplayName = ""
perm.Id = ""
perm.Kind = ""
perm.PermissionDetails = nil
perm.TeamDrivePermissionDetails = nil
}
// Clean and cache the permission if not already cached
func (f *Fs) cleanAndCachePermission(perm *drive.Permission) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
cleanPermission(perm)
if _, found := f.permissions[perm.Id]; !found {
f.permissions[perm.Id] = perm
}
}
// Clean fields we don't need to keep from the permission
func cleanPermission(perm *drive.Permission) {
// DisplayName: Output only. The "pretty" name of the value of the
// permission. The following is a list of examples for each type of
// permission: * `user` - User's full name, as defined for their Google
// account, such as "Joe Smith." * `group` - Name of the Google Group,
// such as "The Company Administrators." * `domain` - String domain
// name, such as "thecompany.com." * `anyone` - No `displayName` is
// present.
perm.DisplayName = ""
// Kind: Output only. Identifies what kind of resource this is. Value:
// the fixed string "drive#permission".
perm.Kind = ""
// PermissionDetails: Output only. Details of whether the permissions on
// this shared drive item are inherited or directly on this item. This
// is an output-only field which is present only for shared drive items.
perm.PermissionDetails = nil
// PhotoLink: Output only. A link to the user's profile photo, if
// available.
perm.PhotoLink = ""
// TeamDrivePermissionDetails: Output only. Deprecated: Output only. Use
// `permissionDetails` instead.
perm.TeamDrivePermissionDetails = nil
}
// Fields we need to read from labels
var labelsFields = googleapi.Field(strings.Join([]string{
"*",
}, ","))
// getLabels returns labels for the fileID passed in
func (f *Fs) getLabels(ctx context.Context, fileID string) (labels []*drive.Label, err error) {
fs.Debugf(f, "Fetching labels for %q", fileID)
listLabels := f.svc.Files.ListLabels(fileID).
Fields(labelsFields).
Context(ctx)
for {
var info *drive.LabelList
err = f.pacer.Call(func() (bool, error) {
info, err = listLabels.Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
labels = append(labels, info.Labels...)
if info.NextPageToken == "" {
break
}
listLabels.PageToken(info.NextPageToken)
}
for _, label := range labels {
cleanLabel(label)
}
return labels, nil
}
// Set the labels on the info
func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.Label) (err error) {
if len(labels) == 0 {
return nil
}
req := drive.ModifyLabelsRequest{}
for _, label := range labels {
req.LabelModifications = append(req.LabelModifications, &drive.LabelModification{
FieldModifications: labelFieldsToFieldModifications(label.Fields),
LabelId: label.Id,
})
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.ModifyLabels(info.Id, &req).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set labels: %w", err)
}
return nil
}
// Convert label fields into something which can set the fields
func labelFieldsToFieldModifications(fields map[string]drive.LabelField) (out []*drive.LabelFieldModification) {
for id, field := range fields {
var emails []string
for _, user := range field.User {
emails = append(emails, user.EmailAddress)
}
out = append(out, &drive.LabelFieldModification{
// FieldId: The ID of the field to be modified.
FieldId: id,
// SetDateValues: Replaces the value of a dateString Field with these
// new values. The string must be in the RFC 3339 full-date format:
// YYYY-MM-DD.
SetDateValues: field.DateString,
// SetIntegerValues: Replaces the value of an `integer` field with these
// new values.
SetIntegerValues: field.Integer,
// SetSelectionValues: Replaces a `selection` field with these new
// values.
SetSelectionValues: field.Selection,
// SetTextValues: Sets the value of a `text` field.
SetTextValues: field.Text,
// SetUserValues: Replaces a `user` field with these new values. The
// values must be valid email addresses.
SetUserValues: emails,
})
}
return out
}
// Clean fields we don't need to keep from the label
func cleanLabel(label *drive.Label) {
// Kind: This is always drive#label
label.Kind = ""
for name, field := range label.Fields {
// Kind: This is always drive#labelField.
field.Kind = ""
// Note the fields are copies so we need to write them
// back to the map
label.Fields[name] = field
}
}
// Parse the metadata from drive item
//
// It should return nil if there is no Metadata
func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err error) {
metadata := make(fs.Metadata, 16)
// Dump user metadata first as it overrides system metadata
maps.Copy(metadata, info.Properties)
// System metadata
metadata["copy-requires-writer-permission"] = fmt.Sprint(info.CopyRequiresWriterPermission)
metadata["writers-can-share"] = fmt.Sprint(info.WritersCanShare)
metadata["viewed-by-me"] = fmt.Sprint(info.ViewedByMe)
metadata["content-type"] = info.MimeType
// Owners: Output only. The owner of this file. Only certain legacy
// files may have more than one owner. This field isn't populated for
// items in shared drives.
if o.fs.opt.MetadataOwner.IsSet(rwRead) && len(info.Owners) > 0 {
user := info.Owners[0]
if len(info.Owners) > 1 {
fs.Logf(o, "Ignoring more than 1 owner")
}
if user != nil {
id := user.EmailAddress
if id == "" {
id = user.DisplayName
}
metadata["owner"] = id
}
}
if o.fs.opt.MetadataPermissions.IsSet(rwRead) {
// We only write permissions out if they are not inherited.
//
// On My Drives permissions seem to be attached to every item
// so they will always be written out.
//
// On Shared Drives only non-inherited permissions will be
// written out.
// To read the inherited permissions flag will mean we need to
// read the permissions for each object and the cache will be
// useless. However shared drives don't return permissions
// only permissionIds so will need to fetch them for each
// object. We use HasAugmentedPermissions to see if there are
// special permissions before fetching them to save transactions.
// HasAugmentedPermissions: Output only. Whether there are permissions
// directly on this file. This field is only populated for items in
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
fs.Debugf(o, "Ignoring %d permissions and %d permissionIds as is shared drive with hasAugmentedPermissions false", len(info.Permissions), len(info.PermissionIds))
info.Permissions = nil
info.PermissionIds = nil
}
// PermissionIds: Output only. List of permission IDs for users with
// access to this file.
//
// Only process these if we have no Permissions
if len(info.PermissionIds) > 0 && len(info.Permissions) == 0 {
info.Permissions = make([]*drive.Permission, 0, len(info.PermissionIds))
g, gCtx := errgroup.WithContext(ctx)
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
if err != nil {
return fmt.Errorf("failed to read permission: %w", err)
}
// Don't write inherited permissions out
if inherited {
return nil
}
// Don't write owner role out - these are covered by the owner metadata
if perm.Role == "owner" {
return nil
}
mu.Lock()
info.Permissions = append(info.Permissions, perm)
mu.Unlock()
return nil
})
}
err = g.Wait()
if err != nil {
return err
}
} else {
// Clean the fetched permissions
for _, perm := range info.Permissions {
o.fs.cleanAndCachePermission(perm)
}
}
// Permissions: Output only. The full list of permissions for the file.
// This is only available if the requesting user can share the file. Not
// populated for items in shared drives.
if len(info.Permissions) > 0 {
buf, err := json.Marshal(info.Permissions)
if err != nil {
return fmt.Errorf("failed to marshal permissions: %w", err)
}
metadata["permissions"] = string(buf)
}
// Permission propagation
// https://developers.google.com/drive/api/guides/manage-sharing#permission-propagation
// Leads me to believe that in non shared drives, permissions
// are added to each item when you set permissions for a
// folder whereas in shared drives they are inherited and
// placed on the item directly.
}
if info.FolderColorRgb != "" {
metadata["folder-color-rgb"] = info.FolderColorRgb
}
if info.Description != "" {
metadata["description"] = info.Description
}
metadata["starred"] = fmt.Sprint(info.Starred)
metadata["btime"] = info.CreatedTime
metadata["mtime"] = info.ModifiedTime
if o.fs.opt.MetadataLabels.IsSet(rwRead) {
// FIXME would be really nice if we knew if files had labels
// before listing but we need to know all possible label IDs
// to get it in the listing.
labels, err := o.fs.getLabels(ctx, actualID(info.Id))
if err != nil {
return fmt.Errorf("failed to fetch labels: %w", err)
}
buf, err := json.Marshal(labels)
if err != nil {
return fmt.Errorf("failed to marshal labels: %w", err)
}
metadata["labels"] = string(buf)
}
o.metadata = &metadata
return nil
}
// Set the owner on the info
func (f *Fs) setOwner(ctx context.Context, info *drive.File, owner string) (err error) {
perm := drive.Permission{
Role: "owner",
EmailAddress: owner,
// Type: The type of the grantee. Valid values are: * `user` * `group` *
// `domain` * `anyone` When creating a permission, if `type` is `user`
// or `group`, you must provide an `emailAddress` for the user or group.
// When `type` is `domain`, you must provide a `domain`. There isn't
// extra information required for an `anyone` type.
Type: "user",
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, &perm).
SupportsAllDrives(true).
TransferOwnership(true).
// SendNotificationEmail(false). - required apparently!
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
}
return nil
}
// Call back to set metadata that can't be set on the upload/update
//
// The *drive.File passed in holds the current state of the drive.File
// and this should update it with any modifications.
type updateMetadataFn func(context.Context, *drive.File) error
// read the metadata from meta and write it into updateInfo
//
// update should be true if this is being used to create metadata for
// an update/PATCH call as the rules on what can be updated are
// slightly different there.
//
// It returns a callback which should be called to finish the updates
// after the data is uploaded.
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update, isFolder bool) (callback updateMetadataFn, err error) {
callbackFns := []updateMetadataFn{}
callback = func(ctx context.Context, info *drive.File) error {
for _, fn := range callbackFns {
err := fn(ctx, info)
if err != nil {
return err
}
}
return nil
}
// merge metadata into request and user metadata
for k, v := range meta {
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)
if err != nil {
return fmt.Errorf("can't parse metadata %q = %q: %w", k, v, err)
}
*out = b
return nil
}
switch k {
case "copy-requires-writer-permission":
if isFolder {
fs.Debugf(f, "Ignoring %s=%s as can't set on folders", k, v)
} else if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
return nil, err
}
case "writers-can-share":
if !f.isTeamDrive {
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
} else {
fs.Debugf(f, "Ignoring %s=%s as can't set on shared drives", k, v)
}
case "viewed-by-me":
// Can't write this
case "content-type":
updateInfo.MimeType = v
case "owner":
if !f.opt.MetadataOwner.IsSet(rwWrite) {
continue
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setOwner(ctx, info, v)
if err != nil && f.opt.MetadataOwner.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
continue
}
var perms []*drive.Permission
err := json.Unmarshal([]byte(v), &perms)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal permissions: %w", err)
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setPermissions(ctx, info, perms)
if err != nil && f.opt.MetadataPermissions.IsSet(rwFailOK) {
// We've already logged the permissions errors individually here
fs.Debugf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
continue
}
var labels []*drive.Label
err := json.Unmarshal([]byte(v), &labels)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal labels: %w", err)
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setLabels(ctx, info, labels)
if err != nil && f.opt.MetadataLabels.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v
case "description":
updateInfo.Description = v
case "starred":
if err := parseBool(&updateInfo.Starred); err != nil {
return nil, err
}
case "btime":
if update {
fs.Debugf(f, "Skipping btime metadata as can't update it on an existing file: %v", v)
} else {
updateInfo.CreatedTime = v
}
case "mtime":
updateInfo.ModifiedTime = v
default:
if updateInfo.Properties == nil {
updateInfo.Properties = make(map[string]string, 1)
}
updateInfo.Properties[k] = v
}
}
return callback, nil
}
// Fetch metadata and update updateInfo if --metadata is in use
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *drive.File, update bool) (callback updateMetadataFn, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
callback, err = f.updateMetadata(ctx, updateInfo, meta, update, false)
if err != nil {
return nil, fmt.Errorf("failed to update metadata from source object: %w", err)
}
return callback, nil
}
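The per-key boolean parsing in updateMetadata above, reduced to a standalone sketch (keys and values are illustrative; strconv.ParseBool accepts forms like true/false, 1/0, t/f):
package main

import (
	"fmt"
	"strconv"
)

func main() {
	meta := map[string]string{"starred": "true", "writers-can-share": "false"}
	for k, v := range meta {
		b, err := strconv.ParseBool(v)
		if err != nil {
			fmt.Printf("can't parse metadata %q = %q: %v\n", k, v, err)
			continue
		}
		fmt.Printf("%s -> %v\n", k, b)
	}
}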


@@ -177,10 +177,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
if start >= rx.ContentLength {
break
}
reqSize = rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
reqSize = min(rx.ContentLength-start, int64(rx.f.opt.ChunkSize))
chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
} else {
// If size unknown read into buffer
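The change above swaps a manual clamp for Go's builtin min (available since Go 1.21); the same idea in isolation, with illustrative values:
package main

import "fmt"

func main() {
	contentLength, start := int64(1000), int64(900)
	chunkSize := int64(256)
	// reqSize is the number of bytes remaining, capped at one chunk
	reqSize := min(contentLength-start, chunkSize)
	fmt.Println(reqSize) // 100
}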


@@ -8,130 +8,22 @@ package dropbox
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/async"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
)
const (
maxBatchSize = 1000 // max size the batch can be
defaultTimeoutSync = 500 * time.Millisecond // kick off the batch if nothing added for this long (sync)
defaultTimeoutAsync = 10 * time.Second // kick off the batch if nothing added for this long (async)
defaultBatchSizeAsync = 100 // default batch size if async
)
// batcher holds info about the current items waiting for upload
type batcher struct {
f *Fs // Fs this batch is part of
mode string // configured batch mode
size int // maximum size for batch
timeout time.Duration // idle timeout for batch
async bool // whether we are using async batching
in chan batcherRequest // incoming items to batch
closed chan struct{} // close to indicate batcher shut down
atexit atexit.FnHandle // atexit handle
shutOnce sync.Once // make sure we shutdown once only
wg sync.WaitGroup // wait for shutdown
}
// batcherRequest holds an incoming request with a place for a reply
type batcherRequest struct {
commitInfo *files.UploadSessionFinishArg
result chan<- batcherResponse
}
// Return true if batcherRequest is the quit request
func (br *batcherRequest) isQuit() bool {
return br.commitInfo == nil
}
// Send this to get the engine to quit
var quitRequest = batcherRequest{}
// batcherResponse holds a response to be delivered to clients waiting
// for a batch to complete.
type batcherResponse struct {
err error
entry *files.FileMetadata
}
// newBatcher creates a new batcher structure
func newBatcher(ctx context.Context, f *Fs, mode string, size int, timeout time.Duration) (*batcher, error) {
// fs.Debugf(f, "Creating batcher with mode %q, size %d, timeout %v", mode, size, timeout)
if size > maxBatchSize || size < 0 {
return nil, fmt.Errorf("dropbox: batch size must be < %d and >= 0 - it is currently %d", maxBatchSize, size)
}
async := false
switch mode {
case "sync":
if size <= 0 {
ci := fs.GetConfig(ctx)
size = ci.Transfers
}
if timeout <= 0 {
timeout = defaultTimeoutSync
}
case "async":
if size <= 0 {
size = defaultBatchSizeAsync
}
if timeout <= 0 {
timeout = defaultTimeoutAsync
}
async = true
case "off":
size = 0
default:
return nil, fmt.Errorf("dropbox: batch mode must be sync|async|off not %q", mode)
}
b := &batcher{
f: f,
mode: mode,
size: size,
timeout: timeout,
async: async,
in: make(chan batcherRequest, size),
closed: make(chan struct{}),
}
if b.Batching() {
b.atexit = atexit.Register(b.Shutdown)
b.wg.Add(1)
go b.commitLoop(context.Background())
}
return b, nil
}
// Batching returns true if batching is active
func (b *batcher) Batching() bool {
return b.size > 0
}
// finishBatch commits the batch, returning a batch status to poll or maybe complete
func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (complete *files.UploadSessionFinishBatchResult, err error) {
func (f *Fs) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (complete *files.UploadSessionFinishBatchResult, err error) {
var arg = &files.UploadSessionFinishBatchArg{
Entries: items,
}
err = b.f.pacer.Call(func() (bool, error) {
complete, err = b.f.srv.UploadSessionFinishBatchV2(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
err = f.pacer.Call(func() (bool, error) {
complete, err = f.srv.UploadSessionFinishBatchV2(arg)
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// after the first chunk is uploaded, we retry everything
// after the first chunk is uploaded, we retry everything except the excluded errors
return err != nil, err
})
if err != nil {
@@ -140,66 +32,10 @@ func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionF
return complete, nil
}
// finishBatchJobStatus waits for the batch to complete returning completed entries
func (b *batcher) finishBatchJobStatus(ctx context.Context, launchBatchStatus *files.UploadSessionFinishBatchLaunch) (complete *files.UploadSessionFinishBatchResult, err error) {
if launchBatchStatus.AsyncJobId == "" {
return nil, errors.New("wait for batch completion: empty job ID")
}
var batchStatus *files.UploadSessionFinishBatchJobStatus
sleepTime := 100 * time.Millisecond
const maxSleepTime = 1 * time.Second
startTime := time.Now()
try := 1
for {
remaining := time.Duration(b.f.opt.BatchCommitTimeout) - time.Since(startTime)
if remaining < 0 {
break
}
err = b.f.pacer.Call(func() (bool, error) {
batchStatus, err = b.f.srv.UploadSessionFinishBatchCheck(&async.PollArg{
AsyncJobId: launchBatchStatus.AsyncJobId,
})
return shouldRetry(ctx, err)
})
if err != nil {
fs.Debugf(b.f, "Wait for batch: sleeping for %v after error: %v: try %d remaining %v", sleepTime, err, try, remaining)
} else {
if batchStatus.Tag == "complete" {
fs.Debugf(b.f, "Upload batch completed in %v", time.Since(startTime))
return batchStatus.Complete, nil
}
fs.Debugf(b.f, "Wait for batch: sleeping for %v after status: %q: try %d remaining %v", sleepTime, batchStatus.Tag, try, remaining)
}
time.Sleep(sleepTime)
sleepTime *= 2
if sleepTime > maxSleepTime {
sleepTime = maxSleepTime
}
try++
}
if err == nil {
err = errors.New("batch didn't complete")
}
return nil, fmt.Errorf("wait for batch failed after %d tries in %v: %w", try, time.Since(startTime), err)
}
// commit a batch
func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []chan<- batcherResponse) (err error) {
// If commit fails then signal clients if sync
var signalled = b.async
defer func() {
if err != nil && signalled {
// Signal to clients that there was an error
for _, result := range results {
result <- batcherResponse{err: err}
}
}
}()
desc := fmt.Sprintf("%s batch length %d starting with: %s", b.mode, len(items), items[0].Commit.Path)
fs.Debugf(b.f, "Committing %s", desc)
// Called by the batcher to commit a batch
func (f *Fs) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []*files.FileMetadata, errors []error) (err error) {
// finalise the batch getting either a result or a job id to poll
complete, err := b.finishBatch(ctx, items)
complete, err := f.finishBatch(ctx, items)
if err != nil {
return err
}
@@ -210,19 +46,13 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
return fmt.Errorf("expecting %d items in batch but got %d", len(results), len(entries))
}
// Report results to clients
var (
errorTag = ""
errorCount = 0
)
// Format results for return
for i := range results {
item := entries[i]
resp := batcherResponse{}
if item.Tag == "success" {
resp.entry = item.Success
results[i] = item.Success
} else {
errorCount++
errorTag = item.Tag
errorTag := item.Tag
if item.Failure != nil {
errorTag = item.Failure.Tag
if item.Failure.LookupFailed != nil {
@@ -235,112 +65,9 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
errorTag += "/" + item.Failure.PropertiesError.Tag
}
}
resp.err = fmt.Errorf("batch upload failed: %s", errorTag)
}
if !b.async {
results[i] <- resp
errors[i] = fmt.Errorf("upload failed: %s", errorTag)
}
}
// Show signalled so no need to report error to clients from now on
signalled = true
// Report an error if any failed in the batch
if errorTag != "" {
return fmt.Errorf("batch had %d errors: last error: %s", errorCount, errorTag)
}
fs.Debugf(b.f, "Committed %s", desc)
return nil
}
// commitLoop runs the commit engine in the background
func (b *batcher) commitLoop(ctx context.Context) {
var (
items []*files.UploadSessionFinishArg // current batch of uncommitted files
results []chan<- batcherResponse // current batch of clients awaiting results
idleTimer = time.NewTimer(b.timeout)
commit = func() {
err := b.commitBatch(ctx, items, results)
if err != nil {
fs.Errorf(b.f, "%s batch commit: failed to commit batch length %d: %v", b.mode, len(items), err)
}
items, results = nil, nil
}
)
defer b.wg.Done()
defer idleTimer.Stop()
idleTimer.Stop()
outer:
for {
select {
case req := <-b.in:
if req.isQuit() {
break outer
}
items = append(items, req.commitInfo)
results = append(results, req.result)
idleTimer.Stop()
if len(items) >= b.size {
commit()
} else {
idleTimer.Reset(b.timeout)
}
case <-idleTimer.C:
if len(items) > 0 {
fs.Debugf(b.f, "Batch idle for %v so committing", b.timeout)
commit()
}
}
}
// commit any remaining items
if len(items) > 0 {
commit()
}
}
// Shutdown finishes any pending batches then shuts everything down
//
// Can be called from atexit handler
func (b *batcher) Shutdown() {
if !b.Batching() {
return
}
b.shutOnce.Do(func() {
atexit.Unregister(b.atexit)
fs.Infof(b.f, "Committing uploads - please wait...")
// show that batcher is shutting down
close(b.closed)
// quit the commitLoop by sending a quitRequest message
//
// Note that we don't close b.in because that will
// cause write to closed channel in Commit when we are
// exiting due to a signal.
b.in <- quitRequest
b.wg.Wait()
})
}
// Commit commits the file using a batch call, first adding it to the
// batch and then waiting for the batch to complete in a synchronous
// way if async is not set.
func (b *batcher) Commit(ctx context.Context, commitInfo *files.UploadSessionFinishArg) (entry *files.FileMetadata, err error) {
select {
case <-b.closed:
return nil, fserrors.FatalError(errors.New("batcher is shutting down"))
default:
}
fs.Debugf(b.f, "Adding %q to batch", commitInfo.Commit.Path)
resp := make(chan batcherResponse, 1)
b.in <- batcherRequest{
commitInfo: commitInfo,
result: resp,
}
// If running async then don't wait for the result
if b.async {
return nil, nil
}
result := <-resp
return result.entry, result.err
}
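For reference, the capped exponential backoff that the removed finishBatchJobStatus used while polling, as a standalone sketch (the 100ms start and 1s cap match the removed code; the fixed try count is illustrative):
package main

import (
	"fmt"
	"time"
)

func main() {
	sleepTime := 100 * time.Millisecond
	const maxSleepTime = 1 * time.Second
	for try := 1; try <= 6; try++ {
		fmt.Printf("try %d: would sleep %v\n", try, sleepTime)
		sleepTime = min(2*sleepTime, maxSleepTime) // double, capped at 1s
	}
}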


@@ -55,10 +55,7 @@ func (d *digest) Write(p []byte) (n int, err error) {
n = len(p)
for len(p) > 0 {
d.writtenMore = true
toWrite := bytesPerBlock - d.n
if toWrite > len(p) {
toWrite = len(p)
}
toWrite := min(bytesPerBlock-d.n, len(p))
_, err = d.blockHash.Write(p[:toWrite])
if err != nil {
panic(hashReturnedError)

Some files were not shown because too many files have changed in this diff