mirror of https://github.com/rclone/rclone.git synced 2025-12-24 04:04:37 +00:00

Compare commits


226 Commits

Author SHA1 Message Date
Nick Craig-Wood
23e9066b4f build: make rclone build with wasip1/wasm as well as js/wasm
Fixes #7831
2025-08-22 00:07:58 +01:00
Nick Craig-Wood
693ca3215f Revert "build: disable wasm/js build due to go bug"
This reverts commit 0470450583 as
the bug is now fixed in go1.25
2025-08-22 00:07:58 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
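
As a minimal sketch of such a post-upload check (the names here are illustrative, not the actual backend code), the validation amounts to comparing the returned name against the requested leaf and failing early:

```go
package main

import "fmt"

// checkUploadName aborts the upload if the server renamed the file,
// e.g. by appending a numbered suffix. Hypothetical helper for illustration.
func checkUploadName(leaf, returnedName string) error {
	if returnedName != leaf {
		// Returning an error here triggers the deferred cleanup
		// which deletes the partially created file.
		return fmt.Errorf("upload: server returned name %q, want %q", returnedName, leaf)
	}
	return nil
}

func main() {
	fmt.Println(checkUploadName("report.txt", "report(1).txt"))
}
```
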
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which escapes everything.

Fixes #8754
2025-08-18 19:37:30 +01:00
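
The difference is easy to demonstrate in isolation: url.PathEscape leaves & alone, while url.QueryEscape escapes it.

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	name := "foo & bar.txt"
	fmt.Println(url.PathEscape(name))  // foo%20&%20bar.txt - the & passes through
	fmt.Println(url.QueryEscape(name)) // foo+%26+bar.txt - the & becomes %26
}
```
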
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates separate connections for each goroutine, allowing true
parallelism by giving each goroutine its own SMB connection.

Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
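
In outline the pattern looks like this; a sketch using a hypothetical pool type, not the real SMB connection pool:

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for an SMB connection; pool stands in for the backend's
// global connection pool. All names here are illustrative.
type conn struct{ id int }

type pool struct {
	mu    sync.Mutex
	next  int
	conns []*conn
}

func (p *pool) get() *conn {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n := len(p.conns); n > 0 {
		c := p.conns[n-1]
		p.conns = p.conns[:n-1]
		return c
	}
	p.next++
	return &conn{id: p.next}
}

func (p *pool) put(c *conn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.conns = append(p.conns, c)
}

func main() {
	p := &pool{}
	var wg sync.WaitGroup
	for chunk := 0; chunk < 4; chunk++ {
		wg.Add(1)
		go func(chunk int) {
			defer wg.Done()
			c := p.get() // one connection per goroutine, no contention
			defer p.put(c)
			fmt.Printf("chunk %d written on connection %d\n", chunk, c.id)
		}(chunk)
	}
	wg.Wait()
}
```
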
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
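
The usual shape of such a fix is to move package-level state into a per-run struct so concurrent runs cannot trample each other; an illustrative sketch, not the actual bisync code:

```go
package main

import "fmt"

// Before: package-level state, racy across concurrent runs.
// var renames int

// After: state lives in a per-run value. Names are illustrative.
type run struct {
	renames int
}

func (r *run) sync(path string) {
	r.renames++ // safe: each run mutates only its own state
	fmt.Printf("synced %s (renames so far: %d)\n", path, r.renames)
}

func main() {
	a, b := &run{}, &run{}
	a.sync("path1")
	b.sync("path2") // independent of a
}
```
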
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
2025-08-16 17:11:54 +01:00
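
A sketch of the kind of filter involved (the helper is hypothetical, not the actual patch):

```go
package main

import (
	"fmt"
	"strings"
)

// stripAWSChunked removes "aws-chunked" from a Content-Type value which
// may hold a comma-separated list. Hypothetical helper for illustration.
func stripAWSChunked(contentType string) string {
	var kept []string
	for _, part := range strings.Split(contentType, ",") {
		if s := strings.TrimSpace(part); s != "" && !strings.EqualFold(s, "aws-chunked") {
			kept = append(kept, s)
		}
	}
	return strings.Join(kept, ",")
}

func main() {
	fmt.Println(stripAWSChunked("aws-chunked,text/plain")) // text/plain
	fmt.Println(stripAWSChunked("aws-chunked"))            // (empty)
}
```
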
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
2025-08-16 16:44:35 +01:00
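
For reference, reading a line with github.com/peterh/liner looks roughly like this:

```go
package main

import (
	"fmt"

	"github.com/peterh/liner"
)

func main() {
	l := liner.NewLiner()
	defer l.Close() // restores the terminal state

	// Unlike the terminal's canonical line mode (4095-byte limit),
	// liner reads arbitrarily long pasted input and allows editing.
	token, err := l.Prompt("Paste token> ")
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Println("got", len(token), "bytes")
}
```
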
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc the average loop
would be cancelled at the end of the rc request, leading to the speed not
being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
2025-08-16 16:33:38 +01:00
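
The general pattern is to derive long-lived background work from context.Background() rather than from the request-scoped context; a sketch:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// averageLoop stands in for the accounting loop. Sketch only.
func averageLoop(ctx context.Context) {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return // with a request context this fires far too early
		case <-ticker.C:
			// update speed / ETA averages here
		}
	}
}

func main() {
	// Wrong: go averageLoop(rcRequestCtx) - cancelled when the rc call returns.
	// Right: tie the loop to the background context and stop it explicitly.
	ctx, stop := context.WithCancel(context.Background())
	go averageLoop(ctx)
	time.Sleep(300 * time.Millisecond) // the job runs...
	stop()
	fmt.Println("accounting loop stopped with the job, not the request")
}
```
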
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't sync symlinks in dest without --links - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
2025-08-14 16:14:59 +01:00
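
semaphore.Weighted from golang.org/x/sync lends itself to this kind of byte-accurate reservation; a sketch of the idea, not the pool code itself:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/semaphore"
)

func main() {
	const maxBufferMemory = 64 << 20 // e.g. --max-buffer-memory 64M
	mem := semaphore.NewWeighted(maxBufferMemory)

	ctx := context.Background()
	chunk := int64(16 << 20)

	// Reserve the memory for a whole chunk up front; blocks until available.
	if err := mem.Acquire(ctx, chunk); err != nil {
		panic(err)
	}
	fmt.Println("chunk buffer reserved")

	// ... fill and upload the chunk ...

	mem.Release(chunk) // return the reservation once the buffer is freed
	fmt.Println("chunk buffer released")
}
```
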
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involved recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
2025-08-13 11:58:35 -04:00
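
Formatting in a fixed zone without touching time.Local looks like this (illustrative only; the real setting is the bisync.LogTZ described above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Instead of mutating time.Local (racy and process-global), carry a
	// *time.Location and format through it. logTZ mirrors the idea of
	// the new bisync.LogTZ setting; this code is illustrative.
	logTZ := time.UTC
	now := time.Now().In(logTZ)
	fmt.Println(now.Format("2006-01-02 15:04:05 MST"))
}
```
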
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
Additional to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
2025-08-09 18:12:17 -04:00
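
A standalone demonstration of why CSV parsing handles this, using only the standard library:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	// As received after shell unescaping of the \" quotes:
	flag := `local,"TestGoogleCloudStorage,directory_markers:"`

	r := csv.NewReader(strings.NewReader(flag))
	remotes, err := r.Read()
	if err != nil {
		panic(err)
	}
	for _, remote := range remotes {
		fmt.Printf("%q\n", remote)
	}
	// Output:
	// "local"
	// "TestGoogleCloudStorage,directory_markers:"
}
```
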
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This meant a long wait sometimes for memory and too many connections
open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than chunk size, then deadlock
because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools
which is inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
Nick Craig-Wood
d4e68bf66b docs: add MEGA S4 as a gold sponsor
This also tidies the menu cards.
2025-08-01 12:40:29 +01:00
Nick Craig-Wood
743d160fdd about: fix potential overflow of about in various backends
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.

This patch causes it to clip to the max int64 value instead.
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
dc95f36bc1 box: fix about: cannot unmarshal number 1.0e+18 into Go struct field
Before this change rclone about was failing with

    cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64

This is because Box increased Enterprise accounts' user.space_amount from 30PB
to 1e+18 (888.178PB), returning it as a floating point number, not an integer.

This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
2025-07-31 11:38:51 +01:00
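
Clipping a float64 to int64 needs care because float64 cannot represent math.MaxInt64 exactly; a sketch of the safe comparison:

```go
package main

import (
	"fmt"
	"math"
)

// clipToInt64 converts a float64 to int64, saturating at the maximum.
// Note float64(math.MaxInt64) rounds up to 2^63, so compare with >=.
func clipToInt64(f float64) int64 {
	if f >= math.MaxInt64 {
		return math.MaxInt64
	}
	return int64(f)
}

func main() {
	fmt.Println(clipToInt64(1.0e18)) // 1000000000000000000
	fmt.Println(clipToInt64(1.0e19)) // 9223372036854775807 (clipped)
}
```
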
Nick Craig-Wood
d3e3af377a oauthutil: fix nil pointer crash when started with expired token 2025-07-31 11:38:51 +01:00
n4n5
db4812fbfa rc: listremotes should send an empty array instead of nil 2025-07-25 15:37:25 +01:00
n4n5
ff9cbab5fa config: add error if RCLONE_CONFIG_PASS was supplied but didn't decrypt config 2025-07-25 11:24:18 +01:00
n4n5
30d8ab5f2f rc: add config/unlock to unlock the config file 2025-07-25 11:19:07 +01:00
Anagh Kumar Baranwal
d71a4195d6 ftp: allow insecure TLS ciphers - fixes #8701
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-07-25 10:30:18 +01:00
zjx20
64ed9b175f s3: set useAlreadyExists to false for Alibaba OSS 2025-07-24 23:22:16 +01:00
Nick Craig-Wood
2b10340e4e docs: update sponsors page 2025-07-24 15:19:15 +01:00
Nick Craig-Wood
3c596f8d11 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
6a9c221841 fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
c49b24ff90 tests: cloudinary: remove test ignore after merging fix from #8707 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
edbbfd1e86 Add Antonin Goude to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
0e0af7499c Add Yu Xin to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
eb4fe3ef4c Add houance to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
70eb0f21d9 Add Florent Vennetier to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
12378bae27 Add n4n5 to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
3c08c4df3a Add Albin Parou to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
897509ae10 Add liubingrun to contributors 2025-07-23 13:12:55 +01:00
nielash
0eb7ee2e16 sync: fix testLoggerVsLsf when backend only reads modtime
There are some backends (like PikPak) that advertise a precision of
fs.ModTimeNotSupported but do actually return a modtime when asked. In the case
of PikPak, it is because the modtime can be read but not written, and is not
considered reliable enough to use for syncing.

Before this change, testLoggerVsLsf got confused in this scenario (expected a
blank modtime but got non-blank). Adding to the confusion, it only reaches this
code if the backend happens to support md5 hashes, and the fsrc and fdst have
the same precision.

This change fixes the issue by setting the modtime string on both sides to
"none" in this scenario. Note that we can't use "" (blank) because
(operations.ListFormat).AddModTime would replace that with "2006-01-02 15:04:05".
2025-07-23 12:49:52 +01:00
nielash
c1ebfb7e04 sync: fix testLoggerVsLsf checking wrong fs
Before this change, two tests (TestServerSideCopyOverSelf and
TestServerSideMoveOverSelf) were checking the wrong Fs in the call to
testLoggerVsLsf. This fixes it by making sure we are testing the same two Fs's
we synced.
2025-07-23 12:49:52 +01:00
Nick Craig-Wood
3d62058693 docs: fix make opengraph tags absolute as not all sites understand relative 2025-07-22 18:00:33 +01:00
albertony
122890799f docs: update contributing guide regarding markdown documentation 2025-07-21 20:23:16 +02:00
albertony
65078d5846 build: add markdown linting to workflow 2025-07-21 20:23:16 +02:00
albertony
92f304902d build: add markdownlint configuration 2025-07-21 20:23:16 +02:00
albertony
45477a6c7d docs: minor format cleanup install.md 2025-07-21 20:23:16 +02:00
albertony
79b549b5a4 docs: fix markdownlint issue md049/emphasis-style 2025-07-21 20:23:16 +02:00
albertony
318880b4ad docs: fix markdownlint issue md036/no-emphasis-as-heading 2025-07-21 20:23:16 +02:00
albertony
75521dcf6e docs: fix markdownlint issue md033/no-inline-html 2025-07-21 20:23:16 +02:00
albertony
8bf20dd545 docs: fix markdownlint issue md025/single-title 2025-07-21 20:23:16 +02:00
albertony
744bce1246 docs: fix markdownlint issue md041/first-line-heading 2025-07-21 20:23:16 +02:00
albertony
c817fc5c57 docs: fix markdownlint issue md001/heading-increment 2025-07-21 20:23:16 +02:00
albertony
0bb4d0a985 docs: fix markdownlint issue md003/heading-style 2025-07-21 20:23:16 +02:00
albertony
a8605abd34 docs: fix markdownlint issue md034/no-bare-urls 2025-07-21 20:23:16 +02:00
albertony
953fb4490b docs: fix markdownlint issue md010/no-hard-tabs 2025-07-21 20:23:16 +02:00
albertony
b17c3d18af docs: fix markdownlint issue md013/line-length 2025-07-21 20:23:16 +02:00
albertony
b45580fa19 docs: fix markdownlint issue md038/no-space-in-code 2025-07-21 20:23:16 +02:00
albertony
1c26f40078 docs: fix markdownlint issue md040/fenced-code-language 2025-07-21 20:23:16 +02:00
albertony
667ad093eb docs: fix markdownlint issue md046/code-block-style 2025-07-21 20:23:16 +02:00
albertony
2c369aedf5 docs: fix markdownlint issue md037/no-space-in-emphasis 2025-07-21 20:23:16 +02:00
albertony
7a0d5ab0b4 docs: fix markdownlint issue md059/descriptive-link-text 2025-07-21 20:23:16 +02:00
albertony
75582b804b docs: fix markdownlint issues md007/ul-indent md004/ul-style 2025-07-21 20:23:16 +02:00
albertony
73452551c6 docs: fix markdownlint issue md012/no-multiple-blanks 2025-07-21 20:23:16 +02:00
albertony
cb3cf5068b docs: fix markdownlint issue md058/blanks-around-tables 2025-07-21 20:23:16 +02:00
albertony
428f518771 docs: fix markdownlint issue md022/blanks-around-headings 2025-07-21 20:23:16 +02:00
albertony
0411a41e11 docs: fix markdownlint issue md031/blanks-around-fences 2025-07-21 20:23:16 +02:00
albertony
07b37bcd12 docs: fix markdownlint issue md032/blanks-around-lists 2025-07-21 20:23:16 +02:00
albertony
0506826ff5 docs: fix markdownlint issue md009/no-trailing-spaces 2025-07-21 20:23:16 +02:00
albertony
4fcd36a5ab docs: fix markdownlint issue md014/commands-show-output 2025-07-21 20:23:16 +02:00
albertony
b2f43f39ba docs: fix markdownlint issues md007/ul-indent md004/ul-style (bin/update-authors.py) 2025-07-21 20:23:16 +02:00
albertony
074d73d12b docs: fix markdownlint issues md007/ul-indent md004/ul-style (authors.md) 2025-07-21 20:23:16 +02:00
Nick Craig-Wood
6457bcf51e docs: add opengraph tags for website social media previews 2025-07-21 17:48:23 +01:00
Nick Craig-Wood
8d12519f3d mount: note that bucket based remotes can use directory markers 2025-07-21 17:48:23 +01:00
wiserain
8a7c401366 pikpak: add docs for methods to clarify name collision handling and restrictions 2025-07-21 17:43:15 +01:00
wiserain
0aae8f346f pikpak: enhance Copy method to handle name collisions and improve error management 2025-07-21 17:43:15 +01:00
wiserain
e991328967 pikpak: enhance Move for better handling of error and name collision 2025-07-21 17:43:15 +01:00
Yu Xin
614d02a673 accounting: fix incorrect stats with --transfers=1 - fixes #8670 2025-07-21 16:54:19 +01:00
houance
018ebdded5 rc: fix operations/check ignoring oneWay parameter
Change the parsed param from "oneway" to "oneWay" as a bool value, as the docs
say "oneWay - check one way only, source files must exist on remote"
2025-07-21 16:41:08 +01:00
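
Calling this over the rc HTTP API might look like the following (a sketch assuming an rclone rcd listening on the default localhost:5572; the remotes are placeholders):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Parameter names follow the rc docs quoted in the commit; the
	// remotes here are placeholders.
	body, _ := json.Marshal(map[string]any{
		"srcFs":  "/tmp/src",
		"dstFs":  "remote:dst",
		"oneWay": true, // camelCase, as documented
	})
	resp, err := http.Post("http://localhost:5572/operations/check",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```
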
Florent Vennetier
fc08983d71 s3: add OVHcloud Object Storage provider
Co-Authored-By: Antonin Goude <antonin.goude@ovhcloud.com>
2025-07-21 16:34:53 +01:00
n4n5
7b61084891 docs: rc: fix description of how to read local config 2025-07-21 15:42:37 +01:00
albertony
d1ac6c2fe1 build: limit check for edits of autogenerated files to only commits in a pull request 2025-07-17 16:20:38 +02:00
albertony
da9c99272c build: extend check for edits of autogenerated files to all commits in a pull request 2025-07-17 16:20:38 +02:00
Sudipto Baral
9c7594d78f smb: refresh Kerberos credentials when ccache file changes
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.

Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
2025-07-17 14:34:44 +01:00
Albin Parou
70226cc653 s3: fix multipart upload and server side copy when using bucket policy SSE-C
When uploading or moving data within an s3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadCopyPart` and
`CompleteMultipartUpload`. But currently rclone doesn't forward those
headers to `CompleteMultipartUpload`.

This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
2025-07-17 14:29:31 +01:00
liubingrun
c20e4bd99c backend/s3: Fix memory leak by cloning strings #8683
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.

Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
2025-07-17 12:31:52 +01:00
Nick Craig-Wood
ccfe153e9b purge: exit with a fatal error if filters are set on rclone purge
Fixes #8491
2025-07-17 11:17:08 +01:00
Nick Craig-Wood
c9730bcaaf docs: Add Backblaze as a Platinum sponsor 2025-07-17 11:17:08 +01:00
Nick Craig-Wood
03dd7486c1 Add Sam Pegg to contributors 2025-07-17 11:17:08 +01:00
raider13209
6249009fdf googlephotos: added warning for Google Photos compatibility - fixes #8672 2025-07-17 10:48:12 +01:00
Nick Craig-Wood
8e2d76459f test: remove flaky TestChunkerChunk50bYandex test 2025-07-16 16:39:57 +01:00
albertony
5e539c6a72 docs: Consolidate entries for Josh Soref in contributors 2025-07-13 14:05:45 +02:00
albertony
8866112400 docs: remove dead link to example of writing a plugin 2025-07-13 13:51:38 +02:00
Nick Craig-Wood
bfdd5e2c22 filescom: document that hashes need to be enabled - fixes #8674 2025-07-11 14:15:59 +01:00
Nick Craig-Wood
f3f16cd2b9 Add Sudipto Baral to contributors 2025-07-11 14:15:59 +01:00
albertony
d84ea2ec52 docs: fix incorrect json syntax in sample output 2025-07-11 13:49:27 +02:00
albertony
b259241c07 docs: ignore author email piyushgarg80
This should merge the two duplicates:
- piyushgarg <piyushgarg80@gmail.com>
- Piyush <piyushgarg80>
2025-07-11 13:49:06 +02:00
albertony
a8ab0730a7 docs: fix header level for --dump option section 2025-07-10 12:36:10 +02:00
albertony
cef207cf94 docs: use stringArray as parameter type 2025-07-10 12:36:10 +02:00
albertony
e728ea32d1 docs: use consistent markdown heading syntax 2025-07-10 12:36:10 +02:00
Nick Craig-Wood
ccdee0420f imagekit: remove server side Copy method as it was downloading and uploading
The Copy method was downloading the file and uploading it again rather
than server side copying it.

It looks from the docs that the upload process can read a URL so this
might be possible, but the removed code is incorrect.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
8a51e11d23 imagekit: don't low level retry uploads
Low level retrying of uploads can lead to partial or empty files being
uploaded as the io.Reader has been read in the first attempt.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
9083f1ff15 imagekit: return correct error when attempting to upload zero length files
Imagekit doesn't support empty files so return correct error for
integration tests to process properly.
2025-07-10 11:29:27 +01:00
Sudipto Baral
2964b1a169 smb: add --smb-kerberos-ccache option to set kerberos ccache per smb backend 2025-07-10 10:17:42 +01:00
Nick Craig-Wood
b6767820de test: fix smb kerberos integration tests
Thanks @sudiptob2 for the tip!
2025-07-09 18:05:29 +01:00
Nick Craig-Wood
821e7fce45 Changelog updates from Version v1.70.3 2025-07-09 16:26:56 +01:00
albertony
b7c6268d3e config: make parsing of duration options consistent
All user visible Durations should be fs.Duration rather than time.Duration. Suffix is then optional and defaults to s. Additional suffixes d, w, M and y are supported, in addition to ms, s, m and h - which are the only ones supported by time.Duration. Absolute times can also be specified, and will be interpreted as duration relative to now.
2025-07-08 12:08:14 +02:00
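
As an illustration of the extended suffixes, here is a hypothetical normalizer handling just d and w (rclone's fs.Duration does the real parsing, including M and y):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseExtended handles the d and w suffixes that time.ParseDuration
// lacks, then falls back to the standard parser. A bare number gets
// the default "s" suffix. Hypothetical sketch of the behaviour.
func parseExtended(s string) (time.Duration, error) {
	if n, err := strconv.ParseFloat(s, 64); err == nil {
		return time.Duration(n * float64(time.Second)), nil
	}
	switch {
	case strings.HasSuffix(s, "d"):
		n, err := strconv.ParseFloat(strings.TrimSuffix(s, "d"), 64)
		return time.Duration(n * 24 * float64(time.Hour)), err
	case strings.HasSuffix(s, "w"):
		n, err := strconv.ParseFloat(strings.TrimSuffix(s, "w"), 64)
		return time.Duration(n * 7 * 24 * float64(time.Hour)), err
	}
	return time.ParseDuration(s)
}

func main() {
	for _, s := range []string{"30", "1.5m", "2d", "1w"} {
		d, _ := parseExtended(s)
		fmt.Printf("%-4s -> %v\n", s, d)
	}
}
```
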
albertony
521d6b88d4 docs: cleanup usage 2025-07-08 11:28:28 +02:00
albertony
cf767b0856 docs: break long lines 2025-07-08 11:28:28 +02:00
albertony
25f7809822 docs: add option value type to header where missing 2025-07-08 11:28:28 +02:00
albertony
74c0b1ea3b docs: mention that identifiers in option values are case insensitive 2025-07-08 11:28:28 +02:00
albertony
f4dcb1e9cf docs: rewrite dump option examples 2025-07-08 11:28:28 +02:00
albertony
90f1d023ff docs: use markdown inline code format for dump option headers that are real examples 2025-07-08 11:28:28 +02:00
albertony
e9c5f2d4e8 docs: change spelling from server side to server-side 2025-07-08 11:28:28 +02:00
albertony
1249e9b5ac docs: cleanup header casing 2025-07-08 11:28:28 +02:00
albertony
d47bc5f6c4 docs: rename OSX to macOS 2025-07-08 11:28:28 +02:00
albertony
efb1794135 docs: fix list and code block issue 2025-07-08 11:28:28 +02:00
albertony
71b98a03a9 docs: consistent markdown list format 2025-07-08 11:28:28 +02:00
albertony
8e625c6593 docs: split section with general description of options with that documenting actual main options 2025-07-08 11:28:28 +02:00
albertony
6b2cd7c631 docs: improve description of option types 2025-07-08 11:28:28 +02:00
albertony
aa4aead63c docs: use space instead of equal sign to separate option and value in headers 2025-07-08 11:28:28 +02:00
albertony
c491d12cd0 docs: use comma to separate short and long option format in headers 2025-07-08 11:28:28 +02:00
albertony
9e4d703a56 docs: remove use of uncommon parameter types 2025-07-08 11:28:28 +02:00
albertony
fc0c0a7771 docs: remove use of parameter type FILE 2025-07-08 11:28:28 +02:00
albertony
d5cc0d83b0 docs: remove use of parameter type DIR 2025-07-08 11:28:28 +02:00
albertony
52762dc866 docs: remove use of parameter type CONFIG_FILE 2025-07-08 11:28:28 +02:00
albertony
3c092cfc17 docs: change use of parameter type N and NUMBER to int consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
7f3f1af541 docs: change use of parameter type TIME to Duration consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
f885c481f0 docs: change use of parameter type BANDWIDTH_SPEC to BwTimetable consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
865d4b2bda docs: change use of parameter type SIZE to SizeSuffix consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
3cb1e65eb6 docs: cleanup markdown header format 2025-07-08 11:28:28 +02:00
albertony
f667346718 docs: explain separated list parameters 2025-07-08 11:28:28 +02:00
Nick Craig-Wood
c6e1f59415 azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-08 07:50:51 +01:00
Nick Craig-Wood
f353c92852 test: remove and ignore failing integration tests
- remove non docker based Swift tests as they are too slow
- ignore TestChunkerChunk50b test which always fails
2025-07-08 07:48:54 +01:00
albertony
1e88c6a18b docs: explain the json log format in more detail 2025-07-07 10:21:13 +02:00
albertony
7242aed1c3 check: fix difference report (was reporting error counts) 2025-07-07 08:16:55 +01:00
albertony
81e63785fe serve sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
c7937f53d4 serve sftp: extract function refactoring for handling hashsum commands 2025-07-07 09:11:29 +02:00
albertony
58fa1c975f sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
da49fc1b6d local: configurable supported hashes 2025-07-07 09:11:29 +02:00
albertony
df9c921dd5 hash: add support for BLAKE3, XXH3, XXH128 2025-07-07 09:11:29 +02:00
Nick Craig-Wood
d9c227eff6 vfs: make integration TestDirEntryModTimeInvalidation test more reliable
Before this change it was not taking the Precision of the remote into account.
2025-07-06 14:35:16 +01:00
Nick Craig-Wood
524c285d88 smb: skip non integration tests when doing integration tests 2025-07-06 13:39:54 +01:00
Nick Craig-Wood
4107246335 seafile: fix integration test errors by adding dot to encoding
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.

This adds EncodeDot to the encoding which encodes "." and ".." names.
2025-07-05 21:27:10 +01:00
Nick Craig-Wood
87a65ec6a5 linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-05 09:26:43 +01:00
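
Resolving a redirect with a cheap HEAD before sending a body can be done like this; a generic net/http sketch, not the linkbox backend code:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Don't follow redirects - we just want the Location header.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	// Placeholder URL for illustration.
	resp, err := client.Head("https://example.com/upload")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	if loc := resp.Header.Get("Location"); resp.StatusCode == http.StatusFound && loc != "" {
		fmt.Println("PUT the body to", loc, "instead")
	} else {
		fmt.Println("no redirect - PUT to the original URL")
	}
}
```
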
Nick Craig-Wood
c6d0b61982 build: remove integration tests which are too slow
This removes

- TestCompressSwift: - never finishes - too slow - we have TestCompressS3 instead
- TestCryptSwift: - never finishes - too slow - we have TestCryptS3 instead
- TestChunkerChunk50bBox: - often times out - covered by other tests
2025-07-05 09:24:00 +01:00
Nick Craig-Wood
88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-04 14:52:28 +01:00
wiserain
f904378c4d pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-04 15:27:29 +09:00
wiserain
24eb8dcde0 pikpak: rewrite upload to bypass AWS S3 manager - fixes #8629
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.

* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
  including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
  validation).
* Adjusted default `upload_concurrency` from 5 to 4.
2025-07-04 11:25:12 +09:00
Nick Craig-Wood
a97425d9cb test: fix TestSMBKerberos password expiring errors
ERROR(runtime): uncaught exception - kinit for rclone@RCLONE.LOCAL failed (Password has expired)
2025-07-03 19:31:45 +01:00
Nick Craig-Wood
c51878f9a9 Add Vikas Bhansali to contributors 2025-07-03 19:31:45 +01:00
Nick Craig-Wood
92f0a73ac6 Add Ross Smith II to contributors 2025-07-03 19:31:45 +01:00
Vikas Bhansali
163c149f3f azureblob,azurefiles: add support for client assertion based authentication 2025-07-03 09:57:07 +01:00
WeidiDeng
224ca0ae8e webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime was accidentally changed to the wrong object:

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-03 09:42:15 +01:00
Ross Smith II
5bf6cd1f4f build: set default shell to bash in build.yml
Per https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrunshell
2025-07-02 20:06:57 +01:00
Nick Craig-Wood
555739eec5 docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:37:28 +01:00
Nick Craig-Wood
c036ce90fe Add Davide Bizzarri to contributors 2025-07-02 15:37:28 +01:00
Davide Bizzarri
6163ae7cc7 fix: b2 versionAt read metadata 2025-07-01 17:36:07 +01:00
Nick Craig-Wood
cd950e30cb test: make TestWebdavInfiniteScale startup more reliable
This adds a _connect_delay=5s which allows the server to startup
properly. It also makes sure it stores its config in /tmp rather than
the current working directory.
2025-07-01 17:13:41 +01:00
Nick Craig-Wood
89dfae96ad test_all: add _connect_delay for slow starting servers 2025-07-01 17:13:11 +01:00
Nick Craig-Wood
c0a2d730a6 docs: update link for filescom 2025-06-30 11:09:37 +01:00
Nick Craig-Wood
592407230b test_all: make TestWebdav InfiniteScale integration tests run 2025-06-28 10:57:27 +01:00
Nick Craig-Wood
a7c3ddb482 test_all: make SMB with Kerberos integration tests run properly 2025-06-28 10:56:41 +01:00
Nick Craig-Wood
7a1813c531 test_all: allow an env parameter to set environment variables 2025-06-28 10:55:48 +01:00
Nick Craig-Wood
16e3d1becd Changelog updates from Version v1.70.2 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
c0f6b910ae Add Ali Zein Yousuf to contributors 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
e3bf8dc122 Add $@M@RTH_ to contributors 2025-06-27 14:35:34 +01:00
Ali Zein Yousuf
086a835131 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:22:10 +01:00
$@M@RTH_
d0668de192 s3: add Zata provider 2025-06-26 17:13:19 +01:00
Nick Craig-Wood
4df974ccc4 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
2025-06-25 21:19:17 +01:00
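
The underlying Go gotcha is easy to reproduce: a typed nil error formatted with %v calls Error on a nil receiver, and fmt reports the panic inline:

```go
package main

import "fmt"

type retryError struct{ msg string }

func (e *retryError) Error() string { return e.msg } // nil receiver dereference

func main() {
	var e *retryError // typed nil
	var err error = e // non-nil interface holding a nil pointer
	fmt.Printf("error %v\n", err)
	// fmt catches the panic inside the Error method and prints:
	// error %!v(PANIC=Error method: runtime error: invalid memory
	// address or nil pointer dereference)
}
```
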
Nick Craig-Wood
a50c903a82 docs: Remove Warp as a sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
97a8092c14 docs: add files.com as a Gold sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
526565b810 docs: add links to SecureBuild docker image 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
64804b81bd Add curlwget to contributors 2025-06-25 16:37:09 +01:00
nielash
e10f516a5e convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
2025-06-25 11:19:50 +01:00
nielash
fe62a2bb4e transform: add truncate_keep_extension and truncate_bytes
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).

This change also adds support for _keep_extension when using truncate and
truncate_bytes.
2025-06-25 11:19:50 +01:00
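
The byte/character distinction matters for multi-byte names; in Go it is the difference between len and utf8.RuneCountInString:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	name := "héllo"                           // é is 2 bytes in UTF-8
	fmt.Println(utf8.RuneCountInString(name)) // 5 characters
	fmt.Println(len(name))                    // 6 bytes

	// Truncating to a byte budget must not split a rune mid-sequence.
	limit := 3
	for limit > 0 && !utf8.RuneStart(name[limit]) {
		limit--
	}
	fmt.Println(name[:limit]) // "hé" (3 bytes), not a broken sequence
}
```
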
nielash
d6ecb949ca convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run.)
2025-06-25 11:19:50 +01:00
nielash
a845a96538 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-25 11:19:50 +01:00
curlwget
92f30fda8d docs: fix some function names in comments
Signed-off-by: curlwget <curlwget@icloud.com>
2025-06-24 15:04:45 +01:00
Nick Craig-Wood
559ef2eba8 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-23 17:43:52 +01:00
Nick Craig-Wood
17b25d7ce2 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go which was enabled by the `go 1.22` in `go.mod`
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
2025-06-23 16:39:14 +01:00
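
The check reduces to a mode-bits test that is widened on Windows; a sketch of the logic described above, not the local backend code:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// isSkippableLink reports whether an entry should be skipped under
// --skip-links. Sketch of the described logic for illustration.
func isSkippableLink(mode os.FileMode) bool {
	if mode&os.ModeSymlink != 0 {
		return true
	}
	// Since Go 1.22 junction points surface as ModeIrregular on Windows.
	return runtime.GOOS == "windows" && mode&os.ModeIrregular != 0
}

func main() {
	fi, err := os.Lstat(".")
	if err != nil {
		panic(err)
	}
	fmt.Println("skip:", isSkippableLink(fi.Mode()))
}
```
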
Nick Craig-Wood
fe3253eefd Add Marvin Rösch to contributors 2025-06-23 16:39:14 +01:00
dependabot[bot]
c38ca6b2d1 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-20 18:27:36 +01:00
Marvin Rösch
5aa9811084 copy,copyto,move,moveto: implement logger flags to store result of sync
This enables the logger flags (`--combined`, `--missing-on-src`
etc.) for the `rclone copy` and `move` commands (as well as their
`copyto` and `moveto` variants) akin to `rclone sync`. Warnings for
unsupported/wonky flag combinations are also printed, e.g. when the
destination is not traversed but `--dest-after` is specified.

- fs/operations: add reusable methods for operation logging
- cmd/sync: use reusable methods for implementing logging in sync command
- cmd: implement logging for copy/copyto/move/moveto commands
- fs/operations/operationsflags: warn about logs in conjunction with --no-traverse
- cmd: add logger docs to copy and move commands

Fixes #8115
2025-06-20 16:55:00 +01:00
Nick Craig-Wood
3cae373064 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-20 15:26:57 +01:00
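
Writing to the journal directly, rather than recursing through the log package, can use the coreos journal bindings; a minimal sketch assuming that is the kind of dedicated routine meant:

```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/journal"
)

func main() {
	// journal.Send talks to the journald socket directly, so it can be
	// called from inside a log handler without re-entering log/slog.
	if journal.Enabled() {
		_ = journal.Send("rclone: hello from the journal", journal.PriInfo, nil)
	} else {
		fmt.Println("journald not available")
	}
}
```
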
Nick Craig-Wood
b6b8526fb4 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-20 12:14:46 +01:00
Nick Craig-Wood
6f86143176 Add necaran to contributors 2025-06-20 12:14:46 +01:00
necaran
beffef2882 mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints: https://github.com/meganz/webclient/issues/103
are no longer supported by default since Go 1.22: https://tip.golang.org/doc/go1.22#minor_library_changes
This therefore assigns the cipher suites explicitly to include the one Mega needs.
2025-06-19 18:05:00 +01:00
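
Assigning cipher suites explicitly in a tls.Config looks like this; the extra suite below is only an example, the list Mega actually needs is in the linked issue:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Go 1.22 dropped RSA key exchange suites from the defaults, so a
	// client that needs one must opt in explicitly. Example suite only;
	// see the linked Mega issue for the real list.
	cfg := &tls.Config{
		CipherSuites: append(
			defaultCipherSuiteIDs(),
			tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
		),
	}
	fmt.Println(len(cfg.CipherSuites), "cipher suites enabled")
}

func defaultCipherSuiteIDs() []uint16 {
	var ids []uint16
	for _, s := range tls.CipherSuites() {
		ids = append(ids, s.ID)
	}
	return ids
}
```
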
Nick Craig-Wood
96f5bbcdd7 Changelog updates from Version v1.70.1 2025-06-19 14:25:38 +01:00
Nick Craig-Wood
27ce78bee4 Add jinjingroad to contributors 2025-06-19 14:25:29 +01:00
Ed Craig-Wood
898a59062b docs: DOI grammar error 2025-06-19 08:05:38 +02:00
albertony
c5f55243e1 docs: lib/transform: cleanup formatting 2025-06-19 08:04:46 +02:00
albertony
62a9727ab5 lib/transform: avoid empty charmap entry 2025-06-19 08:04:46 +02:00
jinjingroad
16f1e08b73 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 08:02:51 +02:00
Nick Craig-Wood
4280ec75cc convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-18 14:28:14 +01:00
Ed Craig-Wood
b064cc2116 docs: client-credentials is not supported by all backends 2025-06-18 14:06:57 +01:00
Nick Craig-Wood
f8b50f8d8f Start v1.71.0-DEV development 2025-06-18 11:31:52 +01:00
272 changed files with 15587 additions and 11748 deletions

@@ -23,15 +23,18 @@ jobs:
   build:
     if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
     timeout-minutes: 60
+    defaults:
+      run:
+        shell: bash
     strategy:
       fail-fast: false
       matrix:
-        job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
+        job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
         include:
           - job_name: linux
             os: ubuntu-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             gotags: cmount
             build_flags: '-include "^linux/"'
             check: true

@@ -42,14 +45,14 @@ jobs:
           - job_name: linux_386
             os: ubuntu-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             goarch: 386
             gotags: cmount
             quicktest: true
           - job_name: mac_amd64
             os: macos-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             gotags: 'cmount'
             build_flags: '-include "^darwin/amd64" -cgo'
             quicktest: true

@@ -58,14 +61,14 @@ jobs:
           - job_name: mac_arm64
             os: macos-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             gotags: 'cmount'
             build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
             deploy: true
           - job_name: windows
             os: windows-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             gotags: cmount
             cgo: '0'
             build_flags: '-include "^windows/"'

@@ -75,14 +78,14 @@ jobs:
           - job_name: other_os
             os: ubuntu-latest
-            go: '>=1.24.0-rc.1'
+            go: '>=1.25.0-rc.1'
             build_flags: '-exclude "^(windows/|darwin/|linux/)"'
             compile_all: true
             deploy: true
-          - job_name: go1.23
+          - job_name: go1.24
             os: ubuntu-latest
-            go: '1.23'
+            go: '1.24'
             quicktest: true
             racequicktest: true

@@ -92,7 +95,7 @@ jobs:
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
         with:
          fetch-depth: 0

@@ -103,7 +106,6 @@ jobs:
          check-latest: true
      - name: Set environment variables
-        shell: bash
        run: |
          echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
          echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV

@@ -112,7 +114,6 @@ jobs:
          if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
      - name: Install Libraries on Linux
-        shell: bash
        run: |
          sudo modprobe fuse
          sudo chmod 666 /dev/fuse

@@ -122,7 +123,6 @@ jobs:
        if: matrix.os == 'ubuntu-latest'
      - name: Install Libraries on macOS
-        shell: bash
        run: |
          # https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
          # https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008

@@ -151,7 +151,6 @@ jobs:
        if: matrix.os == 'windows-latest'
      - name: Print Go version and environment
-        shell: bash
        run: |
          printf "Using go at: $(which go)\n"
          printf "Go version: $(go version)\n"

@@ -163,29 +162,24 @@ jobs:
          env
      - name: Build rclone
-        shell: bash
        run: |
          make
      - name: Rclone version
-        shell: bash
        run: |
          rclone version
      - name: Run tests
-        shell: bash
        run: |
          make quicktest
        if: matrix.quicktest
      - name: Race test
-        shell: bash
        run: |
          make racequicktest
        if: matrix.racequicktest
      - name: Run librclone tests
-        shell: bash
        run: |
          make -C librclone/ctest test
          make -C librclone/ctest clean

@@ -193,14 +187,12 @@ jobs:
        if: matrix.librclonetest
      - name: Compile all architectures test
-        shell: bash
        run: |
          make
          make compile_all
        if: matrix.compile_all
      - name: Deploy built binaries
-        shell: bash
        run: |
          if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
          make ci_beta

@@ -219,13 +211,12 @@ jobs:
    steps:
      - name: Get runner parameters
        id: get-runner-parameters
-        shell: bash
        run: |
          echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
          echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
      - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          fetch-depth: 0

@@ -233,7 +224,7 @@ jobs:
        id: setup-go
        uses: actions/setup-go@v5
        with:
-          go-version: '>=1.23.0-rc.1'
+          go-version: '1.24'
          check-latest: true
          cache: false

@@ -291,8 +282,18 @@ jobs:
      - name: Scan for vulnerabilities
        run: govulncheck ./...
+      - name: Check Markdown format
+        uses: DavidAnson/markdownlint-cli2-action@v20
+        with:
+          globs: |
+            CONTRIBUTING.md
+            MAINTAINERS.md
+            README.md
+            RELEASE.md
+            docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
      - name: Scan edits of autogenerated files
-        run: bin/check_autogenerated_edits.py
+        run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
        if: github.event_name == 'pull_request'

  android:

@@ -303,7 +304,7 @@ jobs:
    steps:
      - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          fetch-depth: 0

@@ -311,10 +312,9 @@ jobs:
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
-          go-version: '>=1.24.0-rc.1'
+          go-version: '>=1.25.0-rc.1'
      - name: Set global environment variables
-        shell: bash
        run: |
          echo "VERSION=$(make version)" >> $GITHUB_ENV

@@ -333,7 +333,6 @@ jobs:
        run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
      - name: arm-v7a Set environment variables
-        shell: bash
        run: |
          echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
          echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

@@ -347,7 +346,6 @@ jobs:
        run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
      - name: arm64-v8a Set environment variables
-        shell: bash
        run: |
          echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
          echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

@@ -360,7 +358,6 @@ jobs:
        run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
      - name: x86 Set environment variables
-        shell: bash
        run: |
          echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
          echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

@@ -373,7 +370,6 @@ jobs:
        run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
      - name: x64 Set environment variables
-        shell: bash
        run: |
          echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
          echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

@@ -52,7 +52,7 @@ jobs:
          df -h .
      - name: Checkout Repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          fetch-depth: 0

@@ -198,7 +198,7 @@ jobs:
    steps:
      - name: Download Image Digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v5
        with:
          path: /tmp/digests
          pattern: digests-*

@@ -30,7 +30,7 @@ jobs:
          sudo rm -rf /usr/share/dotnet || true
          df -h .
      - name: Checkout master
-        uses: actions/checkout@v4
+        uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - name: Build and publish docker plugin

.markdownlint.yml Normal file
@@ -0,0 +1,43 @@
default: true
# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
style: atx
ul-style: # MD004
style: dash
hr-style: # MD035
style: ---
code-block-style: # MD046
style: fenced
code-fence-style: # MD048
style: backtick
emphasis-style: # MD049
style: asterisk
strong-style: # MD050
style: asterisk
# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
siblings_only: true
# Allow long lines in code blocks and tables.
line-length: # MD013
code_blocks: false
tables: false
# The Markdown files used to generate docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
level: 1
front_matter_title:
# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051

@@ -15,61 +15,81 @@ with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to obscure them
- A log of the command with the `-vv` flag (e.g. output from
`rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to
obscure them
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues)
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Then [install Git](https://git-scm.com/downloads) and set your public contribution
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
Next open your terminal, change directory to your preferred folder and initialise
your local rclone project:
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```sh
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
Note that most of the terminal commands in the rest of this guide must be
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
go version
```sh
go version
```
Great, you can now compile and execute your own version of rclone:
go build
./rclone version
```sh
go build
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature
git checkout -b my-new-feature
```sh
git checkout -b my-new-feature
```
And get hacking.
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
You may like one of the [popular editors/IDEs for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins)
and a quick look at the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the code you changed
When ready - test the affected functionality and run the unit tests for the
code you changed
cd folder/with/changed/files
go test -v
```sh
cd folder/with/changed/files
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
This is typically enough if you made a simple bug fix; otherwise, please read
the rclone [testing](#testing) section too.
Make sure you
@@ -79,14 +99,19 @@ Make sure you
When you are done with that push your changes to GitHub:
git push -u origin my-new-feature
```sh
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
Your changes will then get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master)
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub
@@ -94,87 +119,118 @@ You may sometimes be asked to [base your changes on the latest master](#basing-y
Follow the guideline for [commit messages](#commit-messages) and then:
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```sh
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using:
git commit --amend
```sh
git commit --amend
```
If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you amend commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
Note that you are about to rewrite the GitHub history of your branch. It is good
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by:
git push --force origin my-new-feature
```sh
git push --force origin my-new-feature
```
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```sh
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you rebase commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits ###
### Squashing your commits
To combine your commits into one commit:
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```sh
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit:
git commit # To commit the undone commits as one
```sh
git commit # To commit the undone commits as one
```
otherwise, you may roll back using:
git reflog # To check that HEAD{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```sh
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you squash commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
Tip: You may like to use `git rebase -i master` if you are experienced or have a
more complex situation.
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests as get run in the CI which can be very helpful.
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then
you can run the same tests that are run in the CI, which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding standards. These tests also check for easy mistakes to make (like forgetting to check an error return).
Using these tests ensures that the whole rclone codebase uses the same coding
standards. These tests also catch easy mistakes (like forgetting to check an
error return).
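
For example, assuming golangci-lint is installed and on your `PATH`, you can
lint everything as the CI does, or narrow the run to the package you are
editing:

```sh
# lint the whole module, as the CI does
golangci-lint run ./...

# or just the package you are working on
golangci-lint run ./backend/drive/...
```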
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
```sh
go test -v ./...
```
You can also use `make`, if supported by your platform
make quicktest
```sh
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
The quicktest is [automatically run by GitHub](#github-continuous-integration)
when you push your branch to GitHub.
### Backend testing
@@ -190,41 +246,51 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
```sh
cd backend/drive
go test -v
```
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
```sh
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
cd fs/operations
go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```sh
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make check
make test
```sh
make check
make test
```
The commands may require some extra go packages which you can install with
make build_dep
```sh
make build_dep
```
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
find the results at <https://pub.rclone.org/integration-tests/>
## Code Organisation
@@ -232,46 +298,48 @@ Rclone code is organised into a small number of top level directories
with modules beneath.
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only - everything else is autogenerated
- command - these are auto-generated - edit the corresponding .go file
- content - adjust these docs only, except for files or portions marked
autogenerated, where the corresponding .go file must be edited instead;
everything else is autogenerated
- commands - these are auto-generated; edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
@@ -279,6 +347,36 @@ with modules beneath.
If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation
that runs automatically on pull requests, to enforce standards and consistency.
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same
checks while writing.
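
If you want to run the same check locally, a minimal sketch (assuming the
Node-based `markdownlint-cli` frontend is available) is:

```sh
# lint all Markdown files, picking up the repo's .markdownlint.yml config
npx markdownlint-cli "**/*.md"
```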
HTML pages, served as website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
a different algorithm is currently used for generating header anchors
than the one GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means when
linking to a header with text `--config string` we therefore need to use the
link `#config-string` in our Markdown source, which will not work in GitHub's
preview where `#--config-string` would be the correct link.
Most of the documentation is written directly in text files with extension
`.md`, mainly within folder `docs/content`. Note that several such files
are autogenerated (e.g. the command documentation, and `docs/content/flags.md`),
or contain autogenerated portions (e.g. the backend documentation under
`docs/content/commands`). These are marked with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
@@ -287,39 +385,40 @@ If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
- Start with the most important information about the option,
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
- Each example value have their own `Help:` field, but they are treated
a bit different than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of
countries, it looks better without an ending period/full stop character.
- Each example value has its own `Help:` field, but they are treated
a bit differently than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like names of
countries, it looks better without an ending period/full stop character.
The only documentation you need to edit are the `docs/content/*.md`
files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).
Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
line, without a period/full stop character at the end, as it will be
combined unmodified with other information (such as any default value).
If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
## Making a release
@@ -350,13 +449,13 @@ change will get linked into the issue.
Here is an example of a short commit message:
```
```text
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```
```text
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
@@ -379,7 +478,9 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
go get github.com/ncw/new_dependency
```sh
go get github.com/ncw/new_dependency
```
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
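
For example, pinning the hypothetical dependency above to an explicit version
would look like this (normally unnecessary, as noted):

```sh
# request a specific tagged version rather than the latest
go get github.com/ncw/new_dependency@v1.2.3
```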
@@ -391,7 +492,9 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run
go get golang.org/x/crypto
```sh
go get golang.org/x/crypto
```
Check in a single commit as above.
@@ -434,25 +537,38 @@ remote or an fs.
### Getting going
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- box is a good one to start from if you have a directory-based remote (and
shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
- HTTP based remotes are easiest to maintain if they use rclone's
[lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
make sure we can encode any path name and `rclone info` to help determine the
encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
### Guidelines for a speedy merge
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **Don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
if you are implementing a REST-like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
if your backend is HTTP based - this adds features like `--dump bodies`,
`--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function
names, layout, structure. **Don't** move stuff around and **Don't** delete the
comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
- **Remember** we have >50 backends to maintain so keeping them as similar as
possible to each other is a high priority!
### Unit tests
@@ -463,19 +579,20 @@ remote or an fs.
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
- go install ./...
- test_all -backends remote
- Once you've done that then you can use the integration test framework from
the project root:
- go install ./...
- test_all -backends remote
Or if you want to run the integration tests manually:
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
@@ -487,10 +604,13 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/remote.md` - main docs page (note the backend options are
automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your
reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -506,21 +626,22 @@ It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Add the provider to the `setQuirks` function - see the documentation there.
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Make sure this is in alphabetical order in the `Providers` section.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in github
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
@@ -541,38 +662,39 @@ For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.
### Usage
- Naming
- Plugins names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source
of rclone)
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
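
Putting that together, a minimal sketch of building and installing a plugin
(the plugin name and install directory below are illustrative, following the
naming scheme above):

```sh
# build the plugin, using the librcloneplugin_KIND_NAME.so naming pattern
go build -buildmode=plugin -o librcloneplugin_backend_pifs.so .

# any directory works - rclone loads plugins from $RCLONE_PLUGIN_PATH
export RCLONE_PLUGIN_PATH="$HOME/.rclone/plugins"
mkdir -p "$RCLONE_PLUGIN_PATH"
cp librcloneplugin_backend_pifs.so "$RCLONE_PLUGIN_PATH/"
```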
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
## Keeping a backend or command out of tree
Rclone was designed to be modular so it is very easy to keep a backend
@@ -585,6 +707,6 @@ add them out of tree.
This may be easier than using a plugin and is supported on all
platforms not just macOS and Linux.
This is explained further in https://github.com/rclone/rclone_out_of_tree_example
This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).


@@ -1,4 +1,4 @@
# Maintainers guide for rclone #
# Maintainers guide for rclone
Current active maintainers of rclone are:
@@ -24,80 +24,108 @@ Current active maintainers of rclone are:
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**
## This is a work in progress draft
This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.
## Triaging Tickets ##
## Triaging Tickets
When a ticket comes in it should be triaged. This means it should be classified by adding labels and placed into a milestone. Quite a lot of tickets need a bit of back and forth to determine whether it is a valid ticket so tickets may remain without labels or milestone for a while.
When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.
Rclone uses labels like this:
* `bug` - a definitely verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
* `help` wanted - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released
* `question` - not a `bug` or `enhancement` - direct to the forum for next time
* `Remote: XXX` - which rclone backend this affects
* `thinking` - not decided on the course of action yet
- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
these get shown to new visitors to the project
- `help wanted` - mark these if you find a self-contained issue - these get
shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.
When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
the moment
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.
## Closing Tickets ##
## Closing Tickets
Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.
## Pull requests ##
## Pull requests
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.
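
Expressed as commands, that post-merge routine is roughly:

```sh
git checkout master
git pull                # pick up the merged commit
bin/update-authors.py   # refresh the authors file
git push
```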
Sometimes pull requests need to be left open for a while - this especially true of contributions of new backends which take a long time to get right.
Sometimes pull requests need to be left open for a while - this is especially
true of contributions of new backends which take a long time to get right.
## Merges ##
## Merges
If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
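
For instance, a sketch of the fast-forward flow, rebasing first when the merge
is not clean:

```sh
git merge --ff-only branch-name   # refuses rather than creating a merge commit
# if that fails, rebase the branch onto master and retry:
git checkout branch-name
git rebase master
git checkout master
git merge --ff-only branch-name
```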
## Release cycle ##
## Release cycle
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.
High impact regressions should be fixed before the next release.
Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.
Towards the end of the release cycle try not to merge anything too big so let things settle down.
Towards the end of the release cycle try not to merge anything too big, to let
things settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming, often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
## Mailing list
There is now an invite-only mailing list for rclone developers `rclone-dev` on google groups.
There is now an invite-only mailing list for rclone developers `rclone-dev` on
Google Groups.
## TODO ##
## TODO
I should probably make a dev@rclone.org to register with cloud providers.
I should probably make a <dev@rclone.org> to register with cloud providers.

MANUAL.html generated

@@ -81,7 +81,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Jun 27, 2025</p>
<p class="date">Jun 17, 2025</p>
</header>
<h1 id="name">NAME</h1>
<p>rclone - manage files on cloud storage</p>
@@ -426,7 +426,6 @@ choco install rclone</code></pre>
<p><a href="https://repology.org/project/rclone/versions"><img src="https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3" alt="Packaging status" /></a></p>
<h2 id="docker">Docker installation</h2>
<p>The rclone developers maintain a <a href="https://hub.docker.com/r/rclone/rclone">docker image for rclone</a>.</p>
<p><strong>Note:</strong> We also now offer a paid version of rclone with enterprise-grade security and zero CVEs through our partner <a href="https://securebuild.com/blog/introducing-securebuild">SecureBuild</a>. If you are interested, check out their website and the <a href="https://securebuild.com/images/rclone">Rclone SecureBuild Image</a>.</p>
<p>These images are built as part of the release process based on a minimal Alpine Linux.</p>
<p>The <code>:latest</code> tag will always point to the latest stable release. You can use the <code>:beta</code> tag to get the latest build from master. You can also use version tags, e.g. <code>:1.49.1</code>, <code>:1.49</code> or <code>:1</code>.</p>
<pre><code>$ docker pull rclone/rclone:latest
@@ -2683,15 +2682,15 @@ X-User-Defined </code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,command=echo&quot;
// Output: stories/The Quick Brown Fox!.txt</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{YYYYMMDD}&quot;
// Output: stories/The Quick Brown Fox!-20250618</code></pre>
// Output: stories/The Quick Brown Fox!-20250617</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{macfriendlytime}&quot;
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM</code></pre>
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,regex=[\\.\\w]/ab&quot;
// Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre>
<p>Multiple transformations can be used in sequence, applied in the order they are specified on the command line.</p>
<p>The <code>--name-transform</code> flag is also available in <code>sync</code>, <code>copy</code>, and <code>move</code>.</p>
<h1 id="files-vs-directories">Files vs Directories</h1>
<p>By default <code>--name-transform</code> will only apply to file names. This means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just directories. To choose which part of the file path is affected some tags can be added to the <code>--name-transform</code>.</p>
<p>By default <code>--name-transform</code> will only apply to file names. This means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just directories. To choose which part of the file path is affected some tags can be added to the <code>--name-transform</code></p>
<table>
<colgroup>
<col style="width: 50%" />
@@ -2719,7 +2718,7 @@ X-User-Defined </code></pre>
</tbody>
</table>
<p>This is used by adding the tag into the transform name like this: <code>--name-transform file,prefix=ABC</code> or <code>--name-transform dir,prefix=DEF</code>.</p>
<p>For some conversions using all is more likely to be useful, for example <code>--name-transform all,nfc</code>.</p>
<p>For some conversions using all is more likely to be useful, for example <code>--name-transform all,nfc</code></p>
<p>Note that <code>--name-transform</code> may not add path separators <code>/</code> to the name. This will cause an error.</p>
<h1 id="ordering-and-conflicts">Ordering and Conflicts</h1>
<ul>
@@ -2740,7 +2739,16 @@ X-User-Defined </code></pre>
</ul>
<h1 id="race-conditions-and-non-deterministic-behavior">Race Conditions and Non-Deterministic Behavior</h1>
<p>Some transformations, such as <code>replace=old:new</code>, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. * If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. * Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.</p>
<p>To minimize risks, users should: * Carefully review transformations that may introduce conflicts. * Use <code>--dry-run</code> to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations). * Avoid transformations that cause multiple distinct source files to map to the same destination name. * Consider disabling concurrency with <code>--transfers=1</code> if necessary. * Certain transformations (e.g. <code>prefix</code>) will have a multiplying effect every time they are used. Avoid these when using <code>bisync</code>.</p>
<ul>
<li>To minimize risks, users should:
<ul>
<li>Carefully review transformations that may introduce conflicts.</li>
<li>Use <code>--dry-run</code> to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).</li>
<li>Avoid transformations that cause multiple distinct source files to map to the same destination name.</li>
<li>Consider disabling concurrency with <code>--transfers=1</code> if necessary.</li>
<li>Certain transformations (e.g. <code>prefix</code>) will have a multiplying effect every time they are used. Avoid these when using <code>bisync</code>.</li>
</ul></li>
</ul>
<pre><code>rclone convmv dest:path --name-transform XXX [flags]</code></pre>
<h2 id="options-48">Options</h2>
<pre><code> --create-empty-src-dirs Create empty source dirs on destination after move
@@ -13231,7 +13239,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.2&quot;)</code></pre>
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.0&quot;)</code></pre>
<h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p>
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -13653,7 +13661,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don&#39;t check the upload is OK
@@ -21361,7 +21368,6 @@ y/e/d&gt; y</code></pre>
<h4 id="box-client-credentials">--box-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -22616,7 +22622,6 @@ y/e/d&gt; y</code></pre>
<h4 id="sharefile-client-credentials">--sharefile-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -23390,7 +23395,7 @@ upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</c
<p>See the <a href="https://rclone.org/docs/#metadata">metadata</a> docs for more info.</p>
<h1 id="doi">DOI</h1>
<p>The DOI remote is a read only remote for reading files from digital object identifiers (DOI).</p>
<p>Currently, the DOI backend supports DOIs hosted with: - <a href="https://inveniosoftware.org/products/rdm/">InvenioRDM</a> - <a href="https://zenodo.org">Zenodo</a> - <a href="https://data.caltech.edu">CaltechDATA</a> - <a href="https://inveniosoftware.org/showcase/">Other InvenioRDM repositories</a> - <a href="https://dataverse.org">Dataverse</a> - <a href="https://dataverse.harvard.edu">Harvard Dataverse</a> - <a href="https://dataverse.org/installations">Other Dataverse repositories</a></p>
<p>Currently, the DOI backend supports supports DOIs hosted with: - <a href="https://inveniosoftware.org/products/rdm/">InvenioRDM</a> - <a href="https://zenodo.org">Zenodo</a> - <a href="https://data.caltech.edu">CaltechDATA</a> - <a href="https://inveniosoftware.org/showcase/">Other InvenioRDM repositories</a> - <a href="https://dataverse.org">Dataverse</a> - <a href="https://dataverse.harvard.edu">Harvard Dataverse</a> - <a href="https://dataverse.org/installations">Other Dataverse repositories</a></p>
<p>Paths are specified as <code>remote:path</code></p>
<p>Paths may be as deep as required, e.g. <code>remote:directory/subdirectory</code>.</p>
<h2 id="configuration-12">Configuration</h2>
@@ -23751,7 +23756,6 @@ y/e/d&gt; y</code></pre>
<h4 id="dropbox-client-credentials">--dropbox-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -24720,16 +24724,6 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="ftp-http-proxy">--ftp-http-proxy</h4>
<p>URL for HTTP CONNECT proxy</p>
<p>Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.</p>
<p>Properties:</p>
<ul>
<li>Config: http_proxy</li>
<li>Env Var: RCLONE_FTP_HTTP_PROXY</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="ftp-no-check-upload">--ftp-no-check-upload</h4>
<p>Don't check the upload is OK</p>
<p>Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected.</p>
@@ -25657,7 +25651,6 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]</code></pre>
<h4 id="gcs-client-credentials">--gcs-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -26361,7 +26354,6 @@ trashed=false and &#39;c&#39; in parents</code></pre>
<h4 id="drive-client-credentials">--drive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -27437,7 +27429,6 @@ y/e/d&gt; y</code></pre>
<h4 id="gphotos-client-credentials">--gphotos-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -27605,13 +27596,6 @@ y/e/d&gt; y</code></pre>
<p>Rclone cannot delete files anywhere except under <code>album</code>.</p>
<h3 id="deleting-albums">Deleting albums</h3>
<p>The Google Photos API does not support deleting albums - see <a href="https://issuetracker.google.com/issues/135714733">bug #135714733</a>.</p>
<h2 id="making-your-own-client_id-1">Making your own client_id</h2>
<p>When you use rclone with Google photos in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google.</p>
<p>If there is a problem with this client_id (eg quota too low or the client_id stops working) then you can make your own.</p>
<p>Please follow the steps in <a href="https://rclone.org/drive/#making-your-own-client-id">the google drive docs</a>. You will need these scopes instead of the drive ones detailed:</p>
<pre><code>https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata</code></pre>
<h1 id="hasher">Hasher</h1>
<p>Hasher is a special overlay backend to create remotes which handle checksums for other remotes. It's main functions include: - Emulate hash types unimplemented by backends - Cache checksums to help with slow hashing of large local or (S)FTP files - Warm up checksum cache from external SUM files</p>
<h2 id="getting-started-1">Getting started</h2>
@@ -28167,7 +28151,6 @@ rclone lsd remote:/users/test/path</code></pre>
<h4 id="hidrive-client-credentials">--hidrive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -29420,7 +29403,6 @@ y/e/d&gt; y</code></pre>
<h4 id="jottacloud-client-credentials">--jottacloud-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -30180,7 +30162,6 @@ y/e/d&gt; y</code></pre>
<h4 id="mailru-client-credentials">--mailru-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -32083,10 +32064,7 @@ y/e/d&gt; y</code></pre>
<h4 id="creating-client-id-for-onedrive-personal">Creating Client ID for OneDrive Personal</h4>
<p>To create your own Client ID, please follow these steps:</p>
<ol type="1">
<li>Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the <code>Add</code> menu click <code>App registration</code>.
<ul>
<li>If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.</li>
</ul></li>
<li>Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click <code>New registration</code>.</li>
<li>Enter a name for your app, choose account type <code>Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)</code>, select <code>Web</code> in <code>Redirect URI</code>, then type (do not copy and paste) <code>http://localhost:53682/</code> and click Register. Copy and keep the <code>Application (client) ID</code> under the app name for later use.</li>
<li>Under <code>manage</code> select <code>Certificates &amp; secrets</code>, click <code>New client secret</code>. Enter a description (can be anything) and set <code>Expires</code> to 24 months. Copy and keep that secret <em>Value</em> for later use (you <em>won't</em> be able to see this value afterwards).</li>
<li>Under <code>manage</code> select <code>API permissions</code>, click <code>Add a permission</code> and select <code>Microsoft Graph</code> then select <code>delegated permissions</code>.</li>
@@ -32324,7 +32302,6 @@ y/e/d&gt; y</code></pre>
<h4 id="onedrive-client-credentials">--onedrive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -32658,75 +32635,75 @@ rclone rc vfs/refresh recursive=true</code></pre>
<p>Permissions are also supported, if <code>--onedrive-metadata-permissions</code> is set. The accepted values for <code>--onedrive-metadata-permissions</code> are "<code>read</code>", "<code>write</code>", "<code>read,write</code>", and "<code>off</code>" (the default). "<code>write</code>" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "<code>read,write</code>" instead of "<code>write</code>" if you wish to update/remove permissions.</p>
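<p>As an illustrative sketch (the remote and path are placeholders), the permissions then appear as part of the metadata when listing with <code>-M</code>:</p>
<pre><code>rclone lsjson --stat -M --onedrive-metadata-permissions read remote:path/to/file.txt</code></pre>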
<p>Permissions are read/written in JSON format using the same schema as the <a href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online">OneDrive API</a>, which differs slightly between OneDrive Personal and Business.</p>
<p>Example for OneDrive Personal:</p>
<div class="sourceCode" id="cb1405"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1405-1"><a href="#cb1405-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1405-2"><a href="#cb1405-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1405-3"><a href="#cb1405-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;1234567890ABC!123&quot;</span><span class="fu">,</span></span>
<span id="cb1405-4"><a href="#cb1405-4" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1405-5"><a href="#cb1405-5" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1405-6"><a href="#cb1405-6" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1405-7"><a href="#cb1405-7" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1405-8"><a href="#cb1405-8" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1405-9"><a href="#cb1405-9" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1405-10"><a href="#cb1405-10" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1405-11"><a href="#cb1405-11" aria-hidden="true"></a> <span class="dt">&quot;invitation&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1405-12"><a href="#cb1405-12" aria-hidden="true"></a> <span class="dt">&quot;email&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1405-13"><a href="#cb1405-13" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1405-14"><a href="#cb1405-14" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1405-15"><a href="#cb1405-15" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://1drv.ms/t/s!1234567890ABC&quot;</span></span>
<span id="cb1405-16"><a href="#cb1405-16" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1405-17"><a href="#cb1405-17" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1405-18"><a href="#cb1405-18" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1405-19"><a href="#cb1405-19" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1405-20"><a href="#cb1405-20" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;s!1234567890ABC&quot;</span></span>
<span id="cb1405-21"><a href="#cb1405-21" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1405-22"><a href="#cb1405-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>Example for OneDrive Business:</p>
<div class="sourceCode" id="cb1407"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1407-1"><a href="#cb1407-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1407-2"><a href="#cb1407-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1407-3"><a href="#cb1407-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;48d31887-5fad-4d73-a9f5-3c356e68a038&quot;</span><span class="fu">,</span></span>
<span id="cb1407-4"><a href="#cb1407-4" aria-hidden="true"></a> <span class="dt">&quot;grantedToIdentities&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-5"><a href="#cb1407-5" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1407-6"><a href="#cb1407-6" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-7"><a href="#cb1407-7" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1407-8"><a href="#cb1407-8" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-9"><a href="#cb1407-9" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1407-10"><a href="#cb1407-10" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1407-11"><a href="#cb1407-11" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1407-12"><a href="#cb1407-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-13"><a href="#cb1407-13" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-14"><a href="#cb1407-14" aria-hidden="true"></a> <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;view&quot;</span><span class="fu">,</span></span>
<span id="cb1407-15"><a href="#cb1407-15" aria-hidden="true"></a> <span class="dt">&quot;scope&quot;</span><span class="fu">:</span> <span class="st">&quot;users&quot;</span><span class="fu">,</span></span>
<span id="cb1407-16"><a href="#cb1407-16" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s&quot;</span></span>
<span id="cb1407-17"><a href="#cb1407-17" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-18"><a href="#cb1407-18" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-19"><a href="#cb1407-19" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1407-20"><a href="#cb1407-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-21"><a href="#cb1407-21" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;u!LKj1lkdlals90j1nlkascl&quot;</span></span>
<span id="cb1407-22"><a href="#cb1407-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
<span id="cb1407-23"><a href="#cb1407-23" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1407-24"><a href="#cb1407-24" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;5D33DD65C6932946&quot;</span><span class="fu">,</span></span>
<span id="cb1407-25"><a href="#cb1407-25" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-26"><a href="#cb1407-26" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-27"><a href="#cb1407-27" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;John Doe&quot;</span><span class="fu">,</span></span>
<span id="cb1407-28"><a href="#cb1407-28" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;efee1b77-fb3b-4f65-99d6-274c11914d12&quot;</span></span>
<span id="cb1407-29"><a href="#cb1407-29" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-30"><a href="#cb1407-30" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1407-31"><a href="#cb1407-31" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1407-32"><a href="#cb1407-32" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-33"><a href="#cb1407-33" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-34"><a href="#cb1407-34" aria-hidden="true"></a> <span class="st">&quot;owner&quot;</span></span>
<span id="cb1407-35"><a href="#cb1407-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-36"><a href="#cb1407-36" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U&quot;</span></span>
<span id="cb1407-37"><a href="#cb1407-37" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1407-38"><a href="#cb1407-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<span id="cb1406-3"><a href="#cb1406-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;48d31887-5fad-4d73-a9f5-3c356e68a038&quot;</span><span class="fu">,</span></span>
<span id="cb1406-4"><a href="#cb1406-4" aria-hidden="true"></a> <span class="dt">&quot;grantedToIdentities&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1406-5"><a href="#cb1406-5" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1406-6"><a href="#cb1406-6" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-7"><a href="#cb1406-7" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1406-8"><a href="#cb1406-8" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1406-9"><a href="#cb1406-9" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1406-10"><a href="#cb1406-10" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1406-11"><a href="#cb1406-11" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1406-12"><a href="#cb1406-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1406-13"><a href="#cb1406-13" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-14"><a href="#cb1406-14" aria-hidden="true"></a> <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;view&quot;</span><span class="fu">,</span></span>
<span id="cb1406-15"><a href="#cb1406-15" aria-hidden="true"></a> <span class="dt">&quot;scope&quot;</span><span class="fu">:</span> <span class="st">&quot;users&quot;</span><span class="fu">,</span></span>
<span id="cb1406-16"><a href="#cb1406-16" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s&quot;</span></span>
<span id="cb1406-17"><a href="#cb1406-17" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1406-18"><a href="#cb1406-18" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1406-19"><a href="#cb1406-19" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1406-20"><a href="#cb1406-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1406-21"><a href="#cb1406-21" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;u!LKj1lkdlals90j1nlkascl&quot;</span></span>
<span id="cb1406-22"><a href="#cb1406-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
<span id="cb1406-23"><a href="#cb1406-23" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1406-24"><a href="#cb1406-24" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;5D33DD65C6932946&quot;</span><span class="fu">,</span></span>
<span id="cb1406-25"><a href="#cb1406-25" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-26"><a href="#cb1406-26" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-27"><a href="#cb1406-27" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;John Doe&quot;</span><span class="fu">,</span></span>
<span id="cb1406-28"><a href="#cb1406-28" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;efee1b77-fb3b-4f65-99d6-274c11914d12&quot;</span></span>
<span id="cb1406-29"><a href="#cb1406-29" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1406-30"><a href="#cb1406-30" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1406-31"><a href="#cb1406-31" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1406-32"><a href="#cb1406-32" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1406-33"><a href="#cb1406-33" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1406-34"><a href="#cb1406-34" aria-hidden="true"></a> <span class="st">&quot;owner&quot;</span></span>
<span id="cb1406-35"><a href="#cb1406-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1406-36"><a href="#cb1406-36" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U&quot;</span></span>
<span id="cb1406-37"><a href="#cb1406-37" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1406-38"><a href="#cb1406-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>To write permissions, pass in a "permissions" metadata key using this same format. The <a href="https://rclone.org/docs/#metadata-mapper"><code>--metadata-mapper</code></a> tool can be very helpful for this.</p>
<p>When adding permissions, an email address can be provided in the <code>User.ID</code> or <code>DisplayName</code> properties of <code>grantedTo</code> or <code>grantedToIdentities</code>. Alternatively, an ObjectID can be provided in <code>User.ID</code>. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if <code>Link.Scope</code> is set to <code>"anonymous"</code>.</p>
<p>Example request to add a "read" permission with <code>--metadata-mapper</code>:</p>
<div class="sourceCode" id="cb1408"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1408-1"><a href="#cb1408-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb1408-2"><a href="#cb1408-2" aria-hidden="true"></a> <span class="dt">&quot;Metadata&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1408-3"><a href="#cb1408-3" aria-hidden="true"></a> <span class="dt">&quot;permissions&quot;</span><span class="fu">:</span> <span class="st">&quot;[{</span><span class="ch">\&quot;</span><span class="st">grantedToIdentities</span><span class="ch">\&quot;</span><span class="st">:[{</span><span class="ch">\&quot;</span><span class="st">user</span><span class="ch">\&quot;</span><span class="st">:{</span><span class="ch">\&quot;</span><span class="st">id</span><span class="ch">\&quot;</span><span class="st">:</span><span class="ch">\&quot;</span><span class="st">ryan@contoso.com</span><span class="ch">\&quot;</span><span class="st">}}],</span><span class="ch">\&quot;</span><span class="st">roles</span><span class="ch">\&quot;</span><span class="st">:[</span><span class="ch">\&quot;</span><span class="st">read</span><span class="ch">\&quot;</span><span class="st">]}]&quot;</span></span>
<span id="cb1408-4"><a href="#cb1408-4" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1408-5"><a href="#cb1408-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
<div class="sourceCode" id="cb1407"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1407-1"><a href="#cb1407-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb1407-2"><a href="#cb1407-2" aria-hidden="true"></a> <span class="dt">&quot;Metadata&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-3"><a href="#cb1407-3" aria-hidden="true"></a> <span class="dt">&quot;permissions&quot;</span><span class="fu">:</span> <span class="st">&quot;[{</span><span class="ch">\&quot;</span><span class="st">grantedToIdentities</span><span class="ch">\&quot;</span><span class="st">:[{</span><span class="ch">\&quot;</span><span class="st">user</span><span class="ch">\&quot;</span><span class="st">:{</span><span class="ch">\&quot;</span><span class="st">id</span><span class="ch">\&quot;</span><span class="st">:</span><span class="ch">\&quot;</span><span class="st">ryan@contoso.com</span><span class="ch">\&quot;</span><span class="st">}}],</span><span class="ch">\&quot;</span><span class="st">roles</span><span class="ch">\&quot;</span><span class="st">:[</span><span class="ch">\&quot;</span><span class="st">read</span><span class="ch">\&quot;</span><span class="st">]}]&quot;</span></span>
<span id="cb1407-4"><a href="#cb1407-4" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1407-5"><a href="#cb1407-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
<p>Note that adding a permission can fail if a conflicting permission already exists for the file/folder.</p>
<p>To update an existing permission, include both the Permission ID and the new <code>roles</code> to be assigned. <code>roles</code> is the only property that can be changed.</p>
<p>To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all). Note that the <code>owner</code> role will be ignored, as it cannot be removed.</p>
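<p>Putting this together, a hedged sketch of a write (the script name is hypothetical): copy with metadata enabled and a user-supplied mapper that injects or rewrites the <code>permissions</code> key in the format shown above:</p>
<pre><code>rclone copyto file.txt remote:path/file.txt -M --onedrive-metadata-permissions read,write --metadata-mapper ./set-permissions.sh</code></pre>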
@@ -35100,7 +35077,6 @@ y/e/d&gt; y</code></pre>
<h4 id="pcloud-client-credentials">--pcloud-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -35710,7 +35686,6 @@ y/e/d&gt; </code></pre>
<h4 id="premiumizeme-client-credentials">--premiumizeme-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -36102,7 +36077,6 @@ e/n/d/r/c/s/q&gt; q</code></pre>
<h4 id="putio-client-credentials">--putio-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -39125,7 +39099,6 @@ y/e/d&gt; y</code></pre>
<h4 id="yandex-client-credentials">--yandex-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -39349,7 +39322,6 @@ y/e/d&gt; </code></pre>
<h4 id="zoho-client-credentials">--zoho-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -40090,45 +40062,6 @@ $ tree /tmp/c
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog-1">Changelog</h1>
<h2 id="v1.70.2---2025-06-27">v1.70.2 - 2025-06-27</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>convmv: Make --dry-run logs less noisy (nielash)</li>
<li>sync: Avoid copying dir metadata to itself (nielash)</li>
<li>build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93 (dependabot[bot])</li>
<li>convmv: Fix moving to unicode-equivalent name (nielash)</li>
<li>log: Fix deadlock when using systemd logging (Nick Craig-Wood)</li>
<li>pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)</li>
<li>doc fixes (Ali Zein Yousuf, Nick Craig-Wood)</li>
</ul></li>
<li>Local
<ul>
<li>Fix --skip-links on Windows when skipping Junction points (Nick Craig-Wood)</li>
</ul></li>
<li>Combine
<ul>
<li>Fix directory not found errors with ListP interface (Nick Craig-Wood)</li>
</ul></li>
<li>Mega
<ul>
<li>Fix tls handshake failure (necaran)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Fix uploads fail with "aws-chunked encoding is not supported" error (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.70.1---2025-06-19">v1.70.1 - 2025-06-19</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>convmv: Fix spurious "error running command echo" on Windows (Nick Craig-Wood)</li>
<li>doc fixes (albertony, Ed Craig-Wood, jinjingroad)</li>
</ul></li>
</ul>
<h2 id="v1.70.0---2025-06-17">v1.70.0 - 2025-06-17</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0">See commits</a></p>
<ul>

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jun 27, 2025
% Jun 17, 2025
# NAME
@@ -512,12 +512,6 @@ package is here.
The rclone developers maintain a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
**Note:** We also now offer a paid version of rclone with
enterprise-grade security and zero CVEs through our partner
[SecureBuild](https://securebuild.com/blog/introducing-securebuild).
If you are interested, check out their website and the [Rclone
SecureBuild Image](https://securebuild.com/images/rclone).
These images are built as part of the release process based on a
minimal Alpine Linux.
@@ -4458,12 +4452,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250618
// Output: stories/The Quick Brown Fox!-20250617
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
```
```
@@ -4471,15 +4465,17 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
```
Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
# Files vs Directories
# Files vs Directories ##
By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
However some of the transforms would be better applied to the whole path or just directories.
To choose which part of the file path is affected some tags can be added to the `--name-transform`.
To choose which part of the file path is affected some tags can be added to the `--name-transform`
| Tag | Effect |
|------|------|
@@ -4489,11 +4485,11 @@ To choose which which part of the file path is affected some tags can be added t
This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`.
For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`
Note that `--name-transform` may not add path separators `/` to the name. This will cause an error.
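As a hedged illustration combining the tags above (the path is a placeholder), several transforms can be supplied in one command and are applied in order:

```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,nfc" --name-transform "file,prefix=ABC" --dry-run
```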
# Ordering and Conflicts
# Ordering and Conflicts ##
* Transformations will be applied in the order specified by the user.
* If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
@@ -4508,19 +4504,19 @@ user, allowing for intentional use cases (e.g., trimming one prefix before addin
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using `--dry-run` before execution.
# Race Conditions and Non-Deterministic Behavior
# Race Conditions and Non-Deterministic Behavior ##
Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
* Consider disabling concurrency with `--transfers=1` if necessary.
* Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`.
* To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
* Consider disabling concurrency with `--transfers=1` if necessary.
* Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`.
@@ -22404,7 +22400,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
```
@@ -22886,7 +22882,6 @@ Backend-only flags (these can be set in the config file also).
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK
@@ -33554,8 +33549,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -35375,8 +35368,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -36640,7 +36631,7 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
The DOI remote is a read only remote for reading files from digital object identifiers (DOI).
Currently, the DOI backend supports DOIs hosted with:
Currently, the DOI backend supports supports DOIs hosted with:
- [InvenioRDM](https://inveniosoftware.org/products/rdm/)
- [Zenodo](https://zenodo.org)
- [CaltechDATA](https://data.caltech.edu)
@@ -37119,8 +37110,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -38559,20 +38548,6 @@ Properties:
- Type: string
- Required: false
#### --ftp-http-proxy
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Properties:
- Config: http_proxy
- Env Var: RCLONE_FTP_HTTP_PROXY
- Type: string
- Required: false
#### --ftp-no-check-upload
Don't check the upload is OK
@@ -39616,8 +39591,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -40431,8 +40404,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -41968,8 +41939,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -42277,25 +42246,6 @@ Rclone cannot delete files anywhere except under `album`.
The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).
## Making your own client_id
When you use rclone with Google photos in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per
second that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
You will need these scopes instead of the drive ones detailed:
```
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
```
# Hasher
Hasher is a special overlay backend to create remotes which handle
@@ -43178,8 +43128,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -44749,8 +44697,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -45627,8 +45573,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -48362,8 +48306,7 @@ For example, you might see throttling.
To create your own Client ID, please follow these steps:
1. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the `Add` menu click `App registration`.
* If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.
1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`.
2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use.
3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards).
4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`.
@@ -48616,8 +48559,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -52246,8 +52187,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -53049,8 +52988,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -53647,8 +53584,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -58069,8 +58004,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -58371,8 +58304,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -59140,35 +59071,6 @@ Options:
# Changelog
## v1.70.2 - 2025-06-27
[See commits](https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)
* Bug Fixes
* convmv: Make --dry-run logs less noisy (nielash)
* sync: Avoid copying dir metadata to itself (nielash)
* build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93 (dependabot[bot])
* convmv: Fix moving to unicode-equivalent name (nielash)
* log: Fix deadlock when using systemd logging (Nick Craig-Wood)
* pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
* doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
* Local
* Fix --skip-links on Windows when skipping Junction points (Nick Craig-Wood)
* Combine
* Fix directory not found errors with ListP interface (Nick Craig-Wood)
* Mega
* Fix tls handshake failure (necaran)
* Pikpak
* Fix uploads fail with "aws-chunked encoding is not supported" error (Nick Craig-Wood)
## v1.70.1 - 2025-06-19
[See commits](https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1)
* Bug Fixes
* convmv: Fix spurious "error running command echo" on Windows (Nick Craig-Wood)
* doc fixes (albertony, Ed Craig-Wood, jinjingroad)
## v1.70.0 - 2025-06-17
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jun 27, 2025
Jun 17, 2025
NAME
@@ -491,10 +491,6 @@ Docker installation
The rclone developers maintain a docker image for rclone.
Note: We also now offer a paid version of rclone with enterprise-grade
security and zero CVEs through our partner SecureBuild. If you are
interested, check out their website and the Rclone SecureBuild Image.
These images are built as part of the release process based on a minimal
Alpine Linux.
@@ -4149,10 +4145,10 @@ Examples:
// Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250618
// Output: stories/The Quick Brown Fox!-20250617
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -4168,7 +4164,7 @@ By default --name-transform will only apply to file names. The means
only the leaf file name will be transformed. However some of the
transforms would be better applied to the whole path or just
directories. To choose which part of the file path is affected
some tags can be added to the --name-transform.
some tags can be added to the --name-transform
-----------------------------------------------------------------------
Tag Effect
@@ -4188,7 +4184,7 @@ This is used by adding the tag into the transform name like this:
--name-transform file,prefix=ABC or --name-transform dir,prefix=DEF.
For some conversions using all is more likely to be useful, for example
--name-transform all,nfc.
--name-transform all,nfc
Note that --name-transform may not add path separators / to the name.
This will cause an error.
@@ -4227,14 +4223,16 @@ be non-deterministic. * Running rclone check after a sync using such
transformations may erroneously report missing or differing files due to
overwritten results.
To minimize risks, users should: * Carefully review transformations that
may introduce conflicts. * Use --dry-run to inspect changes before
executing a sync (but keep in mind that it won't show the effect of
non-deterministic transformations). * Avoid transformations that cause
multiple distinct source files to map to the same destination name. *
Consider disabling concurrency with --transfers=1 if necessary. *
Certain transformations (e.g. prefix) will have a multiplying effect
every time they are used. Avoid these when using bisync.
- To minimize risks, users should:
- Carefully review transformations that may introduce conflicts.
- Use --dry-run to inspect changes before executing a sync (but
keep in mind that it won't show the effect of non-deterministic
transformations).
- Avoid transformations that cause multiple distinct source files
to map to the same destination name.
- Consider disabling concurrency with --transfers=1 if necessary.
- Certain transformations (e.g. prefix) will have a multiplying
effect every time they are used. Avoid these when using bisync.
rclone convmv dest:path --name-transform XXX [flags]
@@ -21963,7 +21961,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0")
Performance
@@ -22415,7 +22413,6 @@ Backend-only flags (these can be set in the config file also).
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK
@@ -32912,8 +32909,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -34752,8 +34747,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -35985,9 +35978,9 @@ DOI
The DOI remote is a read only remote for reading files from digital
object identifiers (DOI).
Currently, the DOI backend supports DOIs hosted with: - InvenioRDM -
Zenodo - CaltechDATA - Other InvenioRDM repositories - Dataverse -
Harvard Dataverse - Other Dataverse repositories
Currently, the DOI backend supports supports DOIs hosted with: -
InvenioRDM - Zenodo - CaltechDATA - Other InvenioRDM repositories -
Dataverse - Harvard Dataverse - Other Dataverse repositories
Paths are specified as remote:path
@@ -36445,8 +36438,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -37862,20 +37853,6 @@ Properties:
- Type: string
- Required: false
--ftp-http-proxy
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
Properties:
- Config: http_proxy
- Env Var: RCLONE_FTP_HTTP_PROXY
- Type: string
- Required: false
--ftp-no-check-upload
Don't check the upload is OK
@@ -38914,8 +38891,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -39770,8 +39745,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -41338,8 +41311,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -41646,23 +41617,6 @@ Deleting albums
The Google Photos API does not support deleting albums - see bug
#135714733.
Making your own client_id
When you use rclone with Google photos in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per second
that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in the google drive docs. You will need these
scopes instead of the drive ones detailed:
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
Hasher
Hasher is a special overlay backend to create remotes which handle
@@ -42533,8 +42487,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -44211,8 +44163,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -45114,8 +45064,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -47882,11 +47830,8 @@ Creating Client ID for OneDrive Personal
To create your own Client ID, please follow these steps:
1. Open
https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview
and then under the Add menu click App registration.
- If you have not created an Azure account, you will be prompted
to. This is free, but you need to provide a phone number,
address, and credit card for identity verification.
https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade
and then click New registration.
2. Enter a name for your app, choose account type
Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox),
select Web in Redirect URI, then type (do not copy and paste)
@@ -48169,8 +48114,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -51897,8 +51840,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -52703,8 +52644,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -53289,8 +53228,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -57742,8 +57679,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -58039,8 +57974,6 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -58801,40 +58734,6 @@ Options:
Changelog
v1.70.2 - 2025-06-27
See commits
- Bug Fixes
- convmv: Make --dry-run logs less noisy (nielash)
- sync: Avoid copying dir metadata to itself (nielash)
- build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix
GHSA-vrw8-fxc6-2r93 (dependabot[bot])
- convmv: Fix moving to unicode-equivalent name (nielash)
- log: Fix deadlock when using systemd logging (Nick Craig-Wood)
- pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
- doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
- Local
- Fix --skip-links on Windows when skipping Junction points (Nick
Craig-Wood)
- Combine
- Fix directory not found errors with ListP interface (Nick
Craig-Wood)
- Mega
- Fix tls handshake failure (necaran)
- Pikpak
- Fix uploads fail with "aws-chunked encoding is not supported"
error (Nick Craig-Wood)
v1.70.1 - 2025-06-19
See commits
- Bug Fixes
- convmv: Fix spurious "error running command echo" on Windows
(Nick Craig-Wood)
- doc fixes (albertony, Ed Craig-Wood, jinjingroad)
v1.70.0 - 2025-06-17
See commits


@@ -243,7 +243,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server --logLevel info -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

README.md

@@ -1,6 +1,6 @@
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
@@ -18,100 +18,105 @@
# Rclone
Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers.
Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* MEGA [:page_facing_up:](https://rclone.org/mega/)
* MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -119,50 +124,54 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers
* Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
* Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
* Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
* Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
* Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
* Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
* Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
* Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
over HTTP/WebDAV/FTP/SFTP/DLNA
## Installation & documentation
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more
## Downloads
* https://rclone.org/downloads/
- <https://rclone.org/downloads/>
License
-------
## License
This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).

View File

@@ -4,52 +4,55 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release
* [gh the github cli](https://github.com/cli/cli) for uploading packages
* pandoc for making the html and man pages
- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages
## Making a release
* git checkout master # see below for stable branch
* git pull # IMPORTANT
* git status - make sure everything is checked in
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push origin # without --follow-tags so it doesn't push the tag if it fails
* git push --follow-tags origin
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make vendorball
* make sign_upload
* make check_sign
* make upload
* make upload_website
* make upload_github
* make startdev # make startstable for stable branch
* # announce with forum post, twitter post, patreon post
- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
## Update dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
go 1.22.0
```text
go 1.22.0
```
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
@@ -57,7 +60,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
```sh
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
@@ -67,7 +70,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done
```
```sh
git co go.mod go.sum
```
@@ -77,12 +80,12 @@ Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -99,7 +102,9 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:
go run github.com/icholy/gomajor@latest list -major
```sh
go run github.com/icholy/gomajor@latest list -major
```
Expect API breakage when updating major versions.
@@ -107,7 +112,9 @@ Expect API breakage when updating major versions.
At some point after the release run
bin/tidy-beta v1.55
```sh
bin/tidy-beta v1.55
```
where the version number is that of a couple of releases ago, to remove old beta binaries.
@@ -117,54 +124,64 @@ If rclone needs a point release due to some horrendous bug:
Set vars
* BASE_TAG=v1.XX # e.g. v1.52
* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then
this will be done already.
* git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
* make startstable
- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable
Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* make startstable
* Do the steps as above
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
## Sponsor logos
If updating the website note that the sponsor logos have been moved out of the main repository.
If updating the website note that the sponsor logos have been moved out of the
main repository.
You will need to checkout `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release
git co -b update-website
```sh
git co -b update-website
```
If the branch already exists, double check there are no commits that need saving.
Now reset the branch to the last release
git reset --hard v1.64.0
```sh
git reset --hard v1.64.0
```
Create the changes, check them in, test with `make serve` then
make upload_test_website
```sh
make upload_test_website
```
Check out https://test.rclone.org and when happy
Check out <https://test.rclone.org> and when happy
make upload_website
```sh
make upload_website
```
Cherry pick any changes back to master and the stable branch if it is active.
@@ -172,14 +189,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally:
```
```sh
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```
To test the multiplatform build
```
```sh
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
@@ -187,6 +204,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```
```sh
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View File

@@ -1 +1 @@
v1.70.2
v1.71.0

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@@ -51,6 +51,7 @@ import (
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
@@ -72,6 +73,7 @@ const (
emulatorAccount = "devstoreaccount1"
emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
sasCopyValidity = time.Hour // how long SAS should last when doing server side copy
)
var (
@@ -559,6 +561,11 @@ type Fs struct {
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
publicAccess container.PublicAccessType // Container Public Access Level
// user delegation cache
userDelegationMu sync.Mutex
userDelegation *service.UserDelegationCredential
userDelegationExpiry time.Time
}
// Object describes an azure object
@@ -984,6 +991,38 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
f.cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
f.cred, err = azidentity.NewAzureCLICredential(&options)
@@ -1688,6 +1727,38 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteContainer(ctx, container)
}
// Get a user delegation which is valid for at least sasCopyValidity
//
// This value is cached in f
func (f *Fs) getUserDelegation(ctx context.Context) (*service.UserDelegationCredential, error) {
f.userDelegationMu.Lock()
defer f.userDelegationMu.Unlock()
if f.userDelegation != nil && time.Until(f.userDelegationExpiry) > sasCopyValidity {
return f.userDelegation, nil
}
// Validity window
start := time.Now().UTC()
expiry := start.Add(2 * sasCopyValidity)
startStr := start.Format(time.RFC3339)
expiryStr := expiry.Format(time.RFC3339)
// Acquire user delegation key from the service client
info := service.KeyInfo{
Start: &startStr,
Expiry: &expiryStr,
}
userDelegationKey, err := f.svc.GetUserDelegationCredential(ctx, info, nil)
if err != nil {
return nil, fmt.Errorf("failed to get user delegation key: %w", err)
}
f.userDelegation = userDelegationKey
f.userDelegationExpiry = expiry
return f.userDelegation, nil
}
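
The 2x window above is what makes this cache safe. A minimal sketch of the invariant, assuming the one hour `sasCopyValidity` defined earlier in this file:

```go
package main

import (
	"fmt"
	"time"
)

const sasCopyValidity = time.Hour // mirrors the backend constant

// canSign reports whether a cached delegation key expiring at keyExpiry
// can still sign a SAS lasting sasCopyValidity. Keys are minted for
// 2*sasCopyValidity and reused only while more than sasCopyValidity
// remains, so a signed SAS can never outlive the key that signed it.
func canSign(keyExpiry time.Time) bool {
	return time.Until(keyExpiry) > sasCopyValidity
}

func main() {
	fmt.Println(canSign(time.Now().Add(2 * sasCopyValidity)))  // true: reuse cached key
	fmt.Println(canSign(time.Now().Add(30 * time.Minute)))     // false: fetch a new key
}
```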
// getAuth gets auth to copy o.
//
// tokenOK is used to signal that token based auth (Microsoft Entra
@@ -1699,7 +1770,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// URL (not a SAS) and token will be empty.
//
// If tokenOK is true it may also return a token for the auth.
func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL string, token *string, err error) {
func (o *Object) getAuth(ctx context.Context, noAuth bool) (srcURL string, err error) {
f := o.fs
srcBlobSVC := o.getBlobSVC()
srcURL = srcBlobSVC.URL()
@@ -1708,29 +1779,47 @@ func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL
case noAuth:
// If same storage account then no auth needed
case f.cred != nil:
if !tokenOK {
return srcURL, token, errors.New("not supported: Microsoft Entra ID")
}
options := policy.TokenRequestOptions{}
accessToken, err := f.cred.GetToken(ctx, options)
// Generate a User Delegation SAS URL using Azure AD credentials
userDelegationKey, err := f.getUserDelegation(ctx)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create access token: %w", err)
return "", fmt.Errorf("sas creation: %w", err)
}
token = &accessToken.Token
// Build the SAS values
perms := sas.BlobPermissions{Read: true}
container, containerPath := o.split()
start := time.Now().UTC()
expiry := start.Add(sasCopyValidity)
vals := sas.BlobSignatureValues{
StartTime: start,
ExpiryTime: expiry,
Permissions: perms.String(),
ContainerName: container,
BlobName: containerPath,
}
// Sign with the delegation key
queryParameters, err := vals.SignWithUserDelegation(userDelegationKey)
if err != nil {
return "", fmt.Errorf("signing SAS with user delegation failed: %w", err)
}
// Append the SAS to the URL
srcURL = srcBlobSVC.URL() + "?" + queryParameters.Encode()
case f.sharedKeyCred != nil:
// Generate a short lived SAS URL if using shared key credentials
expiry := time.Now().Add(time.Hour)
expiry := time.Now().Add(sasCopyValidity)
sasOptions := blob.GetSASURLOptions{}
srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create SAS URL: %w", err)
return srcURL, fmt.Errorf("failed to create SAS URL: %w", err)
}
case f.anonymous || f.opt.SASURL != "":
// If using a SASURL or anonymous, no need for any extra auth
default:
return srcURL, token, errors.New("unknown authentication type")
return srcURL, errors.New("unknown authentication type")
}
return srcURL, token, nil
return srcURL, nil
}
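
For reference, the user-delegation branch above condenses to the following standalone flow. This is a sketch, not the backend's API: the function name and parameters are illustrative, using only azblob SDK calls that appear in the change itself.

```go
package example

import (
	"context"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
)

// sasReadURL gets a delegation key from the service client, signs
// read-only SAS values for one blob with it, and appends the query
// string to the blob URL.
func sasReadURL(ctx context.Context, svc *service.Client, blobURL, container, blobPath string) (string, error) {
	start := time.Now().UTC()
	expiry := start.Add(time.Hour) // sasCopyValidity in the backend
	startStr, expiryStr := start.Format(time.RFC3339), expiry.Format(time.RFC3339)
	key, err := svc.GetUserDelegationCredential(ctx, service.KeyInfo{Start: &startStr, Expiry: &expiryStr}, nil)
	if err != nil {
		return "", err
	}
	perms := sas.BlobPermissions{Read: true}
	vals := sas.BlobSignatureValues{
		StartTime:     start,
		ExpiryTime:    expiry,
		Permissions:   perms.String(),
		ContainerName: container,
		BlobName:      blobPath,
	}
	q, err := vals.SignWithUserDelegation(key)
	if err != nil {
		return "", err
	}
	return blobURL + "?" + q.Encode(), nil
}
```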
// Do multipart parallel copy.
@@ -1751,7 +1840,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
o.fs = f
o.remote = remote
srcURL, token, err := src.getAuth(ctx, true, false)
srcURL, err := src.getAuth(ctx, false)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
@@ -1795,7 +1884,8 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
Count: partSize,
},
// Specifies the authorization scheme and signature for the copy source.
CopySourceAuthorization: token,
// We use SAS URLs as token auth doesn't always seem to work
// CopySourceAuthorization: token,
// CPKInfo *blob.CPKInfo
// CPKScopeInfo *blob.CPKScopeInfo
}
@@ -1865,7 +1955,7 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
dstBlobSVC := f.getBlobSVC(dstContainer, dstPath)
// Get the source auth - none needed for same storage account
srcURL, _, err := src.getAuth(ctx, false, f == src.fs)
srcURL, err := src.getAuth(ctx, f == src.fs)
if err != nil {
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}
@@ -2581,6 +2671,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return -1, err
}
// Only account after the checksum reads have been done
if do, ok := reader.(pool.DelayAccountinger); ok {
// To figure out this number, do a transfer and if the accounted size is 0 or a
// multiple of what it should be, increase or decrease this number.
do.DelayAccounting(2)
}
// Upload the block, with MD5 for check
m := md5.New()
currentChunkSize, err := io.Copy(m, reader)
@@ -2668,6 +2765,8 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
blockList blockblob.GetBlockListResponse
properties *blob.GetPropertiesResponse
options *blockblob.CommitBlockListOptions
// Use temporary pacer as this can be called recursively which can cause a deadlock with --max-connections
pacer = fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
)
properties, err = o.readMetaDataAlways(ctx)
@@ -2679,7 +2778,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
if objectExists {
// Get the committed block list
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
blockList, err = blockBlobSVC.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
return o.fs.shouldRetry(ctx, err)
})
@@ -2721,7 +2820,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
// Commit only the committed blocks
fs.Debugf(o, "Committing %d blocks to remove uncommitted blocks", len(blockIDs))
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
_, err := blockBlobSVC.CommitBlockList(ctx, blockIDs, options)
return o.fs.shouldRetry(ctx, err)
})

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package azureblob

View File

@@ -1,7 +1,7 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris || js
//go:build plan9 || solaris || js || wasm
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles
@@ -453,7 +453,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
options := azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
@@ -550,7 +550,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
case opt.UseMSI:
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)
var b2i = map[bool]int{false: 0, true: 1}
b2i := map[bool]int{false: 0, true: 1}
set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
if set > 1 {
return nil, errors.New("more than one user-assigned identity ID is set")
@@ -569,6 +569,37 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
default:
return nil, errors.New("no authentication method configured")
}
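
The workload identity case added above exchanges a managed identity token for a client assertion. A sketch of that exchange in isolation; the helper name and parameters are illustrative, not the backend's own API:

```go
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// workloadIdentityCred mirrors the case above: a managed identity token
// scoped to api://AzureADTokenExchange is presented as the client
// assertion for the target app registration.
func workloadIdentityCred(tenantID, clientID, msiClientID string) (*azidentity.ClientAssertionCredential, error) {
	msiCred, err := azidentity.NewManagedIdentityCredential(&azidentity.ManagedIdentityCredentialOptions{
		ID: azidentity.ClientID(msiClientID),
	})
	if err != nil {
		return nil, err
	}
	getAssertion := func(ctx context.Context) (string, error) {
		tok, err := msiCred.GetToken(ctx, policy.TokenRequestOptions{
			Scopes: []string{"api://AzureADTokenExchange"},
		})
		if err != nil {
			return "", err
		}
		return tok.Token, nil
	}
	return azidentity.NewClientAssertionCredential(tenantID, clientID, getAssertion, nil)
}
```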
@@ -823,7 +854,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
return entries, err
}
var opt = &directory.ListFilesAndDirectoriesOptions{
opt := &directory.ListFilesAndDirectoriesOptions{
Include: directory.ListFilesInclude{
Timestamps: true,
},
@@ -982,6 +1013,10 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
SMBProperties: &file.SMBProperties{
LastWriteTime: &t,
},
HTTPHeaders: &file.HTTPHeaders{
ContentMD5: o.md5,
ContentType: &o.contentType,
},
}
_, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
if err != nil {

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package azurefiles

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package azurefiles

View File

@@ -1,7 +1,7 @@
// Build for azurefiles for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
//go:build plan9 || js || wasm
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles

View File

@@ -1673,6 +1673,21 @@ func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
return o.getMetaDataListing(ctx)
}
}
// If using versionAt we need to use a listing to find the correct version.
if o.fs.opt.VersionAt.IsSet() {
info, err := o.getMetaDataListing(ctx)
if err != nil {
return nil, err
}
if info.Action == "hide" {
// Return object not found error if the current version is deleted.
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
_, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err
}

View File

@@ -446,14 +446,14 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
t.Run("NewObject", func(t *testing.T) {
gotObj, gotErr := f.NewObject(ctx, fileName)
assert.Equal(t, test.wantErr, gotErr)
if gotErr == nil {
assert.Equal(t, test.wantSize, gotObj.Size())
}
})
})
}
})

View File

@@ -271,9 +271,9 @@ type User struct {
ModifiedAt time.Time `json:"modified_at"`
Language string `json:"language"`
Timezone string `json:"timezone"`
SpaceAmount int64 `json:"space_amount"`
SpaceUsed int64 `json:"space_used"`
MaxUploadSize int64 `json:"max_upload_size"`
SpaceAmount float64 `json:"space_amount"`
SpaceUsed float64 `json:"space_used"`
MaxUploadSize float64 `json:"max_upload_size"`
Status string `json:"status"`
JobTitle string `json:"job_title"`
Phone string `json:"phone"`

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
// Package cache implements a virtual provider to cache existing remotes.
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test

View File

@@ -1,6 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test

View File

@@ -1,7 +1,7 @@
// Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
//go:build plan9 || js || wasm
// Package cache implements a virtual provider to cache existing remotes.
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
//go:build !plan9 && !js && !wasm
package cache

View File

@@ -1446,9 +1446,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
}
usage = &fs.Usage{
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(used)), // bytes in use
Free: fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(total), // quota of bytes that can be used
Used: fs.NewUsageValue(used), // bytes in use
Free: fs.NewUsageValue(total - used), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

View File

@@ -163,6 +163,16 @@ Enabled by default. Use 0 to disable.`,
Help: "Disable TLS 1.3 (workaround for FTP servers with buggy TLS)",
Default: false,
Advanced: true,
}, {
Name: "allow_insecure_tls_ciphers",
Help: `Allow insecure TLS ciphers
Setting this flag will allow the usage of the following TLS ciphers in addition to the secure defaults:
- TLS_RSA_WITH_AES_128_GCM_SHA256
`,
Default: false,
Advanced: true,
}, {
Name: "shut_timeout",
Help: "Maximum time to wait for data connection closing status.",
@@ -236,29 +246,30 @@ a write only folder.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
HTTPProxy string `config:"http_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
AllowInsecureTLSCiphers bool `config:"allow_insecure_tls_ciphers"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
HTTPProxy string `config:"http_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@@ -407,6 +418,14 @@ func (f *Fs) tlsConfig() *tls.Config {
if f.opt.DisableTLS13 {
tlsConfig.MaxVersion = tls.VersionTLS12
}
if f.opt.AllowInsecureTLSCiphers {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
tlsConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
}
}
return tlsConfig
}
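
The cipher handling above can be tried standalone; the option itself would be set in the remote's config or via the generated `--ftp-allow-insecure-tls-ciphers` flag (flag name assumed from rclone's usual backend-flag naming). A minimal sketch of the suite list construction:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Mirror of the workaround above: start from Go's secure default
	// suites and append the single legacy suite some servers require.
	ids := make([]uint16, 0, len(tls.CipherSuites())+1)
	for _, cs := range tls.CipherSuites() {
		ids = append(ids, cs.ID)
	}
	cfg := &tls.Config{CipherSuites: append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)}
	fmt.Printf("%d suites enabled, last: %#04x\n", len(cfg.CipherSuites), cfg.CipherSuites[len(cfg.CipherSuites)-1])
}
```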

View File

@@ -52,7 +52,7 @@ func (f *Fs) testUploadTimeout(t *testing.T) {
ci.Timeout = saveTimeout
}()
ci.LowLevelRetries = 1
ci.Timeout = idleTimeout
ci.Timeout = fs.Duration(idleTimeout)
upload := func(concurrency int, shutTimeout time.Duration) (obj fs.Object, err error) {
fixFs := deriveFs(ctx, t, f, settings{

View File

@@ -117,16 +117,22 @@ func init() {
} else {
oauthConfig.Scopes = scopesReadWrite
}
return oauthutil.ConfigOut("warning", &oauthutil.Options{
return oauthutil.ConfigOut("warning1", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
case "warning":
case "warning1":
// Warn the user as required by google photos integration
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
return fs.ConfigConfirm("warning2", true, "config_warning", `Warning
IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
case "warning2":
// Warn the user that rclone can no longer download photos it didn't upload from google photos
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
IMPORTANT: Due to Google policy changes rclone can now only download photos it uploaded.`)
case "warning_done":
return nil, nil
}

View File

@@ -371,9 +371,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(int64(info.Capacity)),
Used: fs.NewUsageValue(int64(info.Used)),
Free: fs.NewUsageValue(int64(info.Remaining)),
Total: fs.NewUsageValue(info.Capacity),
Used: fs.NewUsageValue(info.Used),
Free: fs.NewUsageValue(info.Remaining),
}, nil
}

View File

@@ -421,6 +421,9 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
return uploadFile(ctx, f, in, src.Remote(), options...)
}
@@ -659,6 +662,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
if src.Size() == 0 {
return fs.ErrorCantUploadEmptyFiles
}
srcRemote := o.Remote()
@@ -670,7 +676,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var resp *client.UploadResult
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
var res *http.Response
res, resp, err = o.fs.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
@@ -725,7 +731,7 @@ func uploadFile(ctx context.Context, f *Fs, in io.Reader, srcRemote string, opti
UseUniqueFileName := new(bool)
*UseUniqueFileName = false
err := f.pacer.Call(func() (bool, error) {
err := f.pacer.CallNoRetry(func() (bool, error) {
var res *http.Response
var err error
res, _, err = f.ik.Upload(ctx, in, client.UploadParam{
@@ -794,35 +800,10 @@ func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error)
return metadata, nil
}
// Copy src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
file, err := srcObj.Open(ctx)
if err != nil {
return nil, err
}
return uploadFile(ctx, f, file, remote)
}
// Check the interfaces are satisfied.
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.Object = &Object{}
_ fs.Copier = &Fs{}
)

View File

@@ -1339,7 +1339,7 @@ func quotePath(s string) string {
seg := strings.Split(s, "/")
newValues := []string{}
for _, v := range seg {
newValues = append(newValues, url.PathEscape(v))
newValues = append(newValues, url.QueryEscape(v))
}
return strings.Join(newValues, "/")
}
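
As a standalone illustration of the escaping difference this change relies on (standard library only):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	fmt.Println(url.PathEscape("a&b"))  // "a&b": "&" is a legal path character, so it survives
	fmt.Println(url.QueryEscape("a&b")) // "a%26b": QueryEscape encodes it, as the copy-source header needs
}
```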

View File

@@ -461,7 +461,7 @@ func translateErrorsDir(err error) error {
return err
}
// translatesErrorsObject translates Koofr errors to rclone errors (for an object operation)
// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:

View File

@@ -617,16 +617,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
case 1:
// upload file using link from first step
var res *http.Response
var location string
// Check to see if we are being redirected
opts := &rest.Opts{
Method: "HEAD",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
NoRedirect: true,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if res != nil {
location = res.Header.Get("Location")
if location != "" {
// set the URL to the new Location
opts.RootURL = location
err = nil
}
}
if err != nil {
return fmt.Errorf("head upload URL: %w", err)
}
file := io.MultiReader(bytes.NewReader(first10mBytes), in)
opts := &rest.Opts{
Method: "PUT",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
Body: file,
ContentLength: &size,
}
opts.Method = "PUT"
opts.Body = file
opts.ContentLength = &size
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)

View File

@@ -1,4 +1,4 @@
//go:build windows || plan9 || js || linux
//go:build windows || plan9 || js || wasm || linux
package local

View File

@@ -1,4 +1,4 @@
//go:build !windows && !plan9 && !js && !linux
//go:build !windows && !plan9 && !js && !wasm && !linux
package local

View File

@@ -1,4 +1,4 @@
//go:build plan9 || js
//go:build plan9 || js || wasm
package local

View File

@@ -1,4 +1,4 @@
//go:build !windows && !plan9 && !js
//go:build !windows && !plan9 && !js && !wasm
package local

View File

@@ -305,6 +305,12 @@ only useful for reading.
Help: "The last status change time.",
}},
},
{
Name: "hashes",
Help: `Comma separated list of supported checksum types.`,
Default: fs.CommaSepList{},
Advanced: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -331,6 +337,7 @@ type Options struct {
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Hashes fs.CommaSepList `config:"hashes"`
Enc encoder.MultiEncoder `config:"encoding"`
NoClone bool `config:"no_clone"`
}
@@ -664,8 +671,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
name := fi.Name()
mode := fi.Mode()
newRemote := f.cleanRemote(dir, name)
symlinkFlag := os.ModeSymlink
if runtime.GOOS == "windows" {
symlinkFlag |= os.ModeIrregular
}
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
if f.opt.FollowSymlinks && (mode&symlinkFlag) != 0 {
localPath := filepath.Join(fsDirPath, name)
fi, err = os.Stat(localPath)
// Quietly skip errors on excluded files and directories
@@ -687,13 +698,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
if (mode&symlinkFlag) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := f.newDirectory(newRemote, fi)
entries = append(entries, d)
}
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
if f.opt.TranslateSymlinks && fi.Mode()&symlinkFlag != 0 {
newRemote += fs.LinkSuffix
}
// Don't include non directory if not included
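
The mode check added above is worth isolating: on Windows, NTFS junction points are reported as `os.ModeIrregular` rather than `os.ModeSymlink`. A sketch with a hypothetical helper:

```go
package example

import (
	"os"
	"runtime"
)

// isLinkLike mirrors the check above: junction points surface as
// os.ModeIrregular on Windows, so both bits must be tested to treat
// symlinks and junctions alike.
func isLinkLike(mode os.FileMode) bool {
	flag := os.ModeSymlink
	if runtime.GOOS == "windows" {
		flag |= os.ModeIrregular
	}
	return mode&flag != 0
}
```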
@@ -1021,6 +1032,19 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
if len(f.opt.Hashes) > 0 {
// Return only configured hashes.
// Note: Could have used hash.SupportOnly to limit supported hashes for all hash related features.
var supported hash.Set
for _, hashName := range f.opt.Hashes {
var ht hash.Type
if err := ht.Set(hashName); err != nil {
fs.Infof(nil, "Invalid token %q in hash string %q", hashName, f.opt.Hashes.String())
}
supported.Add(ht)
}
return supported
}
return hash.Supported()
}
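
A sketch of the same parse in isolation, using rclone's `lib/hash` package; the name list is illustrative, and unlike the code above this variant skips names that fail to parse. Usage would then look like `rclone md5sum --local-hashes md5 /path` (flag name assumed from rclone's backend-flag convention).

```go
// Parse a configured hash list into a hash.Set, as Hashes() does above.
var supported hash.Set
for _, name := range []string{"md5", "sha1"} {
	var ht hash.Type
	if err := ht.Set(name); err != nil {
		fs.Infof(nil, "ignoring unknown hash %q", name)
		continue
	}
	supported.Add(ht)
}
// supported now contains exactly MD5 and SHA-1
```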

View File

@@ -1,4 +1,4 @@
//go:build dragonfly || plan9 || js
//go:build dragonfly || plan9 || js || wasm
package local

View File

@@ -1,4 +1,4 @@
//go:build !windows && !plan9 && !js
//go:build !windows && !plan9 && !js && !wasm
package local

View File

@@ -1,4 +1,4 @@
//go:build windows || plan9 || js
//go:build windows || plan9 || js || wasm
package local

View File

@@ -634,7 +634,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
return
}
// itemToEntry converts API item to rclone directory entry
// itemToDirEntry converts API item to rclone directory entry
// The dirSize return value is:
//
// <0 - for a file or in case of error

View File

@@ -946,9 +946,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, fmt.Errorf("failed to get Mega Quota: %w", err)
}
usage := &fs.Usage{
Total: fs.NewUsageValue(int64(q.Mstrg)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(q.Cstrg)), // bytes in use
Free: fs.NewUsageValue(int64(q.Mstrg - q.Cstrg)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(q.Mstrg), // quota of bytes that can be used
Used: fs.NewUsageValue(q.Cstrg), // bytes in use
Free: fs.NewUsageValue(q.Mstrg - q.Cstrg), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
// Package oracleobjectstorage provides an interface to the OCI object storage system.
package oracleobjectstorage
@@ -12,6 +12,7 @@ import (
"strings"
"time"
"github.com/ncw/swift/v2"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/rclone/rclone/fs"
@@ -33,9 +34,46 @@ func init() {
NewFs: NewFs,
CommandHelp: commandHelp,
Options: newOptions(),
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as opc-meta- keys.`,
},
})
}
var systemMetadataInfo = map[string]fs.MetadataHelp{
"opc-meta-mode": {
Help: "File type and mode",
Type: "octal, unix style",
Example: "0100664",
},
"opc-meta-uid": {
Help: "User ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-gid": {
Help: "Group ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-atime": {
Help: "Time of last access",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-mtime": {
Help: "Time of last modification",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-btime": {
Help: "Time of file birth (creation)",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
}
// Fs represents a remote object storage server
type Fs struct {
name string // name of this remote
@@ -82,6 +120,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMetadata: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
@@ -688,6 +727,38 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return list.Flush()
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
return nil, err
}
metadata = make(fs.Metadata, len(o.meta)+7)
for k, v := range o.meta {
switch k {
case metaMtime:
if modTime, err := swift.FloatStringToTime(v); err == nil {
metadata["mtime"] = modTime.Format(time.RFC3339Nano)
}
case metaMD5Hash:
// don't write hash metadata
default:
metadata[k] = v
}
}
if o.mimeType != "" {
metadata["content-type"] = o.mimeType
}
if !o.lastModified.IsZero() {
metadata["btime"] = o.lastModified.Format(time.RFC3339Nano)
}
return metadata, nil
}
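// A sketch of a typical result (illustrative values only, not taken from
// the source; user metadata keys are returned as stored in o.meta):
//
//	fs.Metadata{
//		"mtime":        "2025-06-30T22:27:43.123456789-04:00",
//		"btime":        "2025-06-30T22:27:43.000000000-04:00",
//		"content-type": "text/plain",
//		"uid":          "500",
//	}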
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -1,7 +1,7 @@
// Build for oracleobjectstorage for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris || js
//go:build plan9 || solaris || js || wasm
// Package oracleobjectstorage provides an interface to the OCI object storage system.
package oracleobjectstorage

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package oracleobjectstorage

View File

@@ -155,6 +155,7 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if err == nil && !info.Links.ApplicationOctetStream.Valid() {
time.Sleep(5 * time.Second)
return true, errors.New("no link")
}
return f.shouldRetry(ctx, resp, err)

backend/pikpak/multipart.go (new file, 333 lines)
View File

@@ -0,0 +1,333 @@
package pikpak
import (
"context"
"fmt"
"io"
"sort"
"strings"
"sync"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
const (
bufferSize = 1024 * 1024 // default size of the pages used in the reader
bufferCacheSize = 64 // max number of buffers to keep in cache
bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
)
// bufferPool is a global pool of buffers
var (
bufferPool *pool.Pool
bufferPoolOnce sync.Once
)
// get a buffer pool
func getPool() *pool.Pool {
bufferPoolOnce.Do(func() {
ci := fs.GetConfig(context.Background())
// Initialise the buffer pool when used
bufferPool = pool.New(bufferCacheFlushTime, bufferSize, bufferCacheSize, ci.UseMmap)
})
return bufferPool
}
// NewRW gets a pool.RW using the multipart pool
func NewRW() *pool.RW {
return pool.NewRW(getPool())
}
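// Note: because the pool is shared and lazily initialised, buffer memory is
// bounded across transfers rather than growing per upload; each chunk below
// borrows a pool.RW and returns it via Close (assumed intent of the design).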
// Upload does a multipart upload in parallel
func (w *pikpakChunkWriter) Upload(ctx context.Context) (err error) {
// make concurrency machinery
tokens := pacer.NewTokenDispenser(w.con)
uploadCtx, cancel := context.WithCancel(ctx)
defer cancel()
defer atexit.OnError(&err, func() {
cancel()
fs.Debugf(w.o, "multipart upload: Cancelling...")
errCancel := w.Abort(ctx)
if errCancel != nil {
fs.Debugf(w.o, "multipart upload: failed to cancel: %v", errCancel)
}
})()
var (
g, gCtx = errgroup.WithContext(uploadCtx)
finished = false
off int64
size = w.size
chunkSize = w.chunkSize
)
// Do the accounting manually
in, acc := accounting.UnWrapAccounting(w.in)
for partNum := int64(0); !finished; partNum++ {
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
rw := NewRW()
if acc != nil {
rw.SetAccounting(acc.AccountRead)
}
free := func() {
// return the memory and token
_ = rw.Close() // Can't return an error
tokens.Put()
}
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
free()
break
}
// Read the chunk
var n int64
n, err = io.CopyN(rw, in, chunkSize)
if err == io.EOF {
if n == 0 && partNum != 0 { // end if no data and if not first chunk
free()
break
}
finished = true
} else if err != nil {
free()
return fmt.Errorf("multipart upload: failed to read source: %w", err)
}
partNum := partNum
partOff := off
off += n
g.Go(func() (err error) {
defer free()
fs.Debugf(w.o, "multipart upload: starting chunk %d size %v offset %v/%v", partNum, fs.SizeSuffix(n), fs.SizeSuffix(partOff), fs.SizeSuffix(size))
_, err = w.WriteChunk(gCtx, int32(partNum), rw)
return err
})
}
err = g.Wait()
if err != nil {
return err
}
err = w.Close(ctx)
if err != nil {
return fmt.Errorf("multipart upload: failed to finalise: %w", err)
}
return nil
}
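// The token dispenser bounds in-flight work: the reader loop blocks on
// tokens.Get() until one of the upload goroutines calls free(), so at most
// w.con chunks are buffered and uploading at any moment.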
var warnStreamUpload sync.Once
// state of ChunkWriter
type pikpakChunkWriter struct {
chunkSize int64
size int64
con int
f *Fs
o *Object
in io.Reader
mu sync.Mutex
completedParts []types.CompletedPart
client *s3.Client
mOut *s3.CreateMultipartUploadOutput
}
func (f *Fs) newChunkWriter(ctx context.Context, remote string, size int64, p *api.ResumableParams, in io.Reader, options ...fs.OpenOption) (w *pikpakChunkWriter, err error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
// calculate size of parts
chunkSize := f.opt.ChunkSize
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5 MiB). With a maximum number of parts (10,000) this will be a file of
// 48 GiB which seems like a not too unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
f.opt.ChunkSize, fs.SizeSuffix(int64(chunkSize)*int64(maxUploadParts)))
})
} else {
chunkSize = chunksize.Calculator(o, size, maxUploadParts, chunkSize)
}
client, err := f.newS3Client(ctx, p)
if err != nil {
return nil, fmt.Errorf("failed to create upload client: %w", err)
}
w = &pikpakChunkWriter{
chunkSize: int64(chunkSize),
size: size,
con: max(1, f.opt.UploadConcurrency),
f: f,
o: o,
in: in,
completedParts: make([]types.CompletedPart, 0),
client: client,
}
req := &s3.CreateMultipartUploadInput{
Bucket: &p.Bucket,
Key: &p.Key,
}
// Apply upload options
for _, option := range options {
key, value := option.Header()
lowerKey := strings.ToLower(key)
switch lowerKey {
case "":
// ignore
case "cache-control":
req.CacheControl = aws.String(value)
case "content-disposition":
req.ContentDisposition = aws.String(value)
case "content-encoding":
req.ContentEncoding = aws.String(value)
case "content-type":
req.ContentType = aws.String(value)
}
}
err = w.f.pacer.Call(func() (bool, error) {
w.mOut, err = w.client.CreateMultipartUpload(ctx, req)
return w.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("create multipart upload failed: %w", err)
}
fs.Debugf(w.o, "multipart upload: %q initiated", *w.mOut.UploadId)
return
}
// shouldRetry returns a boolean as to whether this err
// deserve to be retried. It returns the err as a convenience
func (w *pikpakChunkWriter) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if fserrors.ShouldRetry(err) {
return true, err
}
return false, err
}
// add a part number and etag to the completed parts
func (w *pikpakChunkWriter) addCompletedPart(part types.CompletedPart) {
w.mu.Lock()
defer w.mu.Unlock()
w.completedParts = append(w.completedParts, part)
}
// WriteChunk will write chunk number with reader bytes, where chunk number >= 0
func (w *pikpakChunkWriter) WriteChunk(ctx context.Context, chunkNumber int32, reader io.ReadSeeker) (currentChunkSize int64, err error) {
if chunkNumber < 0 {
err := fmt.Errorf("invalid chunk number provided: %v", chunkNumber)
return -1, err
}
partNumber := chunkNumber + 1
var res *s3.UploadPartOutput
err = w.f.pacer.Call(func() (bool, error) {
// Discover the size by seeking to the end
currentChunkSize, err = reader.Seek(0, io.SeekEnd)
if err != nil {
return false, err
}
// rewind the reader on retry and after reading md5
_, err := reader.Seek(0, io.SeekStart)
if err != nil {
return false, err
}
res, err = w.client.UploadPart(ctx, &s3.UploadPartInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
PartNumber: &partNumber,
Body: reader,
})
if err != nil {
if chunkNumber <= 8 {
return w.shouldRetry(ctx, err)
}
// retry all chunks once have done the first few
return true, err
}
return false, nil
})
if err != nil {
return -1, fmt.Errorf("failed to upload chunk %d with %v bytes: %w", partNumber, currentChunkSize, err)
}
w.addCompletedPart(types.CompletedPart{
PartNumber: &partNumber,
ETag: res.ETag,
})
fs.Debugf(w.o, "multipart upload: wrote chunk %d with %v bytes", partNumber, currentChunkSize)
return currentChunkSize, err
}
// Abort the multipart upload
func (w *pikpakChunkWriter) Abort(ctx context.Context) (err error) {
// Abort the upload session
err = w.f.pacer.Call(func() (bool, error) {
_, err = w.client.AbortMultipartUpload(ctx, &s3.AbortMultipartUploadInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
})
return w.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to abort multipart upload %q: %w", *w.mOut.UploadId, err)
}
fs.Debugf(w.o, "multipart upload: %q aborted", *w.mOut.UploadId)
return
}
// Close and finalise the multipart upload
func (w *pikpakChunkWriter) Close(ctx context.Context) (err error) {
// sort the completed parts by part number
sort.Slice(w.completedParts, func(i, j int) bool {
return *w.completedParts[i].PartNumber < *w.completedParts[j].PartNumber
})
// Finalise the upload session
err = w.f.pacer.Call(func() (bool, error) {
_, err = w.client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
MultipartUpload: &types.CompletedMultipartUpload{
Parts: w.completedParts,
},
})
return w.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
fs.Debugf(w.o, "multipart upload: %q finished", *w.mOut.UploadId)
return
}
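A minimal sketch of how the chunk writer above is driven (the names f, params and src are placeholders for illustration; the real call site is uploadByResumable, later in this diff):

	// Sketch only: upload a source of known size in parallel chunks.
	w, err := f.newChunkWriter(ctx, "dir/file.bin", srcSize, params, src)
	if err != nil {
		return fmt.Errorf("multipart upload failed to initialise: %w", err)
	}
	return w.Upload(ctx) // reads, uploads and completes (aborts on error)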

View File

@@ -41,12 +41,10 @@ import (
"github.com/aws/aws-sdk-go-v2/aws"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
@@ -66,17 +64,22 @@ import (
// Constants
const (
clientID = "YUMx5nI8ZU8Ap8pm"
clientVersion = "2.0.0"
packageName = "mypikpak.com"
defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
defaultUploadConcurrency = manager.DefaultUploadConcurrency
clientID = "YUMx5nI8ZU8Ap8pm"
clientVersion = "2.0.0"
packageName = "mypikpak.com"
defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
maxUploadParts = 10000 // Part number must be an integer between 1 and 10000, inclusive.
defaultChunkSize = fs.SizeSuffix(1024 * 1024 * 5) // Part size should be in [100KB, 5GB]
minChunkSize = 100 * fs.Kibi
maxChunkSize = 5 * fs.Gibi
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = 5 * fs.Gibi // maximum allowed size for singlepart uploads
)
// Globals
@@ -223,6 +226,14 @@ Fill in for rclone to use a non root folder as its starting point.
Help: "Files bigger than this will be cached on disk to calculate hash if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size for multipart uploads.
@@ -241,7 +252,7 @@ large file of known size to stay below the 10,000 chunks limit.
Increasing the chunk size decreases the accuracy of the progress
statistics displayed with "-P" flag.`,
Default: minChunkSize,
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "upload_concurrency",
@@ -257,7 +268,7 @@ in memory.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`,
Default: defaultUploadConcurrency,
Default: 4,
Advanced: true,
}, {
Name: config.ConfigEncoding,
@@ -294,6 +305,7 @@ type Options struct {
NoMediaLink bool `config:"no_media_link"`
HashMemoryThreshold fs.SizeSuffix `config:"hash_memory_limit"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
UploadConcurrency int `config:"upload_concurrency"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -467,6 +479,11 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
// when a zero-byte file was uploaded with an invalid captcha token
f.rst.captcha.Invalidate()
return true, err
} else if strings.Contains(apiErr.Reason, "idx.shub.mypikpak.com") && apiErr.Code == 500 {
// internal server error: Post "http://idx.shub.mypikpak.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
// This typically happens when trying to retrieve a gcid for which no record exists.
// No retry is needed in this case.
return false, err
}
}
@@ -524,6 +541,39 @@ func (f *Fs) newClientWithPacer(ctx context.Context) (err error) {
return nil
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
if cs < minChunkSize {
return fmt.Errorf("%s is less than %s", cs, minChunkSize)
}
if cs > maxChunkSize {
return fmt.Errorf("%s is greater than %s", cs, maxChunkSize)
}
return nil
}
func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.ChunkSize = f.opt.ChunkSize, cs
}
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxUploadCutoff {
return fmt.Errorf("%s is greater than %s", cs, maxUploadCutoff)
}
return nil
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadCutoff(cs)
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
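// Illustrative behaviour (not from the source): checkUploadChunkSize(50*fs.Kibi)
// returns an error ("50Ki is less than 100Ki") while 16*fs.Mebi passes, and
// checkUploadCutoff only rejects values above 5 GiB; the SetUpload* helpers
// in the test file later in this diff use the set* variants to swap values in.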
// newFs partially constructs Fs from the path
//
// It constructs a valid Fs but doesn't attempt to figure out whether
@@ -531,11 +581,17 @@ func (f *Fs) newClientWithPacer(ctx context.Context) (err error) {
func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, error) {
// Parse config into Options struct
opt := new(Options)
if err := configstruct.Set(m, opt); err != nil {
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
if opt.ChunkSize < minChunkSize {
return nil, fmt.Errorf("chunk size must be at least %s", minChunkSize)
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, fmt.Errorf("pikpak: chunk size: %w", err)
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, fmt.Errorf("pikpak: upload cutoff: %w", err)
}
root := parsePath(path)
@@ -923,6 +979,24 @@ func (f *Fs) deleteObjects(ctx context.Context, IDs []string, useTrash bool) (er
return nil
}
// untrash a file or directory by ID
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the restored item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) untrashObjects(ctx context.Context, IDs []string) (err error) {
if len(IDs) == 0 {
return nil
}
req := api.RequestBatch{
IDs: IDs,
}
if err := f.requestBatchAction(ctx, "batchUntrash", &req); err != nil {
return fmt.Errorf("untrash object failed: %w", err)
}
return nil
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
@@ -1007,7 +1081,14 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return f.waitTask(ctx, info.TaskID)
}
// Move the object
// Move the object to a new parent folder
//
// Objects cannot be moved to their current folder.
// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the moved item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 {
return nil
@@ -1023,6 +1104,12 @@ func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err e
}
// renames the object
//
// The new name must be different from the current name.
// "file_rename_to_same_name" (3): Name of file or folder is not changed
//
// Within the same folder, object names must be unique.
// "file_duplicated_name" (3): File name cannot be repeated
func (f *Fs) renameObject(ctx context.Context, ID, newName string) (info *api.File, err error) {
req := api.File{
Name: f.opt.Enc.FromStandardName(newName),
@@ -1107,18 +1194,13 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
err := srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
srcLeaf, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
err = srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
@@ -1129,31 +1211,74 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
if srcParentID != dstParentID {
// Do the move
if srcObj.parent != dstParentID {
// Perform the move. A numbered copy might be generated upon name collision.
if err = f.moveObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, err
return nil, fmt.Errorf("move: failed to move object %s to new parent %s: %w", srcObj.id, dstParentID, err)
}
defer func() {
if err != nil {
// FIXME: Restored file might have a numbered name if a conflict occurs
if mvErr := f.moveObjects(ctx, []string{srcObj.id}, srcObj.parent); mvErr != nil {
fs.Logf(f, "move: couldn't restore original object %q to %q after move failure: %v", dstObj.id, src.Remote(), mvErr)
}
}
}()
}
// Manually update info of moved object to save API calls
dstObj.id = srcObj.id
dstObj.mimeType = srcObj.mimeType
dstObj.gcid = srcObj.gcid
dstObj.md5sum = srcObj.md5sum
dstObj.hasMetaData = true
if srcLeaf != dstLeaf {
// Rename
info, err := f.renameObject(ctx, srcObj.id, dstLeaf)
if err != nil {
return nil, fmt.Errorf("move: couldn't rename moved file: %w", err)
// Find the moved object and any conflict object with the same name.
var moved, conflict *api.File
_, err = f.listAll(ctx, dstParentID, api.KindOfFile, "false", func(item *api.File) bool {
if item.ID == srcObj.id {
moved = item
if item.Name == dstLeaf {
return true
}
} else if item.Name == dstLeaf {
conflict = item
}
return dstObj, dstObj.setMetaData(info)
// Stop early if both found
return moved != nil && conflict != nil
})
if err != nil {
return nil, fmt.Errorf("move: couldn't locate moved file %q in destination directory %q: %w", srcObj.id, dstParentID, err)
}
return dstObj, nil
if moved == nil {
return nil, fmt.Errorf("move: moved file %q not found in destination", srcObj.id)
}
// If moved object already has the correct name, return
if moved.Name == dstLeaf {
return dstObj, dstObj.setMetaData(moved)
}
// If name collision, delete conflicting file first
if conflict != nil {
if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
return nil, fmt.Errorf("move: couldn't delete conflicting file: %w", err)
}
defer func() {
if err != nil {
if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
fs.Logf(f, "move: couldn't restore conflicting file: %v", restoreErr)
}
}
}()
}
info, err := f.renameObject(ctx, srcObj.id, dstLeaf)
if err != nil {
return nil, fmt.Errorf("move: couldn't rename moved file %q to %q: %w", dstObj.id, dstLeaf, err)
}
return dstObj, dstObj.setMetaData(info)
}
// copy objects
//
// Objects cannot be copied to their current folder.
// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
//
// If a name collision occurs in the destination folder, PikPak might automatically
// rename the copied item(s) by appending a numbered suffix. For example,
// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 {
return nil
@@ -1177,13 +1302,13 @@ func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err e
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err := srcObj.readMetaData(ctx)
err = srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
@@ -1198,31 +1323,55 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - same parent")
return nil, fs.ErrorCantCopy
}
// Check for possible conflicts: Pikpak creates numbered copies on name collision.
var conflict *api.File
_, srcLeaf := dircache.SplitPath(srcObj.remote)
if srcLeaf == dstLeaf {
if conflict, err = f.readMetaDataForPath(ctx, remote); err == nil {
// delete conflicting file
if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
return nil, fmt.Errorf("copy: couldn't delete conflicting file: %w", err)
}
defer func() {
if err != nil {
if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
fs.Logf(f, "copy: couldn't restore conflicting file: %v", restoreErr)
}
}
}()
} else if err != fs.ErrorObjectNotFound {
return nil, err
}
} else {
dstDir, _ := dircache.SplitPath(remote)
dstObj.remote = path.Join(dstDir, srcLeaf)
if conflict, err = f.readMetaDataForPath(ctx, dstObj.remote); err == nil {
tmpName := conflict.Name + "-rclone-copy-" + random.String(8)
if _, err = f.renameObject(ctx, conflict.ID, tmpName); err != nil {
return nil, fmt.Errorf("copy: couldn't rename conflicting file: %w", err)
}
defer func() {
if _, renameErr := f.renameObject(ctx, conflict.ID, conflict.Name); renameErr != nil {
fs.Logf(f, "copy: couldn't rename conflicting file back to original: %v", renameErr)
}
}()
} else if err != fs.ErrorObjectNotFound {
return nil, err
}
}
// Copy the object
if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
// Update info of the copied object with new parent but source name
if info, err := dstObj.fs.readMetaDataForPath(ctx, srcObj.remote); err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
} else if err = dstObj.setMetaData(info); err != nil {
return nil, err
}
// Can't copy and change name in one step so we have to check if we have
// the correct name after copy
srcLeaf, _, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
err = dstObj.readMetaData(ctx)
if err != nil {
return nil, err
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
}
if srcLeaf != dstLeaf {
// Rename
info, err := f.renameObject(ctx, dstObj.id, dstLeaf)
if err != nil {
return nil, fmt.Errorf("copy: couldn't rename copied file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
return f.Move(ctx, dstObj, remote)
}
return dstObj, nil
}
@@ -1260,9 +1409,7 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
return
}
func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable) (err error) {
p := resumable.Params
func (f *Fs) newS3Client(ctx context.Context, p *api.ResumableParams) (s3Client *s3.Client, err error) {
// Create a credentials provider
creds := credentials.NewStaticCredentialsProvider(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken)
@@ -1272,22 +1419,64 @@ func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, s
if err != nil {
return
}
ci := fs.GetConfig(ctx)
cfg.RetryMaxAttempts = ci.LowLevelRetries
cfg.HTTPClient = getClient(ctx, &f.opt)
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("https://mypikpak.com/")
o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
o.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenRequired
})
partSize := chunksize.Calculator(name, size, int(manager.MaxUploadParts), f.opt.ChunkSize)
return client, nil
}
// Create an uploader with custom options
uploader := manager.NewUploader(client, func(u *manager.Uploader) {
u.PartSize = int64(partSize)
u.Concurrency = f.opt.UploadConcurrency
})
// Perform an upload
_, err = uploader.Upload(ctx, &s3.PutObjectInput{
func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable, options ...fs.OpenOption) (err error) {
p := resumable.Params
if size < 0 || size >= int64(f.opt.UploadCutoff) {
mu, err := f.newChunkWriter(ctx, name, size, p, in, options...)
if err != nil {
return fmt.Errorf("multipart upload failed to initialise: %w", err)
}
return mu.Upload(ctx)
}
// upload singlepart
client, err := f.newS3Client(ctx, p)
if err != nil {
return fmt.Errorf("failed to create upload client: %w", err)
}
req := &s3.PutObjectInput{
Bucket: &p.Bucket,
Key: &p.Key,
Body: in,
Body: io.NopCloser(in),
}
// Apply upload options
for _, option := range options {
key, value := option.Header()
lowerKey := strings.ToLower(key)
switch lowerKey {
case "":
// ignore
case "cache-control":
req.CacheControl = aws.String(value)
case "content-disposition":
req.ContentDisposition = aws.String(value)
case "content-encoding":
req.ContentEncoding = aws.String(value)
case "content-type":
req.ContentType = aws.String(value)
}
}
var s3opts = []func(*s3.Options){}
// Can't retry single part uploads as only have an io.Reader
s3opts = append(s3opts, func(o *s3.Options) {
o.RetryMaxAttempts = 1
})
err = f.pacer.CallNoRetry(func() (bool, error) {
_, err = client.PutObject(ctx, req, s3opts...)
return f.shouldRetry(ctx, nil, err)
})
return
}
@@ -1319,8 +1508,30 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
}
if new.File == nil {
return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete {
// early return; in case of zero-byte objects
}
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if new.Task != nil {
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
}
})()
// Note: The API might automatically append a numbered suffix to the filename,
// even if a file with the same name does not exist in the target directory.
if upName := f.opt.Enc.ToStandardName(new.File.Name); leaf != upName {
return nil, fserrors.NoRetryError(fmt.Errorf("uploaded file name mismatch: expected %q, got %q", leaf, upName))
}
// early return; in case of zero-byte objects or uploaded by matched gcid
if new.File.Phase == api.PhaseTypeComplete {
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still in type of `*accounting.Account` (meaning that it is unused)
// it is considered as a server side copy as no incoming/outgoing traffic occur at all
@@ -1330,22 +1541,10 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
return new.File, nil
}
defer atexit.OnError(&err, func() {
fs.Debugf(leaf, "canceling upload: %v", err)
if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil {
err = f.uploadByForm(ctx, in, req.Name, size, new.Form, options...)
} else if uploadType == api.UploadTypeResumable && new.Resumable != nil {
err = f.uploadByResumable(ctx, in, leaf, size, new.Resumable)
err = f.uploadByResumable(ctx, in, leaf, size, new.Resumable, options...)
} else {
err = fmt.Errorf("no method available for uploading: %+v", new)
}
@@ -1353,6 +1552,9 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
if new.Task == nil {
return new.File, nil
}
return new.File, f.waitTask(ctx, new.Task.ID)
}
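// Recap of the flow above: create the upload ticket, verify the returned
// name matches the requested leaf (aborting and cleaning up via the defer
// otherwise), return early for zero-byte or gcid-matched files, then upload
// by form or resumable transfer and wait for the server-side task, if any.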

View File

@@ -1,10 +1,10 @@
// Test PikPak filesystem interface
package pikpak_test
package pikpak
import (
"testing"
"github.com/rclone/rclone/backend/pikpak"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -12,6 +12,23 @@ import (
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPikPak:",
NilObject: (*pikpak.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
MaxChunkSize: maxChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)

View File

@@ -793,7 +793,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, err
}
usage = &fs.Usage{
Used: fs.NewUsageValue(int64(info.SpaceUsed)),
Used: fs.NewUsageValue(info.SpaceUsed),
}
return usage, nil
}

View File

@@ -1,3 +1,5 @@
//go:build !wasm
// Package protondrive implements the Proton Drive backend
package protondrive

View File

@@ -1,3 +1,5 @@
//go:build !wasm
package protondrive_test
import (

View File

@@ -0,0 +1,7 @@
// Build for protondrive for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build wasm
// Package protondrive implements the Proton Drive backend
package protondrive

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
// Package qingstor provides an interface to QingStor object storage
// Home: https://www.qingcloud.com/

View File

@@ -1,6 +1,6 @@
// Test QingStor filesystem interface
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package qingstor

View File

@@ -1,7 +1,7 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
//go:build plan9 || js || wasm
// Package qingstor provides an interface to QingStor object storage
// Home: https://www.qingcloud.com/

View File

@@ -1,6 +1,6 @@
// Upload object to QingStor
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package qingstor

View File

@@ -149,6 +149,9 @@ var providerOption = fs.Option{
}, {
Value: "Outscale",
Help: "OUTSCALE Object Storage (OOS)",
}, {
Value: "OVHcloud",
Help: "OVHcloud Object Storage",
}, {
Value: "Petabox",
Help: "Petabox Object Storage",
@@ -185,6 +188,9 @@ var providerOption = fs.Option{
}, {
Value: "Qiniu",
Help: "Qiniu Object Storage (Kodo)",
}, {
Value: "Zata",
Help: "Zata (S3 compatible Gateway)",
}, {
Value: "Other",
Help: "Any other S3 compatible provider",
@@ -490,6 +496,14 @@ func init() {
Value: "ap-northeast-1",
Help: "Northeast Asia Region 1.\nNeeds location constraint ap-northeast-1.",
}},
}, {
Name: "region",
Help: "Region where you can connect with.\n",
Provider: "Zata",
Examples: []fs.OptionExample{{
Value: "us-east-1",
Help: "Indore, Madhya Pradesh, India",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
@@ -524,6 +538,59 @@ func init() {
Value: "ap-northeast-1",
Help: "Tokyo, Japan",
}},
}, {
// References:
// https://help.ovhcloud.com/csm/en-public-cloud-storage-s3-location?id=kb_article_view&sysparm_article=KB0047384
// https://support.us.ovhcloud.com/hc/en-us/articles/10667991081107-Endpoints-and-Object-Storage-Geoavailability
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "OVHcloud",
Examples: []fs.OptionExample{{
Value: "gra",
Help: "Gravelines, France",
}, {
Value: "rbx",
Help: "Roubaix, France",
}, {
Value: "sbg",
Help: "Strasbourg, France",
}, {
Value: "eu-west-par",
Help: "Paris, France (3AZ)",
}, {
Value: "de",
Help: "Frankfurt, Germany",
}, {
Value: "uk",
Help: "London, United Kingdom",
}, {
Value: "waw",
Help: "Warsaw, Poland",
}, {
Value: "bhs",
Help: "Beauharnois, Canada",
}, {
Value: "ca-east-tor",
Help: "Toronto, Canada",
}, {
Value: "sgp",
Help: "Singapore",
}, {
Value: "ap-southeast-syd",
Help: "Sydney, Australia",
}, {
Value: "ap-south-mum",
Help: "Mumbai, India",
}, {
Value: "us-east-va",
Help: "Vint Hill, Virginia, USA",
}, {
Value: "us-west-or",
Help: "Hillsboro, Oregon, USA",
}, {
Value: "rbx-archive",
Help: "Roubaix, France (Cold Archive)",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
@@ -576,7 +643,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,IONOS,Petabox,Liara,Linode,Magalu,Qiniu,RackCorp,Scaleway,Selectel,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,IONOS,Petabox,Liara,Linode,Magalu,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega,Zata",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -1163,6 +1230,71 @@ func init() {
Value: "obs.ru-northwest-2.myhuaweicloud.com",
Help: "RU-Moscow2",
}},
}, {
Name: "endpoint",
Help: "Endpoint for OVHcloud Object Storage.",
Provider: "OVHcloud",
Examples: []fs.OptionExample{{
Value: "s3.gra.io.cloud.ovh.net",
Help: "OVHcloud Gravelines, France",
Provider: "OVHcloud",
}, {
Value: "s3.rbx.io.cloud.ovh.net",
Help: "OVHcloud Roubaix, France",
Provider: "OVHcloud",
}, {
Value: "s3.sbg.io.cloud.ovh.net",
Help: "OVHcloud Strasbourg, France",
Provider: "OVHcloud",
}, {
Value: "s3.eu-west-par.io.cloud.ovh.net",
Help: "OVHcloud Paris, France (3AZ)",
Provider: "OVHcloud",
}, {
Value: "s3.de.io.cloud.ovh.net",
Help: "OVHcloud Frankfurt, Germany",
Provider: "OVHcloud",
}, {
Value: "s3.uk.io.cloud.ovh.net",
Help: "OVHcloud London, United Kingdom",
Provider: "OVHcloud",
}, {
Value: "s3.waw.io.cloud.ovh.net",
Help: "OVHcloud Warsaw, Poland",
Provider: "OVHcloud",
}, {
Value: "s3.bhs.io.cloud.ovh.net",
Help: "OVHcloud Beauharnois, Canada",
Provider: "OVHcloud",
}, {
Value: "s3.ca-east-tor.io.cloud.ovh.net",
Help: "OVHcloud Toronto, Canada",
Provider: "OVHcloud",
}, {
Value: "s3.sgp.io.cloud.ovh.net",
Help: "OVHcloud Singapore",
Provider: "OVHcloud",
}, {
Value: "s3.ap-southeast-syd.io.cloud.ovh.net",
Help: "OVHcloud Sydney, Australia",
Provider: "OVHcloud",
}, {
Value: "s3.ap-south-mum.io.cloud.ovh.net",
Help: "OVHcloud Mumbai, India",
Provider: "OVHcloud",
}, {
Value: "s3.us-east-va.io.cloud.ovh.us",
Help: "OVHcloud Vint Hill, Virginia, USA",
Provider: "OVHcloud",
}, {
Value: "s3.us-west-or.io.cloud.ovh.us",
Help: "OVHcloud Hillsboro, Oregon, USA",
Provider: "OVHcloud",
}, {
Value: "s3.rbx-archive.io.cloud.ovh.net",
Help: "OVHcloud Roubaix, France (Cold Archive)",
Provider: "OVHcloud",
}},
}, {
Name: "endpoint",
Help: "Endpoint for Scaleway Object Storage.",
@@ -1380,6 +1512,14 @@ func init() {
Value: "s3-ap-northeast-1.qiniucs.com",
Help: "Northeast Asia Endpoint 1",
}},
}, {
Name: "endpoint",
Help: "Endpoint for Zata Object Storage.",
Provider: "Zata",
Examples: []fs.OptionExample{{
Value: "idr01.zata.ai",
Help: "South Asia Endpoint",
}},
}, {
// Selectel endpoints: https://docs.selectel.ru/en/cloud/object-storage/manage/domains/#s3-api-domains
Name: "endpoint",
@@ -1392,7 +1532,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,OVHcloud,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox,Zata",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -1927,7 +2067,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,IONOS,Leviia,Liara,Linode,Magalu,Outscale,Qiniu,RackCorp,Scaleway,Selectel,StackPath,Storj,TencentCOS,Petabox,Mega",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,IONOS,Leviia,Liara,Linode,Magalu,Outscale,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,StackPath,Storj,TencentCOS,Petabox,Mega",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -2409,6 +2549,11 @@ See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/lates
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
}, {
Name: "use_arn_region",
Help: `If true, enables ARN region support for the service.`,
Default: false,
Advanced: true,
}, {
Name: "leave_parts_on_error",
Provider: "AWS",
@@ -2616,7 +2761,7 @@ The parameter should be a date, "2006-01-02", datetime "2006-01-02
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
See [the time option docs](/docs/#time-option) for valid formats.
See [the time option docs](/docs/#time-options) for valid formats.
`,
Default: fs.Time{},
Advanced: true,
@@ -2956,6 +3101,7 @@ type Options struct {
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
UseARNRegion bool `config:"use_arn_region"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int32 `config:"list_chunk"`
ListVersion int `config:"list_version"`
@@ -3320,6 +3466,7 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
options = append(options, func(s3Opt *s3.Options) {
s3Opt.UsePathStyle = opt.ForcePathStyle
s3Opt.UseAccelerate = opt.UseAccelerateEndpoint
s3Opt.UseARNRegion = opt.UseARNRegion
// FIXME maybe this should be a tristate so can default to DualStackEndpointStateUnset?
if opt.UseDualStack {
s3Opt.EndpointOptions.UseDualStackEndpoint = aws.DualStackEndpointStateEnabled
@@ -3570,6 +3717,8 @@ func setQuirks(opt *Options) {
useAlreadyExists = false // untested
case "Outscale":
virtualHostStyle = false
case "OVHcloud":
// No quirks
case "RackCorp":
// No quirks
useMultipartEtag = false // untested
@@ -3631,6 +3780,11 @@ func setQuirks(opt *Options) {
urlEncodeListings = false
virtualHostStyle = false
useAlreadyExists = false // untested
case "Zata":
useMultipartEtag = false
mightGzip = false
useUnsignedPayload = false
useAlreadyExists = false
case "Exaba":
virtualHostStyle = false
case "GCS":
@@ -4434,7 +4588,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
}
foundItems += len(resp.Contents)
for i, object := range resp.Contents {
remote := deref(object.Key)
remote := *stringClone(deref(object.Key))
if urlEncodeListings {
remote, err = url.QueryUnescape(remote)
if err != nil {
@@ -5037,8 +5191,11 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
MultipartUpload: &types.CompletedMultipartUpload{
Parts: parts,
},
RequestPayer: req.RequestPayer,
UploadId: uid,
RequestPayer: req.RequestPayer,
SSECustomerAlgorithm: req.SSECustomerAlgorithm,
SSECustomerKey: req.SSECustomerKey,
SSECustomerKeyMD5: req.SSECustomerKeyMD5,
UploadId: uid,
})
return f.shouldRetry(ctx, err)
})
@@ -5887,7 +6044,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
func s3MetadataToMap(s3Meta map[string]string) map[string]string {
meta := make(map[string]string, len(s3Meta))
for k, v := range s3Meta {
meta[strings.ToLower(k)] = v
meta[strings.ToLower(k)] = *stringClone(v)
}
return meta
}
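// The stringClone/strings.Clone calls introduced in this change detach these
// small strings from the large HTTP response buffers they would otherwise
// keep alive (assumed rationale; stringClone and stringClonePointer are
// helpers defined elsewhere in this backend).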
@@ -5930,14 +6087,14 @@ func (o *Object) setMetaData(resp *s3.HeadObjectOutput) {
o.lastModified = *resp.LastModified
}
}
o.mimeType = deref(resp.ContentType)
o.mimeType = strings.Clone(deref(resp.ContentType))
// Set system metadata
o.storageClass = (*string)(&resp.StorageClass)
o.cacheControl = resp.CacheControl
o.contentDisposition = resp.ContentDisposition
o.contentEncoding = resp.ContentEncoding
o.contentLanguage = resp.ContentLanguage
o.storageClass = stringClone(string(resp.StorageClass))
o.cacheControl = stringClonePointer(resp.CacheControl)
o.contentDisposition = stringClonePointer(resp.ContentDisposition)
o.contentEncoding = stringClonePointer(removeAWSChunked(resp.ContentEncoding))
o.contentLanguage = stringClonePointer(resp.ContentLanguage)
// If decompressing then size and md5sum are unknown
if o.fs.opt.Decompress && deref(o.contentEncoding) == "gzip" {
@@ -6004,6 +6161,36 @@ func (o *Object) Storable() bool {
return true
}
// removeAWSChunked removes the "aws-chunked" content-coding from a
// Content-Encoding field value (RFC 9110). Comparison is case-insensitive.
// Returns nil if encoding is empty after removal.
func removeAWSChunked(pv *string) *string {
if pv == nil {
return nil
}
v := *pv
if v == "" {
return nil
}
if !strings.Contains(strings.ToLower(v), "aws-chunked") {
return pv
}
parts := strings.Split(v, ",")
out := make([]string, 0, len(parts))
for _, p := range parts {
tok := strings.TrimSpace(p)
if tok == "" || strings.EqualFold(tok, "aws-chunked") {
continue
}
out = append(out, tok)
}
if len(out) == 0 {
return nil
}
v = strings.Join(out, ",")
return &v
}
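// For example, "aws-chunked, gzip" becomes "gzip" and a bare "aws-chunked"
// becomes nil; TestRemoveAWSChunked later in this diff covers the edge cases
// (duplicates, surrounding spaces, case-insensitivity).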
func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.fs.opt.DownloadURL + bucketPath
var resp *http.Response
@@ -6172,7 +6359,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
o.setMetaData(&head)
// Decompress body if necessary
if deref(resp.ContentEncoding) == "gzip" {
if deref(removeAWSChunked(resp.ContentEncoding)) == "gzip" {
if o.fs.opt.Decompress || (resp.ContentLength == nil && o.fs.opt.MightGzip.Value) {
return readers.NewGzipReader(resp.Body)
}
@@ -6422,8 +6609,11 @@ func (w *s3ChunkWriter) Close(ctx context.Context) (err error) {
MultipartUpload: &types.CompletedMultipartUpload{
Parts: w.completedParts,
},
RequestPayer: w.multiPartUploadInput.RequestPayer,
UploadId: w.uploadID,
RequestPayer: w.multiPartUploadInput.RequestPayer,
SSECustomerAlgorithm: w.multiPartUploadInput.SSECustomerAlgorithm,
SSECustomerKey: w.multiPartUploadInput.SSECustomerKey,
SSECustomerKeyMD5: w.multiPartUploadInput.SSECustomerKeyMD5,
UploadId: w.uploadID,
})
return w.f.shouldRetry(ctx, err)
})
@@ -6452,8 +6642,8 @@ func (o *Object) uploadMultipart(ctx context.Context, src fs.ObjectInfo, in io.R
}
var s3cw *s3ChunkWriter = chunkWriter.(*s3ChunkWriter)
gotETag = s3cw.eTag
versionID = aws.String(s3cw.versionID)
gotETag = *stringClone(s3cw.eTag)
versionID = stringClone(s3cw.versionID)
hashOfHashes := md5.Sum(s3cw.md5s)
wantETag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(s3cw.completedParts))
@@ -6485,8 +6675,8 @@ func (o *Object) uploadSinglepartPutObject(ctx context.Context, req *s3.PutObjec
}
lastModified = time.Now()
if resp != nil {
etag = deref(resp.ETag)
versionID = resp.VersionId
etag = *stringClone(deref(resp.ETag))
versionID = stringClonePointer(resp.VersionId)
}
return etag, lastModified, versionID, nil
}
@@ -6538,8 +6728,8 @@ func (o *Object) uploadSinglepartPresignedRequest(ctx context.Context, req *s3.P
if date, err := http.ParseTime(resp.Header.Get("Date")); err != nil {
lastModified = date
}
etag = resp.Header.Get("Etag")
vID := resp.Header.Get("x-amz-version-id")
etag = *stringClone(resp.Header.Get("Etag"))
vID := *stringClone(resp.Header.Get("x-amz-version-id"))
if vID != "" {
versionID = &vID
}
@@ -6593,7 +6783,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition":
ui.req.ContentDisposition = pv
case "content-encoding":
ui.req.ContentEncoding = pv
ui.req.ContentEncoding = removeAWSChunked(pv)
case "content-language":
ui.req.ContentLanguage = pv
case "content-type":
@@ -6690,7 +6880,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition":
ui.req.ContentDisposition = aws.String(value)
case "content-encoding":
ui.req.ContentEncoding = aws.String(value)
ui.req.ContentEncoding = removeAWSChunked(aws.String(value))
case "content-language":
ui.req.ContentLanguage = aws.String(value)
case "content-type":

View File

@@ -248,6 +248,47 @@ func TestMergeDeleteMarkers(t *testing.T) {
}
}
func TestRemoveAWSChunked(t *testing.T) {
ps := func(s string) *string {
return &s
}
tests := []struct {
name string
in *string
want *string
}{
{"nil", nil, nil},
{"empty", ps(""), nil},
{"only aws", ps("aws-chunked"), nil},
{"leading aws", ps("aws-chunked, gzip"), ps("gzip")},
{"trailing aws", ps("gzip, aws-chunked"), ps("gzip")},
{"middle aws", ps("gzip, aws-chunked, br"), ps("gzip,br")},
{"case insensitive", ps("GZip, AwS-ChUnKeD, Br"), ps("GZip,Br")},
{"duplicates", ps("aws-chunked , aws-chunked"), nil},
{"no aws normalize spaces", ps(" gzip , br "), ps(" gzip , br ")},
{"surrounding spaces", ps(" aws-chunked "), nil},
{"no change", ps("gzip, br"), ps("gzip, br")},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := removeAWSChunked(tc.in)
check := func(want, got *string) {
t.Helper()
if want == nil {
assert.Nil(t, got)
} else {
require.NotNil(t, got)
assert.Equal(t, *want, *got)
}
}
check(tc.want, got)
// Idempotent
got2 := removeAWSChunked(got)
check(got, got2)
})
}
}
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()

View File

@@ -111,7 +111,8 @@ func init() {
encoder.EncodeSlash |
encoder.EncodeBackSlash |
encoder.EncodeDoubleQuote |
encoder.EncodeInvalidUtf8),
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
}},
})
}

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
// Package sftp provides a filesystem interface using github.com/pkg/sftp
package sftp
@@ -222,15 +222,45 @@ E.g. the second example above should be rewritten as:
Help: "Windows Command Prompt",
},
},
}, {
Name: "hashes",
Help: `Comma separated list of supported checksum types.`,
Default: fs.CommaSepList{},
Advanced: true,
}, {
Name: "md5sum_command",
Default: "",
Help: "The command used to read md5 hashes.\n\nLeave blank for autodetect.",
Help: "The command used to read MD5 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "sha1sum_command",
Default: "",
Help: "The command used to read sha1 hashes.\n\nLeave blank for autodetect.",
Help: "The command used to read SHA-1 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "crc32sum_command",
Default: "",
Help: "The command used to read CRC-32 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "sha256sum_command",
Default: "",
Help: "The command used to read SHA-256 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "blake3sum_command",
Default: "",
Help: "The command used to read BLAKE3 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "xxh3sum_command",
Default: "",
Help: "The command used to read XXH3 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "xxh128sum_command",
Default: "",
Help: "The command used to read XXH128 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "skip_links",
@@ -535,8 +565,14 @@ type Options struct {
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
ShellType string `config:"shell_type"`
Hashes fs.CommaSepList `config:"hashes"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
Crc32sumCommand string `config:"crc32sum_command"`
Sha256sumCommand string `config:"sha256sum_command"`
Blake3sumCommand string `config:"blake3sum_command"`
Xxh3sumCommand string `config:"xxh3sum_command"`
Xxh128sumCommand string `config:"xxh128sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
@@ -585,13 +621,18 @@ type Fs struct {
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs
remote string
size int64 // size of the object
modTime uint32 // modification time of the object as unix time
mode os.FileMode // mode bits from the file
md5sum *string // Cached MD5 checksum
sha1sum *string // Cached SHA1 checksum
fs *Fs
remote string
size int64 // size of the object
modTime uint32 // modification time of the object as unix time
mode os.FileMode // mode bits from the file
md5sum *string // Cached MD5 checksum
sha1sum *string // Cached SHA-1 checksum
crc32sum *string // Cached CRC-32 checksum
sha256sum *string // Cached SHA-256 checksum
blake3sum *string // Cached BLAKE3 checksum
xxh3sum *string // Cached XXH3 checksum
xxh128sum *string // Cached XXH128 checksum
}
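// Each cached checksum is computed at most once per Object (see Hash below)
// and then reused; a nil pointer means "not yet computed", not "empty hash".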
// conn encapsulates an ssh client and corresponding sftp client
@@ -891,7 +932,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
User: opt.User,
Auth: []ssh.AuthMethod{},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: f.ci.ConnectTimeout,
Timeout: time.Duration(f.ci.ConnectTimeout),
ClientVersion: "SSH-2.0-" + f.ci.UserAgent,
}
@@ -1643,14 +1684,112 @@ func (f *Fs) Hashes() hash.Set {
return *f.cachedHashes
}
hashSet := hash.NewHashSet()
f.cachedHashes = &hashSet
hashTypesSupported := hash.NewHashSet()
f.cachedHashes = &hashTypesSupported
if f.opt.DisableHashCheck || f.shellType == shellTypeNotSupported {
return hashSet
return hashTypesSupported
}
hashTypes := hash.NewHashSet()
if len(f.opt.Hashes) > 0 {
for _, hashName := range f.opt.Hashes {
var hashType hash.Type
if err := hashType.Set(hashName); err != nil {
fs.Infof(nil, "Invalid token %q in hash string %q", hashName, f.opt.Hashes.String())
}
hashTypes.Add(hashType)
}
} else {
hashTypes.Add(hash.MD5, hash.SHA1)
}
hashCommands := map[hash.Type]struct {
option *string
emptyHash string
hashCommands []struct{ hashFile, hashEmpty string }
}{
hash.MD5: {
&f.opt.Md5sumCommand,
"d41d8cd98f00b204e9800998ecf8427e",
[]struct{ hashFile, hashEmpty string }{
{"md5sum", "md5sum"},
{"md5 -r", "md5 -r"},
{"rclone md5sum", "rclone md5sum"},
},
},
hash.SHA1: {
&f.opt.Sha1sumCommand,
"da39a3ee5e6b4b0d3255bfef95601890afd80709",
[]struct{ hashFile, hashEmpty string }{
{"sha1sum", "sha1sum"},
{"sha1 -r", "sha1 -r"},
{"rclone sha1sum", "rclone sha1sum"},
},
},
hash.CRC32: {
&f.opt.Sha1sumCommand,
"00000000",
[]struct{ hashFile, hashEmpty string }{
{"crc32", "crc32"},
{"rclone hashsum crc32", "rclone hashsum crc32"},
},
},
hash.SHA256: {
&f.opt.Sha256sumCommand,
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
[]struct{ hashFile, hashEmpty string }{
{"sha256sum", "sha1sum"},
{"sha256 -r", "sha1 -r"},
{"rclone hashsum sha256", "rclone hashsum sha256"},
},
},
hash.BLAKE3: {
&f.opt.Blake3sumCommand,
"af1349b9f5f9a1a6a0404dea36dcc9499bcb25c9adc112b7cc9a93cae41f3262",
[]struct{ hashFile, hashEmpty string }{
{"b3sum", "b3sum"},
{"rclone hashsum blake3", "rclone hashsum blake3"},
},
},
hash.XXH3: {
&f.opt.Xxh3sumCommand,
"2d06800538d394c2",
[]struct{ hashFile, hashEmpty string }{
// The xxhsum tool uses a non-standard prefix "XXH3_" preceding the hash output for the 64-bit variant
// of XXH3, to avoid confusion with the older 64-bit algorithm XXH64. This was introduced in version
// 0.8.3 released Dec 30, 2024. Older versions only supported the alternative BSD style output format,
// otherwise optional with argument --tag. We are currently not expecting these output formats and can
// therefore not use the "xxhsum -H3" command or its xxh3sum alias directly.
//{"xxh3sum", "xxh3sum"},
//{"xxhsum -H3", "xxhsum -H3"},
{"rclone hashsum xxh3", "rclone hashsum xxh3"},
},
},
hash.XXH128: {
&f.opt.Xxh128sumCommand,
"99aa06d3014798d86001c324468d497f",
[]struct{ hashFile, hashEmpty string }{
{"xxh128sum", "xxh128sum"},
{"xxhsum -H2", "xxhsum -H2"},
{"rclone hashsum xxh128", "rclone hashsum xxh128"},
},
},
}
if f.shellType == "powershell" {
for _, hashType := range []hash.Type{hash.MD5, hash.SHA1, hash.SHA256} {
if entry, ok := hashCommands[hashType]; ok {
entry.hashCommands = append(hashCommands[hashType].hashCommands, struct {
hashFile, hashEmpty string
}{
fmt.Sprintf("&{param($Path);Get-FileHash -Algorithm %v -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}", hashType),
fmt.Sprintf("Get-FileHash -Algorithm %v -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}", hashType),
})
hashCommands[hashType] = entry
}
}
}
// look for a hash command which works
checkHash := func(hashType hash.Type, commands []struct{ hashFile, hashEmpty string }, expected string, hashCommand *string, changed *bool) bool {
if *hashCommand == hashCommandNotSupported {
return false
@@ -1679,55 +1818,25 @@ func (f *Fs) Hashes() hash.Set {
}
changed := false
md5Commands := []struct {
hashFile, hashEmpty string
}{
{"md5sum", "md5sum"},
{"md5 -r", "md5 -r"},
{"rclone md5sum", "rclone md5sum"},
for _, hashType := range hashTypes.Array() {
if entry, ok := hashCommands[hashType]; ok {
if works := checkHash(hashType, entry.hashCommands, entry.emptyHash, entry.option, &changed); works {
hashTypesSupported.Add(hashType)
}
}
}
sha1Commands := []struct {
hashFile, hashEmpty string
}{
{"sha1sum", "sha1sum"},
{"sha1 -r", "sha1 -r"},
{"rclone sha1sum", "rclone sha1sum"},
}
if f.shellType == "powershell" {
md5Commands = append(md5Commands, struct {
hashFile, hashEmpty string
}{
"&{param($Path);Get-FileHash -Algorithm MD5 -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}",
"Get-FileHash -Algorithm MD5 -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}",
})
sha1Commands = append(sha1Commands, struct {
hashFile, hashEmpty string
}{
"&{param($Path);Get-FileHash -Algorithm SHA1 -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}",
"Get-FileHash -Algorithm SHA1 -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}",
})
}
md5Works := checkHash(hash.MD5, md5Commands, "d41d8cd98f00b204e9800998ecf8427e", &f.opt.Md5sumCommand, &changed)
sha1Works := checkHash(hash.SHA1, sha1Commands, "da39a3ee5e6b4b0d3255bfef95601890afd80709", &f.opt.Sha1sumCommand, &changed)
if changed {
// Save permanently in config to avoid the extra work next time
fs.Debugf(f, "Setting hash command for %v to %q (set sha1sum_command to override)", hash.MD5, f.opt.Md5sumCommand)
f.m.Set("md5sum_command", f.opt.Md5sumCommand)
fs.Debugf(f, "Setting hash command for %v to %q (set md5sum_command to override)", hash.SHA1, f.opt.Sha1sumCommand)
f.m.Set("sha1sum_command", f.opt.Sha1sumCommand)
for _, hashType := range hashTypes.Array() {
if entry, ok := hashCommands[hashType]; ok {
fs.Debugf(f, "Setting hash command for %v to %q (set %vsum_command to override)", hashType, *entry.option, hashType)
f.m.Set(fmt.Sprintf("%vsum_command", hashType), *entry.option)
}
}
}
if sha1Works {
hashSet.Add(hash.SHA1)
}
if md5Works {
hashSet.Add(hash.MD5)
}
return hashSet
return hashTypesSupported
}
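For reference, the config keys written by the save loop above are derived from the hash type name via fmt.Sprintf("%vsum_command", hashType), so a remote that has been probed once might end up with entries like these (illustrative values, assuming the standard tools were found on the remote):

    md5sum_command = md5sum
    sha256sum_command = rclone hashsum sha256
    xxh128sum_command = xxh128sum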
// About gets usage stats
@@ -1754,9 +1863,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
free := vfsStats.FreeSpace()
used := total - free
return &fs.Usage{
Total: fs.NewUsageValue(int64(total)),
Used: fs.NewUsageValue(int64(used)),
Free: fs.NewUsageValue(int64(free)),
Total: fs.NewUsageValue(total),
Used: fs.NewUsageValue(used),
Free: fs.NewUsageValue(free),
}, nil
} else if err != nil {
if errors.Is(err, os.ErrNotExist) {
@@ -1862,17 +1971,43 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
_ = o.fs.Hashes()
var hashCmd string
if r == hash.MD5 {
switch r {
case hash.MD5:
if o.md5sum != nil {
return *o.md5sum, nil
}
hashCmd = o.fs.opt.Md5sumCommand
} else if r == hash.SHA1 {
case hash.SHA1:
if o.sha1sum != nil {
return *o.sha1sum, nil
}
hashCmd = o.fs.opt.Sha1sumCommand
} else {
case hash.CRC32:
if o.crc32sum != nil {
return *o.crc32sum, nil
}
hashCmd = o.fs.opt.Crc32sumCommand
case hash.SHA256:
if o.sha256sum != nil {
return *o.sha256sum, nil
}
hashCmd = o.fs.opt.Sha256sumCommand
case hash.BLAKE3:
if o.blake3sum != nil {
return *o.blake3sum, nil
}
hashCmd = o.fs.opt.Blake3sumCommand
case hash.XXH3:
if o.xxh3sum != nil {
return *o.xxh3sum, nil
}
hashCmd = o.fs.opt.Xxh3sumCommand
case hash.XXH128:
if o.xxh128sum != nil {
return *o.xxh128sum, nil
}
hashCmd = o.fs.opt.Xxh128sumCommand
default:
return "", hash.ErrUnsupported
}
if hashCmd == "" || hashCmd == hashCommandNotSupported {
@@ -1889,10 +2024,21 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
}
hashString := parseHash(outBytes)
fs.Debugf(o, "Parsed hash: %s", hashString)
if r == hash.MD5 {
switch r {
case hash.MD5:
o.md5sum = &hashString
} else if r == hash.SHA1 {
case hash.SHA1:
o.sha1sum = &hashString
case hash.CRC32:
o.crc32sum = &hashString
case hash.SHA256:
o.sha256sum = &hashString
case hash.BLAKE3:
o.blake3sum = &hashString
case hash.XXH3:
o.xxh3sum = &hashString
case hash.XXH128:
o.xxh128sum = &hashString
}
return hashString, nil
}
@@ -1957,7 +2103,7 @@ func (f *Fs) remoteShellPath(remote string) string {
}
// Converts a byte array from the SSH session returned by
// an invocation of md5sum/sha1sum to a hash string
// an invocation of a hash command to a hash string
// as expected by the rest of this application
func parseHash(bytes []byte) string {
// For strings with backslash *sum writes a leading \
@@ -2186,6 +2332,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Clear the hash cache since we are about to update the object
o.md5sum = nil
o.sha1sum = nil
o.crc32sum = nil
o.sha256sum = nil
o.blake3sum = nil
o.xxh3sum = nil
o.xxh128sum = nil
c, err := o.fs.getSftpConnection(ctx)
if err != nil {
return fmt.Errorf("Update: %w", err)

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -1,6 +1,6 @@
// Test Sftp filesystem interface
//go:build !plan9
//go:build !plan9 && !wasm
package sftp_test

View File

@@ -1,7 +1,7 @@
// Build for sftp for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9
//go:build plan9 || wasm
// Package sftp provides a filesystem interface using github.com/pkg/sftp
package sftp

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -1,4 +1,4 @@
//go:build !plan9
//go:build !plan9 && !wasm
package sftp

View File

@@ -38,7 +38,7 @@ func (f *Fs) dial(ctx context.Context, network, addr string) (*conn, error) {
d := &smb2.Dialer{}
if f.opt.UseKerberos {
cl, err := getKerberosClient()
cl, err := NewKerberosFactory().GetClient(f.opt.KerberosCCache)
if err != nil {
return nil, err
}

backend/smb/filepool.go (new file, 99 lines)
View File

@@ -0,0 +1,99 @@
package smb
import (
"context"
"fmt"
"os"
"sync"
"github.com/cloudsoda/go-smb2"
"golang.org/x/sync/errgroup"
)
// FsInterface defines the methods that filePool needs from Fs
type FsInterface interface {
getConnection(ctx context.Context, share string) (*conn, error)
putConnection(pc **conn, err error)
removeSession()
}
type file struct {
*smb2.File
c *conn
}
type filePool struct {
ctx context.Context
fs FsInterface
share string
path string
mu sync.Mutex
pool []*file
}
func newFilePool(ctx context.Context, fs FsInterface, share, path string) *filePool {
return &filePool{
ctx: ctx,
fs: fs,
share: share,
path: path,
}
}
func (p *filePool) get() (*file, error) {
p.mu.Lock()
if len(p.pool) > 0 {
f := p.pool[len(p.pool)-1]
p.pool = p.pool[:len(p.pool)-1]
p.mu.Unlock()
return f, nil
}
p.mu.Unlock()
c, err := p.fs.getConnection(p.ctx, p.share)
if err != nil {
return nil, err
}
fl, err := c.smbShare.OpenFile(p.path, os.O_WRONLY, 0o644)
if err != nil {
p.fs.putConnection(&c, err)
return nil, fmt.Errorf("failed to open: %w", err)
}
return &file{File: fl, c: c}, nil
}
func (p *filePool) put(f *file, err error) {
if f == nil {
return
}
if err != nil {
_ = f.Close()
p.fs.putConnection(&f.c, err)
return
}
p.mu.Lock()
p.pool = append(p.pool, f)
p.mu.Unlock()
}
func (p *filePool) drain() error {
p.mu.Lock()
files := p.pool
p.pool = nil
p.mu.Unlock()
g, _ := errgroup.WithContext(p.ctx)
for _, f := range files {
g.Go(func() error {
err := f.Close()
p.fs.putConnection(&f.c, err)
return err
})
}
return g.Wait()
}
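Taken together, get/put/drain give filePool the borrow/return contract of a classic resource pool. A minimal sketch of the intended calling pattern, using only names from this file (ctx, fs, buf and off are placeholders supplied by the caller):

    pool := newFilePool(ctx, fs, "share", "dir/file.bin")
    f, err := pool.get()           // reuse a pooled handle or open a new one
    if err != nil {
        return err
    }
    _, werr := f.WriteAt(buf, off) // positioned write on the borrowed handle
    pool.put(f, werr)              // recycled on success; closed and discarded on error
    // ... further get/put cycles, possibly from several goroutines ...
    return pool.drain()            // close every pooled handle when finished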

View File

@@ -0,0 +1,228 @@
package smb
import (
"context"
"errors"
"sync"
"testing"
"github.com/cloudsoda/go-smb2"
"github.com/stretchr/testify/assert"
)
// Mock Fs that implements FsInterface
type mockFs struct {
mu sync.Mutex
putConnectionCalled bool
putConnectionErr error
getConnectionCalled bool
getConnectionErr error
getConnectionResult *conn
removeSessionCalled bool
}
func (m *mockFs) putConnection(pc **conn, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.putConnectionCalled = true
m.putConnectionErr = err
}
func (m *mockFs) getConnection(ctx context.Context, share string) (*conn, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.getConnectionCalled = true
if m.getConnectionErr != nil {
return nil, m.getConnectionErr
}
if m.getConnectionResult != nil {
return m.getConnectionResult, nil
}
return &conn{}, nil
}
func (m *mockFs) removeSession() {
m.mu.Lock()
defer m.mu.Unlock()
m.removeSessionCalled = true
}
func (m *mockFs) isPutConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionCalled
}
func (m *mockFs) getPutConnectionErr() error {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionErr
}
func (m *mockFs) isGetConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.getConnectionCalled
}
func newMockFs() *mockFs {
return &mockFs{}
}
// Helper function to create a mock file
func newMockFile() *file {
return &file{
File: &smb2.File{},
c: &conn{},
}
}
// Test filePool creation
func TestNewFilePool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
share := "testshare"
path := "/test/path"
pool := newFilePool(ctx, fs, share, path)
assert.NotNil(t, pool)
assert.Equal(t, ctx, pool.ctx)
assert.Equal(t, fs, pool.fs)
assert.Equal(t, share, pool.share)
assert.Equal(t, path, pool.path)
assert.Empty(t, pool.pool)
}
// Test getting file from pool when pool has files
func TestFilePool_Get_FromPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add a mock file to the pool
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
// Get file from pool
f, err := pool.get()
assert.NoError(t, err)
assert.NotNil(t, f)
assert.Equal(t, mockFile, f)
assert.Empty(t, pool.pool)
}
// Test getting file when pool is empty
func TestFilePool_Get_EmptyPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
// Set up the mock to return an error from getConnection
// This tests that the pool calls getConnection when empty
fs.getConnectionErr = errors.New("connection failed")
pool := newFilePool(ctx, fs, "testshare", "test/path")
// This should call getConnection and return the error
f, err := pool.get()
assert.Error(t, err)
assert.Nil(t, f)
assert.True(t, fs.isGetConnectionCalled())
assert.Equal(t, "connection failed", err.Error())
}
// Test putting file successfully
func TestFilePool_Put_Success(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, nil)
assert.Len(t, pool.pool, 1)
assert.Equal(t, mockFile, pool.pool[0])
}
// Test putting file with error
func TestFilePool_Put_WithError(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, errors.New("write error"))
// Should call putConnection with error
assert.True(t, fs.isPutConnectionCalled())
assert.Equal(t, errors.New("write error"), fs.getPutConnectionErr())
assert.Empty(t, pool.pool)
}
// Test putting nil file
func TestFilePool_Put_NilFile(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Should not panic
pool.put(nil, nil)
pool.put(nil, errors.New("some error"))
assert.Empty(t, pool.pool)
}
// Test draining pool with files
func TestFilePool_Drain_WithFiles(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add mock files to pool
mockFile1 := newMockFile()
mockFile2 := newMockFile()
pool.pool = append(pool.pool, mockFile1, mockFile2)
// Before draining
assert.Len(t, pool.pool, 2)
_ = pool.drain()
assert.Empty(t, pool.pool)
}
// Test concurrent access to pool
func TestFilePool_ConcurrentAccess(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
const numGoroutines = 10
for i := 0; i < numGoroutines; i++ {
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
}
// Test concurrent get operations
done := make(chan bool, numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func() {
defer func() { done <- true }()
f, err := pool.get()
if err == nil {
pool.put(f, nil)
}
}()
}
for i := 0; i < numGoroutines; i++ {
<-done
}
// Pool should be in a consistent state after the concurrent access
assert.Len(t, pool.pool, numGoroutines)
}

View File

@@ -7,72 +7,132 @@ import (
"path/filepath"
"strings"
"sync"
"time"
"github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials"
)
var (
kerberosClient *client.Client
kerberosErr error
kerberosOnce sync.Once
)
// KerberosFactory encapsulates dependencies and caches for Kerberos clients.
type KerberosFactory struct {
// clientCache caches Kerberos clients keyed by resolved ccache path.
// Clients are reused unless the associated ccache file changes.
clientCache sync.Map // map[string]*client.Client
// getKerberosClient returns a Kerberos client that can be used to authenticate.
func getKerberosClient() (*client.Client, error) {
if kerberosClient == nil || kerberosErr == nil {
kerberosOnce.Do(func() {
kerberosClient, kerberosErr = createKerberosClient()
})
}
// errCache caches errors encountered when loading Kerberos clients.
// Prevents repeated attempts for paths that previously failed.
errCache sync.Map // map[string]error
return kerberosClient, kerberosErr
// modTimeCache tracks the last known modification time of ccache files.
// Used to detect changes and trigger credential refresh.
modTimeCache sync.Map // map[string]time.Time
loadCCache func(string) (*credentials.CCache, error)
newClient func(*credentials.CCache, *config.Config, ...func(*client.Settings)) (*client.Client, error)
loadConfig func() (*config.Config, error)
}
// createKerberosClient creates a new Kerberos client.
func createKerberosClient() (*client.Client, error) {
cfgPath := os.Getenv("KRB5_CONFIG")
if cfgPath == "" {
cfgPath = "/etc/krb5.conf"
// NewKerberosFactory creates a new instance of KerberosFactory with default dependencies.
func NewKerberosFactory() *KerberosFactory {
return &KerberosFactory{
loadCCache: credentials.LoadCCache,
newClient: client.NewFromCCache,
loadConfig: defaultLoadKerberosConfig,
}
}
cfg, err := config.Load(cfgPath)
// GetClient returns a cached Kerberos client or creates a new one if needed.
func (kf *KerberosFactory) GetClient(ccachePath string) (*client.Client, error) {
resolvedPath, err := resolveCcachePath(ccachePath)
if err != nil {
return nil, err
}
// Determine the ccache location from the environment, falling back to the
// default location.
ccachePath := os.Getenv("KRB5CCNAME")
stat, err := os.Stat(resolvedPath)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
mtime := stat.ModTime()
if oldMod, ok := kf.modTimeCache.Load(resolvedPath); ok {
if oldTime, ok := oldMod.(time.Time); ok && oldTime.Equal(mtime) {
if errVal, ok := kf.errCache.Load(resolvedPath); ok {
return nil, errVal.(error)
}
if clientVal, ok := kf.clientCache.Load(resolvedPath); ok {
return clientVal.(*client.Client), nil
}
}
}
// Load Kerberos config
cfg, err := kf.loadConfig()
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Load ccache
ccache, err := kf.loadCCache(resolvedPath)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Create new client
cl, err := kf.newClient(ccache, cfg)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Cache and return
kf.clientCache.Store(resolvedPath, cl)
kf.errCache.Delete(resolvedPath)
kf.modTimeCache.Store(resolvedPath, mtime)
return cl, nil
}
// resolveCcachePath resolves the KRB5 ccache path.
func resolveCcachePath(ccachePath string) (string, error) {
if ccachePath == "" {
ccachePath = os.Getenv("KRB5CCNAME")
}
switch {
case strings.Contains(ccachePath, ":"):
parts := strings.SplitN(ccachePath, ":", 2)
switch parts[0] {
prefix, path := parts[0], parts[1]
switch prefix {
case "FILE":
ccachePath = parts[1]
return path, nil
case "DIR":
primary, err := os.ReadFile(filepath.Join(parts[1], "primary"))
primary, err := os.ReadFile(filepath.Join(path, "primary"))
if err != nil {
return nil, err
return "", err
}
ccachePath = filepath.Join(parts[1], strings.TrimSpace(string(primary)))
return filepath.Join(path, strings.TrimSpace(string(primary))), nil
default:
return nil, fmt.Errorf("unsupported KRB5CCNAME: %s", ccachePath)
return "", fmt.Errorf("unsupported KRB5CCNAME: %s", ccachePath)
}
case ccachePath == "":
u, err := user.Current()
if err != nil {
return nil, err
return "", err
}
ccachePath = "/tmp/krb5cc_" + u.Uid
return "/tmp/krb5cc_" + u.Uid, nil
default:
return ccachePath, nil
}
ccache, err := credentials.LoadCCache(ccachePath)
if err != nil {
return nil, err
}
return client.NewFromCCache(ccache, cfg)
}
// defaultLoadKerberosConfig loads Kerberos config from default or env path.
func defaultLoadKerberosConfig() (*config.Config, error) {
cfgPath := os.Getenv("KRB5_CONFIG")
if cfgPath == "" {
cfgPath = "/etc/krb5.conf"
}
return config.Load(cfgPath)
}
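As a usage sketch (names from this file; the ccache path is a placeholder): a single factory can be shared process-wide, and repeated calls with the same path are served from the cache until the ccache file's modification time changes:

    kf := NewKerberosFactory()
    // "" falls back to KRB5CCNAME and then to /tmp/krb5cc_<uid>
    cl, err := kf.GetClient("FILE:/tmp/krb5cc_1000")
    if err != nil {
        return err
    }
    // cl is a *client.Client ready for the SMB dialer's Kerberos initiator;
    // calling GetClient again after the ccache is rewritten reloads it.
    _ = cl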

View File

@@ -0,0 +1,142 @@
package smb
import (
"os"
"path/filepath"
"testing"
"time"
"github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials"
"github.com/stretchr/testify/assert"
)
func TestResolveCcachePath(t *testing.T) {
tmpDir := t.TempDir()
// Setup: files for FILE and DIR modes
fileCcache := filepath.Join(tmpDir, "file_ccache")
err := os.WriteFile(fileCcache, []byte{}, 0600)
assert.NoError(t, err)
dirCcache := filepath.Join(tmpDir, "dir_ccache")
err = os.Mkdir(dirCcache, 0755)
assert.NoError(t, err)
err = os.WriteFile(filepath.Join(dirCcache, "primary"), []byte("ticket"), 0600)
assert.NoError(t, err)
dirCcacheTicket := filepath.Join(dirCcache, "ticket")
err = os.WriteFile(dirCcacheTicket, []byte{}, 0600)
assert.NoError(t, err)
tests := []struct {
name string
ccachePath string
envKRB5CCNAME string
expected string
expectError bool
}{
{
name: "FILE: prefix from env",
ccachePath: "",
envKRB5CCNAME: "FILE:" + fileCcache,
expected: fileCcache,
},
{
name: "DIR: prefix from env",
ccachePath: "",
envKRB5CCNAME: "DIR:" + dirCcache,
expected: dirCcacheTicket,
},
{
name: "Unsupported prefix",
ccachePath: "",
envKRB5CCNAME: "MEMORY:/bad/path",
expectError: true,
},
{
name: "Direct file path (no prefix)",
ccachePath: "/tmp/myccache",
expected: "/tmp/myccache",
},
{
name: "Default to /tmp/krb5cc_<uid>",
ccachePath: "",
envKRB5CCNAME: "",
expected: "/tmp/krb5cc_",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("KRB5CCNAME", tt.envKRB5CCNAME)
result, err := resolveCcachePath(tt.ccachePath)
if tt.expectError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Contains(t, result, tt.expected)
}
})
}
}
func TestKerberosFactory_GetClient_ReloadOnCcacheChange(t *testing.T) {
// Create temp ccache file
tmpFile, err := os.CreateTemp("", "krb5cc_test")
assert.NoError(t, err)
defer func() {
if err := os.Remove(tmpFile.Name()); err != nil {
t.Logf("Failed to remove temp file %s: %v", tmpFile.Name(), err)
}
}()
unixPath := filepath.ToSlash(tmpFile.Name())
ccachePath := "FILE:" + unixPath
initialContent := []byte("CCACHE_VERSION 4\n")
_, err = tmpFile.Write(initialContent)
assert.NoError(t, err)
assert.NoError(t, tmpFile.Close())
// Setup mocks
loadCallCount := 0
mockLoadCCache := func(path string) (*credentials.CCache, error) {
loadCallCount++
return &credentials.CCache{}, nil
}
mockNewClient := func(cc *credentials.CCache, cfg *config.Config, opts ...func(*client.Settings)) (*client.Client, error) {
return &client.Client{}, nil
}
mockLoadConfig := func() (*config.Config, error) {
return &config.Config{}, nil
}
factory := &KerberosFactory{
loadCCache: mockLoadCCache,
newClient: mockNewClient,
loadConfig: mockLoadConfig,
}
// First call — triggers loading
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 1, loadCallCount, "expected 1 load call")
// Second call — should reuse cache, no additional load
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 1, loadCallCount, "expected cached reuse, no new load")
// Simulate file update
time.Sleep(1 * time.Second) // ensure mtime changes
err = os.WriteFile(tmpFile.Name(), []byte("CCACHE_VERSION 4\n#updated"), 0600)
assert.NoError(t, err)
// Third call — should detect change, reload
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 2, loadCallCount, "expected reload on changed ccache")
}

View File

@@ -3,6 +3,7 @@ package smb
import (
"context"
"errors"
"fmt"
"io"
"os"
@@ -107,6 +108,20 @@ Set to 0 to keep connections indefinitely.
Help: "Whether the server is configured to be case-insensitive.\n\nAlways true on Windows shares.",
Default: true,
Advanced: true,
}, {
Name: "kerberos_ccache",
Help: `Path to the Kerberos credential cache (krb5cc).
Overrides the default KRB5CCNAME environment variable and allows this
instance of the SMB backend to use a different Kerberos cache file.
This is useful when mounting multiple SMB shares with different credentials
or running in multi-user environments.
Supported formats:
- FILE:/path/to/ccache Use the specified file.
- DIR:/path/to/ccachedir Use the primary file inside the specified directory.
- /path/to/ccache Interpreted as a file path.`,
Advanced: true,
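// Illustrative only (hypothetical remote "mysmb"; the flag name below follows
// rclone's usual backend option naming):
//   rclone lsd --smb-kerberos-ccache FILE:/tmp/krb5cc_alice mysmb:share
// or in the config file:
//   [mysmb]
//   kerberos_ccache = DIR:/run/user/1000/krb5cc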
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -137,6 +152,7 @@ type Options struct {
Domain string `config:"domain"`
SPN string `config:"spn"`
UseKerberos bool `config:"use_kerberos"`
KerberosCCache string `config:"kerberos_ccache"`
HideSpecial bool `config:"hide_special_share"`
CaseInsensitive bool `config:"case_insensitive"`
IdleTimeout fs.Duration `config:"idle_timeout"`
@@ -479,22 +495,82 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return nil, err
}
bs := int64(stat.BlockSize())
bs := stat.BlockSize()
usage := &fs.Usage{
Total: fs.NewUsageValue(bs * int64(stat.TotalBlockCount())),
Used: fs.NewUsageValue(bs * int64(stat.TotalBlockCount()-stat.FreeBlockCount())),
Free: fs.NewUsageValue(bs * int64(stat.AvailableBlockCount())),
Total: fs.NewUsageValue(bs * stat.TotalBlockCount()),
Used: fs.NewUsageValue(bs * (stat.TotalBlockCount() - stat.FreeBlockCount())),
Free: fs.NewUsageValue(bs * stat.AvailableBlockCount()),
}
return usage, nil
}
type smbWriterAt struct {
pool *filePool
closed bool
closeMu sync.Mutex
wg sync.WaitGroup
}
func (w *smbWriterAt) WriteAt(p []byte, off int64) (int, error) {
w.closeMu.Lock()
if w.closed {
w.closeMu.Unlock()
return 0, errors.New("writer already closed")
}
w.wg.Add(1)
w.closeMu.Unlock()
defer w.wg.Done()
f, err := w.pool.get()
if err != nil {
return 0, fmt.Errorf("failed to get file from pool: %w", err)
}
n, writeErr := f.WriteAt(p, off)
w.pool.put(f, writeErr)
if writeErr != nil {
return n, fmt.Errorf("failed to write at offset %d: %w", off, writeErr)
}
return n, writeErr
}
func (w *smbWriterAt) Close() error {
w.closeMu.Lock()
defer w.closeMu.Unlock()
if w.closed {
return nil
}
w.closed = true
// Wait for all pending writes to finish
w.wg.Wait()
var errs []error
// Drain the pool
if err := w.pool.drain(); err != nil {
errs = append(errs, fmt.Errorf("failed to drain file pool: %w", err))
}
// Remove session
w.pool.fs.removeSession()
if len(errs) > 0 {
return errors.Join(errs...)
}
return nil
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
var err error
o := &Object{
fs: f,
remote: remote,
@@ -504,27 +580,42 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, fs.ErrorIsDir
}
err = o.fs.ensureDirectory(ctx, share, filename)
err := o.fs.ensureDirectory(ctx, share, filename)
if err != nil {
return nil, fmt.Errorf("failed to make parent directories: %w", err)
}
filename = o.fs.toSambaPath(filename)
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
smbPath := o.fs.toSambaPath(filename)
// One-time truncate
cn, err := o.fs.getConnection(ctx, share)
if err != nil {
return nil, err
}
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
file, err := cn.smbShare.OpenFile(smbPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
if err != nil {
return nil, fmt.Errorf("failed to open: %w", err)
o.fs.putConnection(&cn, err)
return nil, err
}
if size > 0 {
if truncateErr := file.Truncate(size); truncateErr != nil {
_ = file.Close()
o.fs.putConnection(&cn, truncateErr)
return nil, fmt.Errorf("failed to truncate file: %w", truncateErr)
}
}
if closeErr := file.Close(); closeErr != nil {
o.fs.putConnection(&cn, closeErr)
return nil, fmt.Errorf("failed to close file after truncate: %w", closeErr)
}
o.fs.putConnection(&cn, nil)
return fl, nil
// Add a new session
o.fs.addSession()
return &smbWriterAt{
pool: newFilePool(ctx, o.fs, share, smbPath),
}, nil
}
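A sketch of how a multi-threaded upload can drive this writer (chunking is simplified; size, chunkSize and readChunk are hypothetical placeholders, with rclone's generic chunk writer built on OpenWriterAt doing this work in practice):

    w, err := f.OpenWriterAt(ctx, "dir/file.bin", size)
    if err != nil {
        return err
    }
    g, _ := errgroup.WithContext(ctx)
    for off := int64(0); off < size; off += chunkSize {
        off := off
        g.Go(func() error {
            buf := readChunk(off)         // hypothetical: returns the chunk at this offset
            _, err := w.WriteAt(buf, off) // each write borrows its own pooled handle
            return err
        })
    }
    err = g.Wait()
    if closeErr := w.Close(); err == nil { // Close waits for in-flight writes and drains the pool
        err = closeErr
    }
    return err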
// Shutdown the backend, closing any background tasks and any

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/rclone/rclone/backend/smb"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -18,6 +19,9 @@ func TestIntegration(t *testing.T) {
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
krb5Dir := t.TempDir()
t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf"))
t.Setenv("KRB5CCNAME", filepath.Join(krb5Dir, "ccache"))
@@ -26,3 +30,24 @@ func TestIntegration2(t *testing.T) {
NilObject: (*smb.Object)(nil),
})
}
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
krb5Dir := t.TempDir()
t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf"))
ccache := filepath.Join(krb5Dir, "ccache")
t.Setenv("RCLONE_TEST_CUSTOM_CCACHE_LOCATION", ccache)
name := "TestSMBKerberosCcache"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":rclone",
NilObject: (*smb.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "kerberos_ccache", Value: ccache},
},
})
}

View File

@@ -491,8 +491,8 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
ApplicationCredentialName: opt.ApplicationCredentialName,
ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: 10 * ci.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * ci.Timeout, // Use the timeouts in the transport
ConnectTimeout: time.Duration(10 * ci.ConnectTimeout), // Use the timeouts in the transport
Timeout: time.Duration(10 * ci.Timeout), // Use the timeouts in the transport
Transport: fshttp.NewTransport(ctx),
FetchUntilEmptyPage: opt.FetchUntilEmptyPage,
PartialPageFetchThreshold: opt.PartialPageFetchThreshold,

View File

@@ -1550,7 +1550,7 @@ func (o *Object) extraHeaders(ctx context.Context, src fs.ObjectInfo) map[string
extraHeaders := map[string]string{}
if o.fs.useOCMtime || o.fs.hasOCMD5 || o.fs.hasOCSHA1 {
if o.fs.useOCMtime {
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", o.modTime.Unix())
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5

View File

@@ -12,4 +12,5 @@
<seb•ɑƬ•chezwam•ɖɵʈ•org>
<allllaboutyou@gmail.com>
<psycho@feltzv.fr>
<afw5059@gmail.com>
<afw5059@gmail.com>
<piyushgarg80>
