mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


451 Commits

Author SHA1 Message Date
Nick Craig-Wood
23e9066b4f build: make rclone build with wasip1/wasm as well as js/wasm
Fixes #7831
2025-08-22 00:07:58 +01:00
Nick Craig-Wood
693ca3215f Revert "build: disable wasm/js build due to go bug"
This reverts commit 0470450583 as
the bug is now fixed in go1.25
2025-08-22 00:07:58 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy of files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which escapes everything.

Fixes #8754
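
A standalone illustration of the escaping difference described above (not the backend code itself), showing how the two standard library helpers treat `&`:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	name := "foo & bar.txt"
	// PathEscape leaves & alone, which the copy-source header rejects.
	fmt.Println(url.PathEscape(name)) // foo%20&%20bar.txt
	// QueryEscape escapes & as %26 (and spaces as +).
	fmt.Println(url.QueryEscape(name)) // foo+%26+bar.txt
}
```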
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates a separate connection for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.

Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
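
A minimal sketch of stripping the token from a comma-separated Content-Type value; the helper name is illustrative, not rclone's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// stripAWSChunked drops the "aws-chunked" token from a Content-Type value.
func stripAWSChunked(contentType string) string {
	var kept []string
	for _, part := range strings.Split(contentType, ",") {
		part = strings.TrimSpace(part)
		if strings.EqualFold(part, "aws-chunked") {
			continue // remove the SigV4 streaming marker
		}
		kept = append(kept, part)
	}
	return strings.Join(kept, ", ")
}

func main() {
	fmt.Println(stripAWSChunked("aws-chunked, text/plain")) // text/plain
}
```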
2025-08-16 17:11:54 +01:00
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
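
A minimal sketch of reading a pasted token with github.com/peterh/liner (a standalone example, not rclone's config code):

```go
package main

import (
	"fmt"
	"log"

	"github.com/peterh/liner"
)

func main() {
	l := liner.NewLiner()
	defer l.Close()

	// liner reads the whole pasted line with no 4095 byte terminal limit
	// and gives sensible cursor editing.
	token, err := l.Prompt("Paste the token here: ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("read", len(token), "bytes")
}
```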
2025-08-16 16:44:35 +01:00
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc, the average loop
would be cancelled at the end of the rc request, leading to the speed
not being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
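
A minimal sketch of the idea with a hypothetical helper: derive the loop's lifetime from context.Background() rather than the short-lived rc request context:

```go
package accounting // illustrative

import (
	"context"
	"time"
)

// startAverageLoop is a hypothetical helper: the loop's lifetime comes from
// context.Background(), not the rc request context, so it keeps measuring
// speed after the rc call returns. The caller stops it with the cancel func.
func startAverageLoop(update func()) context.CancelFunc {
	ctx, cancel := context.WithCancel(context.Background())
	go func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				update() // recalculate speed / ETA averages
			}
		}
	}()
	return cancel
}
```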
2025-08-16 16:33:38 +01:00
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't handle symlinks in dest without --links - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actual memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
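
A minimal sketch of that reservation idea using golang.org/x/sync/semaphore (not the actual pool.RW code; the budget value is an example):

```go
package pool // illustrative

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// memory counts the bytes actually in use, sized to the memory budget.
var memory = semaphore.NewWeighted(1 << 30) // example: 1 GiB budget

// withReservedBuffer reserves n bytes up front, runs fn with a buffer of
// that size, then releases the reservation.
func withReservedBuffer(ctx context.Context, n int64, fn func(buf []byte) error) error {
	if err := memory.Acquire(ctx, n); err != nil {
		return err
	}
	defer memory.Release(n)
	return fn(make([]byte, n))
}
```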
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives an InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involves recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
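
A minimal sketch of the approach with hypothetical names: format log timestamps in a configurable *time.Location instead of mutating time.Local:

```go
package bisync // illustrative

import "time"

// LogTZ stands in for the new setting; tests can point it at time.UTC
// without touching the process-wide time.Local.
var LogTZ = time.Local

func logTimestamp(t time.Time) string {
	return t.In(LogTZ).Format("2006-01-02 15:04:05")
}
```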
2025-08-13 11:58:35 -04:00
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
Additional to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and were ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
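
A standalone illustration of the CSV parsing (not the fstest code itself):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"log"
	"strings"
)

func main() {
	// What the shell passes through for the example above.
	in := `local,"TestGoogleCloudStorage,directory_markers:"`
	remotes, err := csv.NewReader(strings.NewReader(in)).Read()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%q\n", remotes) // ["local" "TestGoogleCloudStorage,directory_markers:"]
}
```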
2025-08-09 18:12:17 -04:00
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This meant a long wait sometimes for memory and too many connections
open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if --transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than needed for a full chunk,
and then deadlock because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools
which is inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
Nick Craig-Wood
d4e68bf66b docs: add MEGA S4 as a gold sponsor
This also tidies the menu cards.
2025-08-01 12:40:29 +01:00
Nick Craig-Wood
743d160fdd about: fix potential overflow of about in various backends
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.

This patch causes it to clip to the max int64 value instead.
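
A minimal sketch of the clipping, assuming non-negative byte counts (an illustrative helper, not the backend code):

```go
package about // illustrative

import "math"

// addSaturating adds two non-negative counts, clipping at MaxInt64 instead
// of wrapping around.
func addSaturating(a, b int64) int64 {
	if a > math.MaxInt64-b {
		return math.MaxInt64
	}
	return a + b
}
```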
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
dc95f36bc1 box: fix about: cannot unmarshal number 1.0e+18 into Go struct field
Before this change rclone about was failing with

    cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64

This is because Box increased Enterprise accounts' user.space_amount from 30PB
to 1e+18 (888.178PB), returning it as a floating point number, not an integer.

This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
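
A standalone sketch of the read-as-float-then-clip approach (struct and field names are only illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"math"
)

// user mirrors just the field in question; reading it as float64 tolerates
// values like 1.0e+18 which fail to unmarshal into an int64.
type user struct {
	SpaceAmount float64 `json:"space_amount"`
}

func main() {
	var u user
	_ = json.Unmarshal([]byte(`{"space_amount": 1.0e+18}`), &u)
	total := int64(math.MaxInt64)
	if u.SpaceAmount < math.MaxInt64 {
		total = int64(u.SpaceAmount)
	}
	fmt.Println(total) // 1000000000000000000
}
```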
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
d3e3af377a oauthutil: fix nil pointer crash when started with expired token 2025-07-31 11:38:51 +01:00
n4n5
db4812fbfa rc: listremotes should send an empty array instead of nil 2025-07-25 15:37:25 +01:00
n4n5
ff9cbab5fa config: add error if RCLONE_CONFIG_PASS was supplied but didn't decrypt config 2025-07-25 11:24:18 +01:00
n4n5
30d8ab5f2f rc: add config/unlock to unlock the config file 2025-07-25 11:19:07 +01:00
Anagh Kumar Baranwal
d71a4195d6 ftp: allow insecure TLS ciphers - fixes #8701
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-07-25 10:30:18 +01:00
zjx20
64ed9b175f s3: set useAlreadyExists to false for Alibaba OSS 2025-07-24 23:22:16 +01:00
Nick Craig-Wood
2b10340e4e docs: update sponsors page 2025-07-24 15:19:15 +01:00
Nick Craig-Wood
3c596f8d11 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
6a9c221841 fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
c49b24ff90 tests: cloudinary: remove test ignore after merging fix from #8707 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
edbbfd1e86 Add Antonin Goude to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
0e0af7499c Add Yu Xin to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
eb4fe3ef4c Add houance to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
70eb0f21d9 Add Florent Vennetier to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
12378bae27 Add n4n5 to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
3c08c4df3a Add Albin Parou to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
897509ae10 Add liubingrun to contributors 2025-07-23 13:12:55 +01:00
nielash
0eb7ee2e16 sync: fix testLoggerVsLsf when backend only reads modtime
There are some backends (like PikPak) that advertise a precision of
fs.ModTimeNotSupported but do actually return a modtime when asked. In the case
of PikPak, it is because the modtime can be read but not written, and is not
considered reliable enough to use for syncing.

Before this change, testLoggerVsLsf got confused in this scenario (expected a
blank modtime but got non-blank). Adding to the confusion, it only reaches this
code if the backend happens to support md5 hashes, and the fsrc and fdst have
the same precision.

This change fixes the issue by setting the modtime string on both sides to
"none" in this scenario. Note that we can't use "" (blank) because
(operations.ListFormat).AddModTime would replace that with "2006-01-02 15:04:05".
2025-07-23 12:49:52 +01:00
nielash
c1ebfb7e04 sync: fix testLoggerVsLsf checking wrong fs
Before this change, two tests (TestServerSideCopyOverSelf and
TestServerSideMoveOverSelf) were checking the wrong Fs in the call to
testLoggerVsLsf. This fixes it by making sure we are testing the same two Fs's
we synced.
2025-07-23 12:49:52 +01:00
Nick Craig-Wood
3d62058693 docs: make opengraph tags absolute as not all sites understand relative URLs 2025-07-22 18:00:33 +01:00
albertony
122890799f docs: update contributing guide regarding markdown documentation 2025-07-21 20:23:16 +02:00
albertony
65078d5846 build: add markdown linting to workflow 2025-07-21 20:23:16 +02:00
albertony
92f304902d build: add markdownlint configuration 2025-07-21 20:23:16 +02:00
albertony
45477a6c7d docs: minor format cleanup install.md 2025-07-21 20:23:16 +02:00
albertony
79b549b5a4 docs: fix markdownlint issue md049/emphasis-style 2025-07-21 20:23:16 +02:00
albertony
318880b4ad docs: fix markdownlint issue md036/no-emphasis-as-heading 2025-07-21 20:23:16 +02:00
albertony
75521dcf6e docs: fix markdownlint issue md033/no-inline-html 2025-07-21 20:23:16 +02:00
albertony
8bf20dd545 docs: fix markdownlint issue md025/single-title 2025-07-21 20:23:16 +02:00
albertony
744bce1246 docs: fix markdownlint issue md041/first-line-heading 2025-07-21 20:23:16 +02:00
albertony
c817fc5c57 docs: fix markdownlint issue md001/heading-increment 2025-07-21 20:23:16 +02:00
albertony
0bb4d0a985 docs: fix markdownlint issue md003/heading-style 2025-07-21 20:23:16 +02:00
albertony
a8605abd34 docs: fix markdownlint issue md034/no-bare-urls 2025-07-21 20:23:16 +02:00
albertony
953fb4490b docs: fix markdownlint issue md010/no-hard-tabs 2025-07-21 20:23:16 +02:00
albertony
b17c3d18af docs: fix markdownlint issue md013/line-length 2025-07-21 20:23:16 +02:00
albertony
b45580fa19 docs: fix markdownlint issue md038/no-space-in-code 2025-07-21 20:23:16 +02:00
albertony
1c26f40078 docs: fix markdownlint issue md040/fenced-code-language 2025-07-21 20:23:16 +02:00
albertony
667ad093eb docs: fix markdownlint issue md046/code-block-style 2025-07-21 20:23:16 +02:00
albertony
2c369aedf5 docs: fix markdownlint issue md037/no-space-in-emphasis 2025-07-21 20:23:16 +02:00
albertony
7a0d5ab0b4 docs: fix markdownlint issue md059/descriptive-link-text 2025-07-21 20:23:16 +02:00
albertony
75582b804b docs: fix markdownlint issues md007/ul-indent md004/ul-style 2025-07-21 20:23:16 +02:00
albertony
73452551c6 docs: fix markdownlint issue md012/no-multiple-blanks 2025-07-21 20:23:16 +02:00
albertony
cb3cf5068b docs: fix markdownlint issue md058/blanks-around-tables 2025-07-21 20:23:16 +02:00
albertony
428f518771 docs: fix markdownlint issue md022/blanks-around-headings 2025-07-21 20:23:16 +02:00
albertony
0411a41e11 docs: fix markdownlint issue md031/blanks-around-fences 2025-07-21 20:23:16 +02:00
albertony
07b37bcd12 docs: fix markdownlint issue md032/blanks-around-lists 2025-07-21 20:23:16 +02:00
albertony
0506826ff5 docs: fix markdownlint issue md009/no-trailing-spaces 2025-07-21 20:23:16 +02:00
albertony
4fcd36a5ab docs: fix markdownlint issue md014/commands-show-output 2025-07-21 20:23:16 +02:00
albertony
b2f43f39ba docs: fix markdownlint issues md007/ul-indent md004/ul-style (bin/update-authors.py) 2025-07-21 20:23:16 +02:00
albertony
074d73d12b docs: fix markdownlint issues md007/ul-indent md004/ul-style (authors.md) 2025-07-21 20:23:16 +02:00
Nick Craig-Wood
6457bcf51e docs: add opengraph tags for website social media previews 2025-07-21 17:48:23 +01:00
Nick Craig-Wood
8d12519f3d mount: note that bucket based remotes can use directory markers 2025-07-21 17:48:23 +01:00
wiserain
8a7c401366 pikpak: add docs for methods to clarify name collision handling and restrictions 2025-07-21 17:43:15 +01:00
wiserain
0aae8f346f pikpak: enhance Copy method to handle name collisions and improve error management 2025-07-21 17:43:15 +01:00
wiserain
e991328967 pikpak: enhance Move for better handling of error and name collision 2025-07-21 17:43:15 +01:00
Yu Xin
614d02a673 accounting: fix incorrect stats with --transfers=1 - fixes #8670 2025-07-21 16:54:19 +01:00
houance
018ebdded5 rc: fix operations/check ignoring oneWay parameter
Change the parameter parsing from "oneway" to "oneWay" as a bool value, as the docs
say "oneWay - check one way only, source files must exist on remote".
2025-07-21 16:41:08 +01:00
Florent Vennetier
fc08983d71 s3: add OVHcloud Object Storage provider
Co-Authored-By: Antonin Goude <antonin.goude@ovhcloud.com>
2025-07-21 16:34:53 +01:00
n4n5
7b61084891 docs: rc: fix description of how to read local config 2025-07-21 15:42:37 +01:00
albertony
d1ac6c2fe1 build: limit check for edits of autogenerated files to only commits in a pull request 2025-07-17 16:20:38 +02:00
albertony
da9c99272c build: extend check for edits of autogenerated files to all commits in a pull request 2025-07-17 16:20:38 +02:00
Sudipto Baral
9c7594d78f smb: refresh Kerberos credentials when ccache file changes
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.

Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
2025-07-17 14:34:44 +01:00
Albin Parou
70226cc653 s3: fix multipart upload and server side copy when using bucket policy SSE-C
When uploading or moving data within an s3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadCopyPart` and
`CompleteMultipartUpload`. But currently rclone doesn't forward those
headers to `CompleteMultipartUpload`.

This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
2025-07-17 14:29:31 +01:00
liubingrun
c20e4bd99c backend/s3: Fix memory leak by cloning strings #8683
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.

Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
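
A minimal illustration of the idea (not the backend code): copy the small substring out of the large response so the big backing array can be collected:

```go
package main

import (
	"fmt"
	"strings"
)

// extractKey returns its own copy of the key so the (potentially huge)
// XML response string it was sliced from can be garbage collected.
func extractKey(response string) string {
	start := strings.Index(response, "<Key>") + len("<Key>")
	end := strings.Index(response, "</Key>")
	if start < len("<Key>") || end < start {
		return ""
	}
	// Without strings.Clone the returned substring would share the
	// response's backing array and keep all of it alive.
	return strings.Clone(response[start:end])
}

func main() {
	fmt.Println(extractKey("<Key>file.txt</Key>")) // file.txt
}
```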
2025-07-17 12:31:52 +01:00
Nick Craig-Wood
ccfe153e9b purge: exit with a fatal error if filters are set on rclone purge
Fixes #8491
2025-07-17 11:17:08 +01:00
Nick Craig-Wood
c9730bcaaf docs: Add Backblaze as a Platinum sponsor 2025-07-17 11:17:08 +01:00
Nick Craig-Wood
03dd7486c1 Add Sam Pegg to contributors 2025-07-17 11:17:08 +01:00
raider13209
6249009fdf googlephotos: add warning for Google Photos compatibility - fixes #8672 2025-07-17 10:48:12 +01:00
Nick Craig-Wood
8e2d76459f test: remove flakey TestChunkerChunk50bYandex: test 2025-07-16 16:39:57 +01:00
albertony
5e539c6a72 docs: Consolidate entries for Josh Soref in contributors 2025-07-13 14:05:45 +02:00
albertony
8866112400 docs: remove dead link to example of writing a plugin 2025-07-13 13:51:38 +02:00
Nick Craig-Wood
bfdd5e2c22 filescom: document that hashes need to be enabled - fixes #8674 2025-07-11 14:15:59 +01:00
Nick Craig-Wood
f3f16cd2b9 Add Sudipto Baral to contributors 2025-07-11 14:15:59 +01:00
albertony
d84ea2ec52 docs: fix incorrect json syntax in sample output 2025-07-11 13:49:27 +02:00
albertony
b259241c07 docs: ignore author email piyushgarg80
This should merge the two duplicates:
- piyushgarg <piyushgarg80@gmail.com>
- Piyush <piyushgarg80>
2025-07-11 13:49:06 +02:00
albertony
a8ab0730a7 docs: fix header level for --dump option section 2025-07-10 12:36:10 +02:00
albertony
cef207cf94 docs: use stringArray as parameter type 2025-07-10 12:36:10 +02:00
albertony
e728ea32d1 docs: use consistent markdown heading syntax 2025-07-10 12:36:10 +02:00
Nick Craig-Wood
ccdee0420f imagekit: remove server side Copy method as it was downloading and uploading
The Copy method was downloading the file and uploading it again rather
than server side copying it.

It looks from the docs that the upload process can read a URL so this
might be possible, but the removed code is incorrect.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
8a51e11d23 imagekit: don't low level retry uploads
Low level retrying uploads can lead to partial or empty files being
uploaded as the io.Reader has been read in the first attempt.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
9083f1ff15 imagekit: return correct error when attempting to upload zero length files
Imagekit doesn't support empty files so return the correct error for the
integration tests to process properly.
2025-07-10 11:29:27 +01:00
Sudipto Baral
2964b1a169 smb: add --smb-kerberos-ccache option to set kerberos ccache per smb backend 2025-07-10 10:17:42 +01:00
Nick Craig-Wood
b6767820de test: fix smb kerberos integration tests
Thanks @sudiptob2 for the tip!
2025-07-09 18:05:29 +01:00
Nick Craig-Wood
821e7fce45 Changelog updates from Version v1.70.3 2025-07-09 16:26:56 +01:00
albertony
b7c6268d3e config: make parsing of duration options consistent
All user visible Durations should be fs.Duration rather than time.Duration. The
suffix is then optional and defaults to s. Additional suffixes d, w, M and y are
supported, in addition to ms, s, m and h - which are the only ones supported by
time.Duration. Absolute times can also be specified, and will be interpreted as
a duration relative to now.
2025-07-08 12:08:14 +02:00
albertony
521d6b88d4 docs: cleanup usage 2025-07-08 11:28:28 +02:00
albertony
cf767b0856 docs: break long lines 2025-07-08 11:28:28 +02:00
albertony
25f7809822 docs: add option value type to header where missing 2025-07-08 11:28:28 +02:00
albertony
74c0b1ea3b docs: mention that identifiers in option values are case insensitive 2025-07-08 11:28:28 +02:00
albertony
f4dcb1e9cf docs: rewrite dump option examples 2025-07-08 11:28:28 +02:00
albertony
90f1d023ff docs: use markdown inline code format for dump option headers that are real examples 2025-07-08 11:28:28 +02:00
albertony
e9c5f2d4e8 docs: change spelling from server side to server-side 2025-07-08 11:28:28 +02:00
albertony
1249e9b5ac docs: cleanup header casing 2025-07-08 11:28:28 +02:00
albertony
d47bc5f6c4 docs: rename OSX to macOS 2025-07-08 11:28:28 +02:00
albertony
efb1794135 docs: fix list and code block issue 2025-07-08 11:28:28 +02:00
albertony
71b98a03a9 docs: consistent markdown list format 2025-07-08 11:28:28 +02:00
albertony
8e625c6593 docs: split section with general description of options from the one documenting the actual main options 2025-07-08 11:28:28 +02:00
albertony
6b2cd7c631 docs: improve description of option types 2025-07-08 11:28:28 +02:00
albertony
aa4aead63c docs: use space instead of equal sign to separate option and value in headers 2025-07-08 11:28:28 +02:00
albertony
c491d12cd0 docs: use comma to separate short and long option format in headers 2025-07-08 11:28:28 +02:00
albertony
9e4d703a56 docs: remove use of uncommon parameter types 2025-07-08 11:28:28 +02:00
albertony
fc0c0a7771 docs: remove use of parameter type FILE 2025-07-08 11:28:28 +02:00
albertony
d5cc0d83b0 docs: remove use of parameter type DIR 2025-07-08 11:28:28 +02:00
albertony
52762dc866 docs: remove use of parameter type CONFIG_FILE 2025-07-08 11:28:28 +02:00
albertony
3c092cfc17 docs: change use of parameter type N and NUMBER to int consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
7f3f1af541 docs: change use of parameter type TIME to Duration consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
f885c481f0 docs: change use of parameter type BANDWIDTH_SPEC to BwTimetable consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
865d4b2bda docs: change use of parameter type SIZE to SizeSuffix consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
3cb1e65eb6 docs: cleanup markdown header format 2025-07-08 11:28:28 +02:00
albertony
f667346718 docs: explain separated list parameters 2025-07-08 11:28:28 +02:00
Nick Craig-Wood
c6e1f59415 azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-08 07:50:51 +01:00
Nick Craig-Wood
f353c92852 test: remove and ignore failing integration tests
- remove non docker based Swift tests as they are too slow
- ignore TestChunkerChunk50b test which always fails
2025-07-08 07:48:54 +01:00
albertony
1e88c6a18b docs: explain the json log format in more detail 2025-07-07 10:21:13 +02:00
albertony
7242aed1c3 check: fix difference report (was reporting error counts) 2025-07-07 08:16:55 +01:00
albertony
81e63785fe serve sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
c7937f53d4 serve sftp: refactoring - extract function for handling hashsum commands 2025-07-07 09:11:29 +02:00
albertony
58fa1c975f sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
da49fc1b6d local: configurable supported hashes 2025-07-07 09:11:29 +02:00
albertony
df9c921dd5 hash: add support for BLAKE3, XXH3, XXH128 2025-07-07 09:11:29 +02:00
Nick Craig-Wood
d9c227eff6 vfs: make integration TestDirEntryModTimeInvalidation test more reliable
Before this change it was not taking the Precision of the remote into account.
2025-07-06 14:35:16 +01:00
Nick Craig-Wood
524c285d88 smb: skip non integration tests when doing integration tests 2025-07-06 13:39:54 +01:00
Nick Craig-Wood
4107246335 seafile: fix integration test errors by adding dot to encoding
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.

This adds EncodeDot to the encoding which encodes "." and ".." names.
2025-07-05 21:27:10 +01:00
Nick Craig-Wood
87a65ec6a5 linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stops redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
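
A standalone sketch of resolving the redirect with a HEAD request before sending the body (the flow is illustrative, not the backend's exact code):

```go
package linkbox // illustrative

import (
	"context"
	"net/http"
)

// resolveUploadURL does a HEAD request without following redirects so the
// Location header can be read and the body PUT to the right place.
func resolveUploadURL(ctx context.Context, uploadURL string) (string, error) {
	client := &http.Client{
		// Stop at the first response instead of following redirects.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodHead, uploadURL, nil)
	if err != nil {
		return "", err
	}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if loc := resp.Header.Get("Location"); resp.StatusCode == http.StatusFound && loc != "" {
		return loc, nil // do the real PUT against the redirect target
	}
	return uploadURL, nil
}
```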
2025-07-05 09:26:43 +01:00
Nick Craig-Wood
c6d0b61982 build: remove integration tests which are too slow
This removes

- TestCompressSwift: - never finishes - too slow - we have TestCompressS3 instead
- TestCryptSwift: - never finishes - too slow - we have TestCryptS3 instead
- TestChunkerChunk50bBox: - often times out - covered by other tests
2025-07-05 09:24:00 +01:00
Nick Craig-Wood
88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but to take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-04 14:52:28 +01:00
wiserain
f904378c4d pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-04 15:27:29 +09:00
wiserain
24eb8dcde0 pikpak: rewrite upload to bypass AWS S3 manager - fixes #8629
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.

* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
  including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
  validation).
* Adjusted default `upload_concurrency` from 5 to 4.
2025-07-04 11:25:12 +09:00
Nick Craig-Wood
a97425d9cb test: fix TestSMBKerberos password expiring errors
ERROR(runtime): uncaught exception - kinit for rclone@RCLONE.LOCAL failed (Password has expired)
2025-07-03 19:31:45 +01:00
Nick Craig-Wood
c51878f9a9 Add Vikas Bhansali to contributors 2025-07-03 19:31:45 +01:00
Nick Craig-Wood
92f0a73ac6 Add Ross Smith II to contributors 2025-07-03 19:31:45 +01:00
Vikas Bhansali
163c149f3f azureblob,azurefiles: add support for client assertion based authentication 2025-07-03 09:57:07 +01:00
WeidiDeng
224ca0ae8e webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime got changed to the wrong object by accident

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-03 09:42:15 +01:00
Ross Smith II
5bf6cd1f4f build: set default shell to bash in build.yml
Per https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrunshell
2025-07-02 20:06:57 +01:00
Nick Craig-Wood
555739eec5 docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:37:28 +01:00
Nick Craig-Wood
c036ce90fe Add Davide Bizzarri to contributors 2025-07-02 15:37:28 +01:00
Davide Bizzarri
6163ae7cc7 fix: b2 versionAt read metadata 2025-07-01 17:36:07 +01:00
Nick Craig-Wood
cd950e30cb test: make TestWebdavInfiniteScale startup more reliable
This adds a _connect_delay=5s which allows the server to startup
properly. It also makes sure it stores its config in /tmp rather than
the current working directory.
2025-07-01 17:13:41 +01:00
Nick Craig-Wood
89dfae96ad test_all: add _connect_delay for slow starting servers 2025-07-01 17:13:11 +01:00
Nick Craig-Wood
c0a2d730a6 docs: update link for filescom 2025-06-30 11:09:37 +01:00
Nick Craig-Wood
592407230b test_all: make TestWebdav InfiniteScale integration tests run 2025-06-28 10:57:27 +01:00
Nick Craig-Wood
a7c3ddb482 test_all: make SMB with Kerberos integration tests run properly 2025-06-28 10:56:41 +01:00
Nick Craig-Wood
7a1813c531 test_all: allow an env parameter to set environment variables 2025-06-28 10:55:48 +01:00
Nick Craig-Wood
16e3d1becd Changelog updates from Version v1.70.2 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
c0f6b910ae Add Ali Zein Yousuf to contributors 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
e3bf8dc122 Add $@M@RTH_ to contributors 2025-06-27 14:35:34 +01:00
Ali Zein Yousuf
086a835131 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:22:10 +01:00
$@M@RTH_
d0668de192 s3: add Zata provider 2025-06-26 17:13:19 +01:00
Nick Craig-Wood
4df974ccc4 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
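
A minimal sketch of the guard with a hypothetical wrapper name:

```go
package pacer // illustrative

import (
	"errors"
	"fmt"
	"time"
)

// retryAfterError shows the idea: never keep hold of a nil error, so a
// later fmt.Errorf("... %v", err) cannot hit a nil pointer panic.
func retryAfterError(err error, retryAfter time.Duration) error {
	if err == nil {
		err = errors.New("too many requests")
	}
	return fmt.Errorf("retry after %v: %w", retryAfter, err)
}
```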
2025-06-25 21:19:17 +01:00
Nick Craig-Wood
a50c903a82 docs: Remove Warp as a sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
97a8092c14 docs: add files.com as a Gold sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
526565b810 docs: add links to SecureBuild docker image 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
64804b81bd Add curlwget to contributors 2025-06-25 16:37:09 +01:00
nielash
e10f516a5e convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
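
A standalone illustration of the NFC/NFD equivalence using golang.org/x/text/unicode/norm (not the convmv code):

```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfc := norm.NFC.String("café.txt") // é as one precomposed rune
	nfd := norm.NFD.String("café.txt") // e followed by a combining accent
	// Different byte sequences, but insensitive backends treat them as
	// the same name, so a plain move can look like a no-op.
	fmt.Println(nfc == nfd, len(nfc), len(nfd)) // false 9 10
}
```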
2025-06-25 11:19:50 +01:00
nielash
fe62a2bb4e transform: add truncate_keep_extension and truncate_bytes
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).

This change also adds support for _keep_extension when using truncate and
truncate_bytes.
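
A minimal sketch of byte-based truncation that avoids splitting a UTF-8 character (the helper name is illustrative):

```go
package transform // illustrative

import "unicode/utf8"

// truncateBytes cuts s to at most max bytes without splitting a UTF-8
// character.
func truncateBytes(s string, max int) string {
	if max < 0 {
		max = 0
	}
	if len(s) <= max {
		return s
	}
	cut := max
	// Back up over continuation bytes so the cut lands on a rune boundary.
	for cut > 0 && !utf8.RuneStart(s[cut]) {
		cut--
	}
	return s[:cut]
}
```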
2025-06-25 11:19:50 +01:00
nielash
d6ecb949ca convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run.)
2025-06-25 11:19:50 +01:00
nielash
a845a96538 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-25 11:19:50 +01:00
curlwget
92f30fda8d docs: fix some function names in comments
Signed-off-by: curlwget <curlwget@icloud.com>
2025-06-24 15:04:45 +01:00
Nick Craig-Wood
559ef2eba8 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-23 17:43:52 +01:00
Nick Craig-Wood
17b25d7ce2 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go which was enabled by the `go 1.22` in `go.mod`,
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
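
A standalone sketch of the check (not the local backend's code):

```go
package local // illustrative

import (
	"os"
	"runtime"
)

// shouldSkipLink reports whether an entry should be skipped when
// --skip-links is set.
func shouldSkipLink(fi os.FileInfo) bool {
	mode := fi.Mode()
	if mode&os.ModeSymlink != 0 {
		return true
	}
	// Since the go 1.22 change, junction points on Windows show up as
	// os.ModeIrregular rather than os.ModeSymlink.
	return runtime.GOOS == "windows" && mode&os.ModeIrregular != 0
}
```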
2025-06-23 16:39:14 +01:00
Nick Craig-Wood
fe3253eefd Add Marvin Rösch to contributors 2025-06-23 16:39:14 +01:00
dependabot[bot]
c38ca6b2d1 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-20 18:27:36 +01:00
Marvin Rösch
5aa9811084 copy,copyto,move,moveto: implement logger flags to store result of sync
This enables the logger flags (`--combined`, `--missing-on-src`
etc.) for the `rclone copy` and `move` commands (as well as their
`copyto` and `moveto` variants) akin to `rclone sync`. Warnings for
unsupported/wonky flag combinations are also printed, e.g. when the
destination is not traversed but `--dest-after` is specified.

- fs/operations: add reusable methods for operation logging
- cmd/sync: use reusable methods for implementing logging in sync command
- cmd: implement logging for copy/copyto/move/moveto commands
- fs/operations/operationsflags: warn about logs in conjunction with --no-traverse
- cmd: add logger docs to copy and move commands

Fixes #8115
2025-06-20 16:55:00 +01:00
Nick Craig-Wood
3cae373064 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-20 15:26:57 +01:00
Nick Craig-Wood
b6b8526fb4 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-20 12:14:46 +01:00
Nick Craig-Wood
6f86143176 Add necaran to contributors 2025-06-20 12:14:46 +01:00
necaran
beffef2882 mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints: https://github.com/meganz/webclient/issues/103
are no longer supported by default since Go 1.22: https://tip.golang.org/doc/go1.22#minor_library_changes
This therefore assigns the cipher suites explicitly to include the one Mega needs.
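
A standalone sketch of pinning cipher suites in a tls.Config; the extra suite shown is one of the RSA key-exchange suites dropped from Go's defaults, and whether it is exactly the one Mega needs is an assumption here:

```go
package mega // illustrative

import "crypto/tls"

// newTLSConfig lists the cipher suites explicitly so a suite no longer in
// Go's defaults is still offered (assumption: an RSA key-exchange suite).
func newTLSConfig() *tls.Config {
	ids := []uint16{tls.TLS_RSA_WITH_AES_128_GCM_SHA256} // dropped from defaults in go1.22
	for _, s := range tls.CipherSuites() {
		ids = append(ids, s.ID) // keep the current defaults as well
	}
	return &tls.Config{CipherSuites: ids}
}
```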
2025-06-19 18:05:00 +01:00
Nick Craig-Wood
96f5bbcdd7 Changelog updates from Version v1.70.1 2025-06-19 14:25:38 +01:00
Nick Craig-Wood
27ce78bee4 Add jinjingroad to contributors 2025-06-19 14:25:29 +01:00
Ed Craig-Wood
898a59062b docs: fix DOI grammar error 2025-06-19 08:05:38 +02:00
albertony
c5f55243e1 docs: lib/transform: cleanup formatting 2025-06-19 08:04:46 +02:00
albertony
62a9727ab5 lib/transform: avoid empty charmap entry 2025-06-19 08:04:46 +02:00
jinjingroad
16f1e08b73 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 08:02:51 +02:00
Nick Craig-Wood
4280ec75cc convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-18 14:28:14 +01:00
Ed Craig-Wood
b064cc2116 docs: client-credentials is not supported by all backends 2025-06-18 14:06:57 +01:00
Nick Craig-Wood
f8b50f8d8f Start v1.71.0-DEV development 2025-06-18 11:31:52 +01:00
Nick Craig-Wood
9d464e8e9a Version v1.70.0 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
92fea7eb1b ftp: add --ftp-http-proxy to connect via HTTP CONNECT proxy 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
f226d12a2f pcloud: fix "Access denied. You do not have permissions to perform this operation" on large uploads
The API we use for OpenWriterAt seems to have been disabled at pcloud

    PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1

gives

    {
            "result": 2003,
            "error": "Access denied. You do not have permissions to perform this operation."
    }

So disable OpenWriterAt and hence multipart uploads for the moment.
2025-06-17 12:46:35 +01:00
nielash
359260c49d operations: fix TransformFile when can't server-side copy/move 2025-06-16 17:40:19 +01:00
Nick Craig-Wood
125c8a98bb fstest: fix -verbose flag after logging revamp 2025-06-16 17:39:37 +01:00
Nick Craig-Wood
81fccd9c39 googlecloudstorage: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
1dc3421c7f s3: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
073184132e azureblob: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
476ff65fd7 tests: ignore some more habitually failing tests 2025-06-13 16:25:42 +01:00
Nick Craig-Wood
2847412433 googlephotos: fix typo in error message - Fixes #8600 2025-06-13 14:59:08 +01:00
Nick Craig-Wood
5c81132da0 s3: MEGA S4 support 2025-06-13 11:47:21 +01:00
Nick Craig-Wood
6e1c7b9239 Add Ser-Bul to contributors 2025-06-13 11:47:21 +01:00
nielash
e469c8974c chunker: fix double-transform
Before this change, chunker could double-transform a file under certain
conditions, when --name-transform was in use. This change fixes the issue by
ensuring that --name-transform is disabled during internal file moves.
2025-06-12 18:31:01 +01:00
Ser-Bul
629b427443 docs: mailru: added note about permissions level choice for the apps password 2025-06-12 17:35:42 +01:00
Nick Craig-Wood
108504963c tests: ignore habitually failing tests and backends
This ignores:

- cmd/bisync where it always fails
- cmd/gitannex where it always fails
- sharefile - citrix have refused to give us a testing account
- duplicated sia backend
- iclouddrive - token expiring every 30 days makes it too difficult

It would be nice to fix up these things at some point, but for the
integration test results to be useful they need less noise in them.
2025-06-12 16:24:14 +01:00
Nick Craig-Wood
6aa09fb1d6 docs: link to asciinema rather than including the js 2025-06-12 15:10:56 +01:00
Nick Craig-Wood
bfa6852334 docs: target="_blank" must have rel="noopener" 2025-06-12 15:10:56 +01:00
nielash
63d55d4a39 sync: fix testLoggerVsLsf when dst is local
Before this change, the testLoggerVsLsf function would get confused if given
r.Flocal when expecting r.Fremote. This change makes it agnostic.
2025-06-12 11:11:51 +01:00
kingston125
578ee49550 docs: fix FileLu docs
* Reorder providers alphabetically: moved FileLu above Files.com
* Added FileLu storage to docs.md
2025-06-11 16:25:30 +01:00
Nick Craig-Wood
dda6a863e9 build: update all dependencies
This updates all direct and indirect dependencies

It also stops the linter complaining about deprecated azidentity APIs.
2025-06-09 14:19:53 +01:00
Nick Craig-Wood
99358cee88 onedrive: fix crash if no metadata was updated
Before this change, rclone would crash if no metadata was updated.
This could happen if --onedrive-metadata-permissions was set to read
but metadata to write was supplied.

Fixes #8586
2025-06-06 17:40:25 +01:00
Nick Craig-Wood
768a4236e6 Add kingston125 to contributors 2025-06-06 17:40:25 +01:00
Nick Craig-Wood
ffbf002ba8 Add Flora Thiebaut to contributors 2025-06-06 17:40:25 +01:00
kingston125
4a1b5b864c Add FileLu cloud storage backend 2025-06-06 15:15:07 +01:00
Flora Thiebaut
3b3096c940 doi: add new doi backend
Add a new backend to support mounting datasets published with a digital
object identifier (DOI).
2025-06-05 16:40:54 +01:00
Nick Craig-Wood
51fd697c7a build: fix check_autogenerated_edits.py flagging up files that didn't exist
Before this change new backend docs would have their changes flagged
which is undesirable for the first revision.
2025-06-05 16:37:01 +01:00
Nick Craig-Wood
210acb42cd docs: rc: add more info on how to discover _config and _filter parameters #8584 2025-06-05 10:44:33 +01:00
Nick Craig-Wood
6c36615efe s3: add Exaba provider 2025-06-04 17:42:48 +01:00
nielash
d4e2717081 convmv: add convmv command
convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.

For example:

rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT

See help doc for complete details.
2025-06-04 17:24:07 +01:00
nielash
013c563293 lib/transform: add transform library and --name-transform flag
lib/transform adds the transform library, supporting advanced path name
transformations for converting and renaming files and directories by applying
prefixes, suffixes, and other alterations.

It also adds the --name-transform flag for use with sync, copy, and move.

Multiple transformations can be used in sequence, applied in the order they are
specified on the command line.

By default --name-transform will only apply to file names. This means only the leaf
file name will be transformed. However, some of the transforms would be better
applied to the whole path or just to directories. To choose which part of the
file path is affected, some tags can be added to the --name-transform:

- file: Only transform the leaf names of files (DEFAULT)
- dir: Only transform the names of directories - these may appear anywhere in the path
- all: Transform the entire path for files and directories

Example syntax:
--name-transform file,prefix=ABC
--name-transform dir,prefix=DEF
2025-06-04 17:24:07 +01:00
nielash
41a407dcc9 march: split src and dst
splits m.key into separate functions for src and dst to prepare for
lib/transform which will want to do transforms on the src side only.

Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2025-06-04 17:24:07 +01:00
Nick Craig-Wood
cf1f5a7af6 Add ahxxm to contributors 2025-06-04 17:24:07 +01:00
Nick Craig-Wood
597872e5d7 Add Nathanael Demacon to contributors 2025-06-04 17:24:07 +01:00
ahxxm
e2d6872745 b2: use file id from listing when not presented in headers - fixes #8113 2025-06-04 16:23:58 +01:00
Nathanael Demacon
ddebca8d42 fs: fix goroutine leak and improve stats accounting process
This fixes the go routine leak in the stats accounting

- don't start stats average loop when initializing `StatsInfo`
- stop the loop instead of pausing
- use a context instead of a channel
- move `period` variable in `averageValues` struct

Fixes #8570
2025-06-04 14:43:19 +01:00
Nick Craig-Wood
5173ca0454 march: fix syncing with a duplicate file and directory
As part of the out of memory syncing code, in this commit

0148bd4668 march: Implement callback based syncing

we changed the syncing method to use a sorted stream of directory
entries.

Unfortunately as part of this change the sort order of files and
directories became undefined.

This meant that if there existed both a file `foo` and a directory
`foo` in the same directory (as is common on object storage systems)
then these could be matched up incorrectly.

They could be matched up correctly like this

- `foo` (directory) - `foo` (directory)
- `foo` (file)      - `foo` (file)

Or incorrectly like this (one of many possibilities)

- no match          - `foo` (file)
- `foo` (directory) - `foo` (directory)
- `foo` (file)      - no match

Just depending on how the input listings were ordered.

This in turn made container based syncing with a duplicated file and
directory name erratic, deleting files when it shouldn't.

This patch ensures that directories always sync before files by adding
a suffix to the sort key depending on whether the entry was a file or
directory.
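
A minimal sketch of the ordering trick with a hypothetical key function: a directory named foo now always sorts before a file named foo (the suffix bytes are illustrative, not rclone's):

```go
package march // illustrative

// sortKey appends a suffix so directories order before files that share
// the same name.
func sortKey(name string, isDir bool) string {
	if isDir {
		return name + "\x00" // directories first
	}
	return name + "\x01" // then files
}
```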
2025-06-04 10:54:31 +01:00
Nick Craig-Wood
ccac9813f3 Add PrathameshLakawade to contributors 2025-06-04 10:54:31 +01:00
Nick Craig-Wood
9133fd03df Add Oleksiy Stashok to contributors 2025-06-04 10:54:31 +01:00
PrathameshLakawade
2e891f4ff8 docs: fix page_facing_up typo next to Lyve Cloud in README.md 2025-06-04 08:25:17 +02:00
PrathameshLakawade
3c66d9ccb1 backend/s3: require custom endpoint for Lyve Cloud v2 support
Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.

This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage

Before: Users selected a fixed endpoint from a list (v1 only)
After:  Users must input their own endpoint (v2-compatible)
2025-06-03 16:19:41 +01:00
Oleksiy Stashok
badf16cc34 backend: skip hash calculation when the hashType is None - fixes #8518
When hashType is None, the `local` backend still runs expensive logic that reads the entire file content only to produce an empty string.
2025-06-03 15:40:50 +01:00
Nick Craig-Wood
0ee7cd80f2 azureblob: fix multipart server side copies of 0 sized files
Before this fix, multipart server side copies of 0 sized files would fail.

This problem was due to an incorrect calculation of the number of
parts to transfer - it calculated 1 part to transfer rather than 0.
2025-06-02 17:22:37 +01:00
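[Editor's note] The arithmetic behind the fix above, as a hedged sketch rather than the actual azureblob code: a zero length blob needs zero copy parts, whereas the buggy calculation always rounded up to at least one.

```go
package main

import "fmt"

// numParts returns how many parts a multipart server side copy needs.
// A zero length source needs 0 parts; otherwise round up.
func numParts(size, partSize int64) int64 {
	if size == 0 {
		return 0
	}
	return (size + partSize - 1) / partSize
}

func main() {
	const partSize = 8 * 1024 * 1024 // illustrative chunk size
	for _, size := range []int64{0, 1, partSize, partSize + 1} {
		fmt.Printf("size=%d parts=%d\n", size, numParts(size, partSize))
	}
}
```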
Nick Craig-Wood
aeb43c6a4c Add Jeremy Daer to contributors 2025-06-02 17:22:37 +01:00
Nick Craig-Wood
12322a2141 Add wbulot to contributors 2025-06-02 17:22:37 +01:00
Jeremy Daer
4fd5a3d0a2 s3: add Pure Storage FlashBlade provider support (#8575)
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.

Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.

After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:

- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-host style with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features

FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.

Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.

Documentation and test configurations are included.

Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok  	github.com/rclone/rclone/backend/s3	232.444s
```
2025-05-30 12:35:13 +01:00
wbulot
3594330177 backend/gofile: update to use new direct upload endpoint
Update the Gofile backend to use the new direct upload endpoint based on the latest API changes.
The previous implementation used dynamic server selection, but Gofile has simplified their API
to use a single upload endpoint at https://upload.gofile.io/uploadfile.

This change:
- Removes server selection logic and related code
- Simplifies the Fs struct by removing server-related fields
- Updates the upload process to use the direct upload URL
2025-05-27 14:28:25 +01:00
Nick Craig-Wood
15510c66d4 log: add --windows-event-log-level to support Windows Event Log
This provides JSON logs in the Windows Event Log.
2025-05-23 11:27:49 +01:00
Nick Craig-Wood
dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog
This removes logrus which is not developed any more and replaces it
with the new log/slog from the Go standard library.

It implements its own slog Handler which is backwards compatible with
all of rclone's previous logging modes.
2025-05-23 11:27:49 +01:00
Nick Craig-Wood
36b89960e3 Add fhuber to contributors 2025-05-23 11:27:49 +01:00
fhuber
a3f3fc61ee cmd serve s3: fix ListObjectsV2 response
Add a trailing slash to the S3 ListObjectsV2 response because some clients expect a trailing forward slash to distinguish whether the returned object is a directory.

Fixes #8464
2025-05-22 22:27:38 +01:00
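[Editor's note] A tiny sketch of the idea in the commit above: make sure directory names reported by the server carry a trailing slash. The helper name is an assumption for illustration, not the serve s3 code itself.

```go
package main

import (
	"fmt"
	"strings"
)

// asCommonPrefix ensures a directory name returned in a ListObjectsV2
// response ends in "/", which some S3 clients rely on to tell
// directories from objects.
func asCommonPrefix(dir string) string {
	if !strings.HasSuffix(dir, "/") {
		dir += "/"
	}
	return dir
}

func main() {
	for _, d := range []string{"photos", "photos/2024", "docs/"} {
		fmt.Println(asCommonPrefix(d))
	}
}
```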
Nick Craig-Wood
b8fde4fc46 Changelog updates from Version v1.69.3 2025-05-22 09:55:00 +01:00
Nick Craig-Wood
c37fe733df onedrive: re-add --onedrive-upload-cutoff flag
This was removed as part of #1716 to fix rclone uploads taking double
the space.

7f744033d8 onedrive: Removed upload cutoff and always do session uploads

As far as I can see, two revisions are still being created for single
part uploads, so the default for this flag is set to -1 (off).

However it may be useful for experimentation.

See: #8545
2025-05-15 15:25:10 +01:00
Nick Craig-Wood
b31659904f onedrive: fix "The upload session was not found" errors
Before this change, perhaps on heavily loaded SharePoint servers,
uploads would sometimes fail with the error:

{"error":{"code":"itemNotFound","message":"The upload session was not found"}}

This retries the upload after a 5 second delay up to --low-level-retries times.

Fixes #8545
2025-05-15 15:25:10 +01:00
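[Editor's note] A standalone sketch of the retry behaviour described above: retry the chunk upload after a delay, up to a --low-level-retries style limit, when the session has gone missing. The function names and the short delay used here are illustrative; the commit states the real delay is 5 seconds.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

var errSessionNotFound = errors.New(`{"error":{"code":"itemNotFound","message":"The upload session was not found"}}`)

// uploadChunk is a stand-in for the real Graph API call: it fails the
// first two attempts to simulate a flaky server.
func uploadChunk(attempt int) error {
	if attempt < 3 {
		return errSessionNotFound
	}
	return nil
}

// uploadWithRetry retries after a delay when the session was not found,
// up to lowLevelRetries times.
func uploadWithRetry(lowLevelRetries int, delay time.Duration) error {
	var err error
	for attempt := 1; attempt <= lowLevelRetries; attempt++ {
		err = uploadChunk(attempt)
		if err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "The upload session was not found") {
			return err // not the retryable error
		}
		fmt.Printf("attempt %d failed, retrying in %v\n", attempt, delay)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := uploadWithRetry(10, 10*time.Millisecond); err != nil {
		fmt.Println("upload failed:", err)
		return
	}
	fmt.Println("upload succeeded after retries")
}
```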
Nick Craig-Wood
ebcf51336e Add Germán Casares to contributors 2025-05-15 15:25:10 +01:00
Nick Craig-Wood
a334bba643 Add Jeff Geerling to contributors 2025-05-15 15:25:10 +01:00
Germán Casares
d4fd93e7f3 googlephotos: update read only and read write scopes to meet Google's requirements.
As part of changes to the Google Photos APIs the scopes rclone used
for accessing Google photos have been removed.

This commit replaces the scopes with updated ones.

These aren't as powerful as the old scopes - this means rclone will
only be able to download photos it uploaded from March 31, 2025.

To use these new scopes do `rclone reconnect yourgooglephotosremote:`

Fixes #8434

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2025-05-12 16:43:23 +01:00
albertony
6644bdba0f build: update github.com/ebitengine/purego to v0.8.3 to fix mac_amd64 build
Fixes #8552
2025-05-12 09:08:15 +02:00
albertony
68a65e878f docs: add hint about config touch and config file not found 2025-05-09 08:30:34 +01:00
Jeff Geerling
7606ad8294 docs: add FAQ for dismissing 'rclone.conf not found'
See: https://forum.rclone.org/t/notice-about-missing-rclone-conf-is-annoying/51116
2025-05-09 08:23:31 +02:00
Nick Craig-Wood
32847e88b4 docs: document how to keep an out of tree backend 2025-05-08 17:16:28 +01:00
Nick Craig-Wood
2e879586bd Add Clément Wehrung to contributors 2025-05-08 17:16:28 +01:00
Clément Wehrung
9d55b2411f iclouddrive: fix panic and files potentially downloaded twice
- Fix SIGSEGV - fixes #8211
- Avoid files potentially being downloaded twice
2025-05-07 18:00:33 +01:00
Nick Craig-Wood
fe880c0fac docs: move --max-connections documentation to the correct place 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
b160089be7 Add Ben Boeckel to contributors 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
c2254164f8 Add Tho Neyugn to contributors 2025-05-06 15:23:55 +01:00
Ben Boeckel
e57b94c4ac docs: fix typo in s3/storj docs 2025-05-04 18:57:47 +02:00
Tho Neyugn
3273bf3716 serve s3: remove redundant handler initialization 2025-05-01 16:49:11 +01:00
Nick Craig-Wood
f5501edfcf Changelog updates from Version 1.69.2 2025-05-01 16:43:16 +01:00
Nick Craig-Wood
2404831725 sftp: add --sftp-http-proxy to connect via HTTP CONNECT proxy 2025-04-29 14:16:17 +01:00
Nick Craig-Wood
9f0e237931 Add Jugal Kishore to contributors 2025-04-29 14:16:09 +01:00
Jugal Kishore
f752eaa298 docs: correct SSL docs anchor link from #ssl-tls to #tls-ssl
Fixed the anchor link in the documentation that points to the SSL/TLS section.
This change ensures the link directs correctly to the intended section (#tls-ssl) instead of the incorrect #ssl-tls.

No functional code changes, documentation only.
2025-04-28 10:19:35 +02:00
Nick Craig-Wood
1f8373fae8 drive: metadata: fix error when setting copy-requires-writer-permission on a folder
This appears not to be allowed, so this fixes the problem by ignoring
that metadata for a folder.

Fixes #8517
2025-04-25 12:15:37 +01:00
Nick Craig-Wood
b94f80b9d7 docs: Update contributors
- Add Andrew Kreimer to contributors
- Add Christian Richter to contributors
- Add Ed Craig-Wood to contributors
- Add Klaas Freitag to contributors
- Add Ralf Haferkamp to contributors
2025-04-25 12:14:37 +01:00
dependabot[bot]
5f4e983ccb build: bump golang.org/x/net from 0.36.0 to 0.38.0
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.36.0 to 0.38.0.
- [Commits](https://github.com/golang/net/compare/v0.36.0...v0.38.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.38.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-22 13:49:35 +02:00
Ed Craig-Wood
28b6f38135 Update README.md
removed warp as project sponsor
2025-04-16 10:42:00 +01:00
Andrew Kreimer
6adb4056bb docs: fix typos via codespell
There are some typos in the changelog.

Fix them via codespell.
2025-04-16 09:24:01 +02:00
Klaas Freitag
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This change adds a new vendor called "infinitescale" to the webdav
backend. It enables the ownCloud Infinite Scale
https://github.com/owncloud/ocis project and implements its specific
chunked uploader following the tus protocol https://tus.io

Signed-off-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Klaas Freitag <klaas.freitag@kiteworks.com>
Co-authored-by: Christian Richter <crichter@owncloud.com>
Co-authored-by: Christian Richter <1058116+dragonchaser@users.noreply.github.com>
Co-authored-by: Ralf Haferkamp <r.haferkamp@opencloud.eu>
2025-04-11 12:23:55 +01:00
Nick Craig-Wood
e0c99d6203 onedrive: fix metadata ordering in permissions
Before this change, due to a quirk in Graph, User permissions could be
lost when applying permissions.

Fixes #8465
2025-04-11 10:38:51 +01:00
Nick Craig-Wood
7af1a930b7 Add Ben Alex to contributors 2025-04-11 10:38:51 +01:00
Nick Craig-Wood
6e46ee4ffa Add simwai to contributors 2025-04-11 10:38:51 +01:00
Ben Alex
4f1fc1a84e iclouddrive: fix so created files are writable
At present, any created file (e.g. through the touch command, copy, mount,
etc.) is read-only in iCloud.

This has been reported by users at
https://forum.rclone.org/t/icloud-and-file-editing-permissions/50659.
2025-04-10 11:38:38 +01:00
simwai
c10b6c5e8e cmd/authorize: show required arguments in help text 2025-04-09 16:30:38 +01:00
yuval-cloudinary
52ff407116 cloudinary: var naming convention - #8416 2025-04-09 15:03:59 +01:00
yuval-cloudinary
078d202f39 cloudinary: automatically add/remove known media files extensions #8416 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
3e105f7e58 Add Markus Gerstel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
02ca72e30c Add Enduriel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
e567c52457 Add huanghaojun to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
10501d0398 Add simonmcnair to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
972ed42661 Add Samantha Bowen to contributors 2025-04-09 15:03:59 +01:00
Markus Gerstel
48802b0a3b s3: documentation regression - fixes #8438
We lost a previous documentation fix (#7077) detailing how to restore
single objects from AWS S3 Glacier.

Also make clearer that rclone provides restore functionality natively.

Co-authored-by: danielkrajnik <dan94kra@gmail.com>
2025-04-09 14:18:18 +01:00
Enduriel
a9c7c493cf hash: add SHA512 support for file hashes 2025-04-09 14:16:22 +01:00
huanghaojun
49f6ed5f5e vfs: fix inefficient directory caching when directory reads are slow
Before this change, when querying directories with large datasets, if
the query duration exceeded the directory cache expiration time, the
cache became invalid by the time results were retrieved. This meant
every execution of `_readDir` triggered `_readDirFromEntries`,
resulting in prolonged processing times.

After this change we update the directory time with the time at the
end of the query.
2025-04-09 11:58:09 +01:00
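[Editor's note] A minimal sketch of the timing change above, under the assumption of a simple TTL-based directory cache (not the real VFS code): stamping the cache time after the listing completes means a slow listing is not already expired the moment it returns.

```go
package main

import (
	"fmt"
	"time"
)

type dirCache struct {
	entries []string
	read    time.Time // when the cache was last filled
	ttl     time.Duration
}

func (d *dirCache) expired() bool {
	return time.Since(d.read) > d.ttl
}

// refresh simulates a slow backend listing. Setting d.read after the
// query finishes (rather than before it starts) keeps a listing that
// took longer than the TTL fresh when it returns.
func (d *dirCache) refresh(list func() []string) {
	d.entries = list()
	d.read = time.Now() // the fix: use the time at the end of the query
}

func main() {
	d := &dirCache{ttl: 20 * time.Millisecond}
	d.refresh(func() []string {
		time.Sleep(50 * time.Millisecond) // slower than the TTL
		return []string{"a", "b"}
	})
	fmt.Println("expired immediately after refresh?", d.expired())
}
```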
simonmcnair
a5d03e0ada docs: update fuse version in docker docs 2025-04-09 11:54:06 +01:00
Samantha Bowen
199f61cefa fs/config: Read configuration passwords from stdin even when terminated with EOF - fixes #8480 2025-04-09 11:41:10 +01:00
Dan McArdle
fa78c6443e cmd/gitannex: Reject unknown layout modes in INITREMOTE
This is a "fail fast" improvement. Now, we will reject invalid layout
modes at setup time, rather than deferring failure until the user
attempts a transfer.
2025-04-09 11:27:44 +01:00
Dan McArdle
52e2e4b84c cmd/gitannex: Add configparse.go and refactor
This is a behavior-preserving refactor. I'm mostly just moving the code
that defines and parses configs (e.g. "rcloneremotename") into a new
source file. This lets us focus more on implementing the text protocol
in gitannex.go.
2025-04-09 11:27:44 +01:00
Dan McArdle
1c933372fe cmd/gitannex: Permit remotes with options
It looks like commit 2a1e28f5f5 did not
fix the errors in the integration tests that I hoped it would. Upon
further inspection, I noticed that I forgot that remotes can have
options just like backends.

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Specifically, I believe that TestGitAnnexFstestBackendCases/HandlesInit
should no longer fail on the Azure backend with "INITREMOTE-FAILURE
remote does not exist: TestAzureBlob,directory_markers:".

Issue #7984
2025-04-09 11:27:44 +01:00
Nick Craig-Wood
f5dfe3f5a6 serve ftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
5702b7578c serve sftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
703788b40e serve restic: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
aef9c2117e serve s3: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
2a42d95385 serve dlna: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
e37775bb41 serve webdav: add serve rc interface - fixes #4505 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
780f4040ea serve http: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
0b7be6ffb9 serve nfs: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
4d9a165e56 serve: Add rc control for serve commands #4505
This adds the framework for serving. The individual servers will be
added in separate commits.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
21e5fa192a configstruct: add SetAny to parse config from the rc
Now that we have unified the config, we can make a much more
convenient rc interface which mirrors the command line exactly, rather
than using the structure of the internal Go structs.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cf571ad661 rc: In options/info make FieldName contain a "." if it should be nested
Before this fix, the output would have "FieldName": "ListenAddr" where it
actually needs to be set in a sub object "HTTP".

After this fix it outputs "FieldName": "HTTP.ListenAddr" to indicate
"ListenAddr" needs to be set in the object "HTTP".
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
b1456835d8 serve restic: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
b930c4b437 serve s3: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cebd588092 serve http: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
3c981e6c2c serve webdav: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
6054c4e49d auth proxy: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
028316ba5d auth proxy: add VFS options parameter for use for default VFS
This is for use from the RC API.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
df457f5802 serve: make the servers self registering
This is so that they can import cmd/serve without causing an import
loop.

The active servers can now be configured by commenting lines out in
cmd/all/all.go like all the other commands.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
084e35c49d lib/http: fix race between Serve() and Shutdown()
This was discovered by the race detector.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
90ea4a73ad lib/http: add Addr() method to return the first configured server address 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
efe8ac8f35 Add Danny Garside to contributors 2025-04-09 11:12:06 +01:00
Danny Garside
894ef3b375 docs: fix minor typo in box docs 2025-04-08 20:51:22 +01:00
Nick Craig-Wood
385465bfa9 sync: implement --list-cutoff to allow on disk sorting for reduced memory use
Before this change, rclone had to load an entire directory into RAM in
order to sort it so it could be synced.

With directories with millions of entries, this used too much memory.

This fixes the problem by using an on disk sort when there are more
than --list-cutoff entries in a directory.

Fixes #7974
2025-04-08 18:02:24 +01:00
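[Editor's note] A sketch of the cutoff decision described above, with the on disk part stubbed out: once more than --list-cutoff entries arrive, the sorter would start spilling to disk instead of holding everything in RAM. The type and method names here are assumptions for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// sorter accumulates directory entries and marks itself as needing an
// external (on disk) sort once more than cutoff entries arrive.
type sorter struct {
	cutoff  int
	entries []string
	onDisk  bool
}

func (s *sorter) Add(name string) {
	s.entries = append(s.entries, name)
	if !s.onDisk && len(s.entries) > s.cutoff {
		s.onDisk = true // real code would start spilling to disk here
	}
}

func (s *sorter) Send() []string {
	sort.Strings(s.entries)
	return s.entries
}

func main() {
	s := &sorter{cutoff: 3}
	for _, name := range []string{"d", "b", "e", "a", "c"} {
		s.Add(name)
	}
	fmt.Println("spilled to disk:", s.onDisk)
	fmt.Println("sorted:", s.Send())
}
```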
Nick Craig-Wood
0148bd4668 march: Implement callback based syncing
This changes the syncing method to take callbacks for directory
listings rather than being passed the entire directory listing at
once.

This will enable out of memory syncing.
2025-04-08 18:02:24 +01:00
Nick Craig-Wood
0f7ecf6f06 list: add ListDirSortedFn for callback oriented directory listing
This will be used for the out of memory sync
2025-04-08 15:14:09 +01:00
Nick Craig-Wood
08e81f8420 list: Implement Sorter to sort directory entries
Later this will be extended to do out of memory sorts
2025-04-08 15:14:09 +01:00
Nick Craig-Wood
0ac2d2f50f cache: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
42fcb0a6fc hasher: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
490dd14bc5 compress: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
943ea0acae chunker: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
d64a97f973 union: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
5d8f1d4b88 crypt: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
b1d774c2e3 combine: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
fad579c4a2 s3: Implement paged listing interface ListP 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
37120ef7bd list: add WithListP helper to implement List for ListP backends 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
cba653d502 walk: move NewListRHelper into list.Helper to avoid circular dependency
It turns out that the list helpers were at the wrong level and needed
to be pushed down into fs/list for future work.
2025-04-08 15:14:00 +01:00
Nick Craig-Wood
2a90de9502 fs: define ListP interface for paged listing #4788 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
bff229713a accounting: Add listed stat for number of directory entries listed 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
117f583ebe walk: factor Listing helpers into their own file and add tests 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
205667143c serve nfs: make metadata files have special file handles
Metadata files have the file handle of their source file with
0x00000001 suffixed in big endian so we can look them up directly from
their file handles.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
fe84cbdc9d serve nfs: change the format of --nfs-cache-type symlink file handles
This is a backwards incompatible change which will invalidate the
current handles.

This change adds a 4 byte big endian length prefix to the handles so
we can suffix extra info onto the handles in future. This needed to be
4 bytes as Linux does not like file handles which aren't multiples of
4 bytes long.
2025-04-07 13:41:29 +01:00
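[Editor's note] A sketch of the handle format described in the two commits above: a 4 byte big endian length prefix on every handle, and metadata sidecar handles derived by suffixing 0x00000001 in big endian. Whether the suffix is applied before or after the prefix is an assumption here; this is not the serve nfs code itself.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeHandle prefixes a file handle with a 4 byte big endian length
// so extra information can be suffixed later.
func encodeHandle(fh []byte) []byte {
	out := make([]byte, 4, 4+len(fh))
	binary.BigEndian.PutUint32(out, uint32(len(fh)))
	return append(out, fh...)
}

// metadataHandle derives a metadata sidecar file's handle from its
// source file's handle by appending 0x00000001 in big endian.
func metadataHandle(fileHandle []byte) []byte {
	suffix := make([]byte, 4)
	binary.BigEndian.PutUint32(suffix, 1)
	return append(append([]byte{}, fileHandle...), suffix...)
}

func main() {
	fh := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Printf("encoded:  % x\n", encodeHandle(fh))
	fmt.Printf("metadata: % x\n", metadataHandle(encodeHandle(fh)))
}
```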
Nick Craig-Wood
533c6438f3 vfs: add --vfs-metadata-extension to expose metadata sidecar files
This adds --vfs-metadata-extension which can be used to expose sidecar
files with file metadata in. These files don't exist in the listings
until they are accessed.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b587b094c9 docs: Add rcloneui.com as Silver Sponsor 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
525798e1a5 Add Klaas Freitag to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
ea63052d36 Add eccoisle to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b5a99c5011 Add Fernando Fernández to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
56b7015675 Add alingse to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
4ff970ebab Add Jörn Friedrich Dreyer to contributors 2025-04-07 13:41:29 +01:00
eccoisle
dccb5144c3 docs: replace option --auto-filename-header with --header-filename 2025-04-06 14:28:34 +02:00
dependabot[bot]
33b087171a build: update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

See: https://github.com/golang-jwt/jwt/security/advisories/GHSA-mh63-6h87-95cp
See: https://www.cve.org/CVERecord?id=CVE-2025-30204

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-06 11:46:13 +01:00
Fernando Fernández
58d9ae1c60 docs/googlephotos: fix typos 2025-04-06 10:49:02 +02:00
dependabot[bot]
20302ab6b9 build: bump github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.1 to 4.5.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 17:26:44 +02:00
alingse
6fb0de62a4 operations: fix call fmt.Errorf with wrong err 2025-04-04 16:21:45 +02:00
Jörn Friedrich Dreyer
839eef0db2 webdav: retry propfind on 425 status
This retries PROPFIND on a 425 status.

In ownCloud Infinite Scale, files might be in that state if
postprocessing is still ongoing; all metadata are available anyway.

Allow item status 425 "too early" for items when changing metadata.

This fixes the upload behavior with ownCloud Infinite Scale.

Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
Co-authored-by: Klaas Freitag <kraft@freisturz.de>
2025-03-26 12:51:04 +00:00
Nick Craig-Wood
267eebe5c9 Add --max-connections to control maximum backend concurrency 2025-03-25 15:49:27 +00:00
Nick Craig-Wood
755d72a591 rc: fix debug/* commands not being available over unix sockets
This was caused by an incorrect handler URL which was passing the
debug/* commands to the debug/pprof handler by accident. This only
happened when using unix sockets.
2025-03-25 15:30:49 +00:00
Dan McArdle
4d38424e6c cmd/gitannex: Prevent tests from hanging when assertion fails
This fixes another way that the gitannex tests can hang.

The issue is that our test harness explicitly called `wg.Done()` at the
end of each test case, but when assertions checked with [require] fail,
they halt test execution and prevent `wg.Done()` from happening.

A second issue is that we were incorrectly calling [require] functions
in the goroutine that runs the gitannex server. I found that [require]
calls [testing.T.FailNow] under the hood, which says "FailNow must be
called from the goroutine running the test or benchmark function, not
from other goroutines created during the test." [1]

This commit fixes both issues by replacing the explicit synchronization
with a `chan error`. This enables us to run the gitannex server in a
goroutine, interact with the server in the test's goroutine, and then at
then end use [require] on the test-associated goroutine to ensure the
server's error/nil value matches expectations.

[1]: https://pkg.go.dev/testing#T.FailNow
2025-03-18 12:38:04 +00:00
Dan McArdle
53624222c9 cmd/gitannex: Add explicit timeout for mock stdout reads in tests
It seems like (*testState).readLine() hangs indefinitely when it's
waiting for a line that will never be written [1].

This commit adds an explicit 30-second timeout when reading from the
internal mock stdout. Given that we integrate with fstest, this timeout
needs to be sufficiently long that it accommodates slow-but-successful
operations on real remotes.

[1]: https://github.com/rclone/rclone/pull/8423#issuecomment-2701601290
2025-03-18 12:38:04 +00:00
nielash
44e83d77d7 http: correct root if definitely pointing to a file - fixes #8428
This was formalized in
c69eb84573
But it appears that we forgot to update `http`, and the `FsRoot` test didn't
catch it because we don't currently have an http integration test.
2025-03-17 18:05:23 +00:00
Nick Craig-Wood
19aa366d88 pool: add --max-buffer-memory to limit total buffer memory usage 2025-03-17 18:01:15 +00:00
Nick Craig-Wood
3fb4164d87 filter: Add --hash-filter to deterministically select a subset of files
Fixes #8400
2025-03-17 17:25:59 +00:00
dependabot[bot]
4e2b78f65d build: update golang.org/x/net to 0.36.0. to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This updates golang.org/x/net to fix the problem.

See: https://pkg.go.dev/vuln/GO-2025-3487
See: https://www.cve.org/CVERecord?id=CVE-2025-22869
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-17 17:25:12 +00:00
Nick Craig-Wood
e47f59e1f9 rc: add add short parameter to core/stats to not return transferring and checking 2025-03-17 13:44:37 +00:00
Nick Craig-Wood
63c4fef27a fs: fix corruption of SizeSuffix with "B" suffix in config (eg --min-size)
Before this change, the config system round tripped fs.SizeSuffix
values through strings like this, corrupting them in the process.

    "2B" -> 2 -> "2" -> 2048

This caused `--min-size 2B` to be interpreted as `--min-size 2k`.

This fix makes sure SizeSuffix values have a "B" suffix when turned
into a string where necessary, so it becomes

    "2B" -> 2 -> "2B" -> 2

In rclone v2 we should probably declare unsuffixed SizeSuffix values
are in bytes not kBytes (done for rsync compatibility) but this would
be a backwards incompatible change which we don't want for v1.

Fixes #8437
Fixes #8212
Fixes #5169
2025-03-13 09:56:20 +00:00
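[Editor's note] The round trip from the commit above, reproduced with a toy parser/formatter (an assumption, not rclone's fs.SizeSuffix code): an unsuffixed number is taken as kBytes, so dropping the "B" on the way out multiplies the value by 1024, while keeping the suffix makes the round trip lossless.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse mimics a SizeSuffix-style parser where an unsuffixed number is
// interpreted as kBytes (for rsync compatibility) - illustrative only.
func parse(s string) int64 {
	if strings.HasSuffix(s, "B") {
		n, _ := strconv.ParseInt(strings.TrimSuffix(s, "B"), 10, 64)
		return n
	}
	n, _ := strconv.ParseInt(s, 10, 64)
	return n * 1024
}

// formatBuggy drops the suffix, so a round trip multiplies by 1024.
func formatBuggy(n int64) string { return strconv.FormatInt(n, 10) }

// formatFixed keeps the "B" suffix, so the round trip is lossless.
func formatFixed(n int64) string { return strconv.FormatInt(n, 10) + "B" }

func main() {
	n := parse("2B")
	fmt.Println(`"2B" ->`, n, "-> buggy round trip ->", parse(formatBuggy(n)))
	fmt.Println(`"2B" ->`, n, "-> fixed round trip ->", parse(formatFixed(n)))
}
```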
Nick Craig-Wood
a7a7c1d592 filters: show --min-size and --max-size in --dump filters 2025-03-12 12:32:21 +00:00
Nick Craig-Wood
6a7e68aaf2 build: check docs for edits of autogenerated sections
This adds a lint step which checks the top commit for edits to
autogenerated doc sections.
2025-03-10 22:07:19 +00:00
Nick Craig-Wood
6e7a3795f1 Add jack to contributors 2025-03-10 22:07:19 +00:00
jack
177337686a docs: fix incorrect mentions of vfs-cache-min-free-size 2025-03-09 01:23:42 +01:00
Nick Craig-Wood
ccef29bbff fs/object: fix memory object out of bounds Seek 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
64b3d1d539 serve nfs: fix unlikely crash 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
aab6643cea docs: update minimum OS requirements for go1.24 2025-03-05 17:20:10 +00:00
Dan McArdle
2a1e28f5f5 cmd/gitannex: Tweak parsing of "rcloneremotename" config
The "rcloneremotename" (aka "target") config parameter is now permitted
to contain (1) remote names that are defined by environment variables,
but not in an rclone config file, and (2) backend strings such as
":memory:".

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Issue #7984
2025-03-04 16:40:32 +00:00
Dan McArdle
db9205b298 cmd/gitannex: Drop var rebindings now that we have go1.23 2025-03-04 16:40:32 +00:00
Zachary Vorhies
964c6204dd docs: add note for using rclone cat for slicing out a byte range from a file 2025-03-04 16:31:56 +00:00
Jonathan Giannuzzi
65f7eb0fba rcserver: improve content-type check
Some libraries use `application/json; charset=utf-8` as their `Content-Type`, which is valid.
However we were not decoding the JSON body in that case, resulting in issues communicating with the rcserver.
2025-03-04 16:28:34 +00:00
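[Editor's note] A minimal sketch of the improved check described above, using the standard library's mime.ParseMediaType so that parameters such as charset are ignored when deciding whether a body is JSON. The helper name is illustrative, not the rcserver code.

```go
package main

import (
	"fmt"
	"mime"
)

// isJSON reports whether a Content-Type header denotes a JSON body,
// ignoring parameters such as "charset=utf-8".
func isJSON(contentType string) bool {
	mediaType, _, err := mime.ParseMediaType(contentType)
	if err != nil {
		return false
	}
	return mediaType == "application/json"
}

func main() {
	for _, ct := range []string{
		"application/json",
		"application/json; charset=utf-8",
		"text/plain",
	} {
		fmt.Printf("%-35q -> %v\n", ct, isJSON(ct))
	}
}
```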
Nick Craig-Wood
401cf81034 build: modernize Go usage
This commit modernizes Go usage. This was done with:

go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...

Then files needed to be `go fmt`ed and a few comments needed to be
restored.

The modernizations include replacing

- if/else conditional assignment by a call to the built-in min or max functions added in go1.21
- sort.Slice(x, func(i, j int) bool) { return s[i] < s[j] } by a call to slices.Sort(s), added in go1.21
- interface{} by the 'any' type added in go1.18
- append([]T(nil), s...) by slices.Clone(s) or slices.Concat(s), added in go1.21
- loop around an m[k]=v map update by a call to one of the Collect, Copy, Clone, or Insert functions from the maps package, added in go1.21
- []byte(fmt.Sprintf...) by fmt.Appendf(nil, ...), added in go1.19
- append(s[:i], s[i+1]...) by slices.Delete(s, i, i+1), added in go1.21
- a 3-clause for i := 0; i < n; i++ {} loop by for i := range n {}, added in go1.22
2025-02-28 11:31:14 +00:00
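[Editor's note] A few of the rewrites listed above, shown side by side in a small runnable program (go1.22 or later). These are generic examples of the modernizations, not excerpts from rclone's code.

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// interface{} -> any (go1.18)
	var v any = 42
	fmt.Println(v)

	// 3-clause for i := 0; i < n; i++ -> for i := range n (go1.22)
	n := 3
	for i := range n {
		fmt.Println("iteration", i)
	}

	// sort.Slice with a less function -> slices.Sort (go1.21)
	s := []int{3, 1, 2}
	slices.Sort(s)
	fmt.Println(s)

	// if/else conditional assignment -> built-in min/max (go1.21)
	fmt.Println(min(10, 7), max(10, 7))
}
```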
Nick Craig-Wood
431386085f build: update all dependencies and fix deprecations 2025-02-26 18:00:58 +00:00
Nick Craig-Wood
bf150a5b7d build: update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This affects users of `rclone serve sftp`.

See: https://pkg.go.dev/vuln/GO-2025-3487
2025-02-26 18:00:58 +00:00
Nick Craig-Wood
ddecfe6e77 build: make go1.23 the minimum go version
This is necessary now that golang.org/x/crypto is only allowing the
last two versions of Go.

See: https://go.googlesource.com/crypto/+/89ff08d67c4d79f9ac619aaf1f7388888798651f
2025-02-26 18:00:58 +00:00
Dan McArdle
68e40dc141 cmd/gitannex: Add to integration tests
This commit registers gitannex's unit tests with the integration tester
by updating the config.yaml file.

Since we have not yet updated the e2e tests to use the fstest framework,
this commit also adds a case to the e2e tests' skipE2eTestIfNecessary()
function.

Issue #7984
2025-02-26 17:18:02 +00:00
Dan McArdle
325f400a88 cmd/gitannex: Simplify verbose failures in tests 2025-02-26 17:18:02 +00:00
Dan McArdle
be33e281b3 cmd/gitannex: Port unit tests to fstest
This enables the unit tests to run on any given backend, via the
`-remote` flag, e.g. `go test -v ./cmd/gitannex/... -remote dropbox:`.

We should also port the gitannex e2e tests at some point.

Issue: #7984
2025-02-26 17:18:02 +00:00
Nick Craig-Wood
0010090d05 vfs: fix integration test failures
In this commit

ceef78ce44 vfs: fix directory cache serving stale data

We added a new test which caused lots of integration test failures.

This fixes the problem by disabling the test unless the feature flag
DirModTimeUpdatesOnWrite is present on the remote.
2025-02-26 12:21:35 +00:00
Nick Craig-Wood
b7f26937f1 azureblob: fix errors not being retried when doing single part copy
Sometimes the Azure blob servers reply with 503 ServerBusy errors and
these should be retried.

Before this change, when testing to see if a single part server side
copy was done, ServerBusy errors were returned to the user rather than
being retried.

Wrapping the call in the pacer fixes the problem and ensures it is
retried properly using the --low-level-retries mechanism.
2025-02-25 10:22:49 +00:00
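[Editor's note] A standalone sketch of the idea behind wrapping the copy-status check in the pacer: keep retrying while the error is a retryable status such as 503 ServerBusy, up to a --low-level-retries style limit. Rclone's real pacer also handles backoff and rate limiting; the names and status-code list here are assumptions.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryableStatus mimics the check for status codes that should be
// retried rather than returned to the user.
func retryableStatus(code int) bool {
	switch code {
	case 408, 429, 500, 502, 503, 504:
		return true
	}
	return false
}

// callWithRetries is a stand-in for wrapping a request in the pacer:
// retry while the error is retryable, up to lowLevelRetries attempts.
func callWithRetries(lowLevelRetries int, do func() (int, error)) error {
	for try := 1; try <= lowLevelRetries; try++ {
		code, err := do()
		if err == nil {
			return nil
		}
		if !retryableStatus(code) {
			return err
		}
		time.Sleep(10 * time.Millisecond) // the pacer would back off here
	}
	return errors.New("ran out of retries")
}

func main() {
	attempts := 0
	err := callWithRetries(10, func() (int, error) {
		attempts++
		if attempts < 3 {
			return 503, errors.New("ServerBusy")
		}
		return 200, nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```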
Nick Craig-Wood
5037d7368d azureblob: handle retry error codes more carefully 2025-02-24 10:58:26 +00:00
Nick Craig-Wood
0ccf65017f touch: make touch obey --transfers
Before this change, when executed on a directory, rclone would only
touch files sequentially.

This change makes rclone touch up to --transfers files at once.

Fixes #8402
2025-02-21 15:53:47 +00:00
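[Editor's note] A sketch of bounded concurrency as described above, using a buffered-channel semaphore to touch at most `transfers` files at once. This illustrates the pattern only; the function names are assumptions, not the touch command's implementation.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// touchAll touches files with at most transfers running concurrently.
func touchAll(files []string, transfers int, touch func(string)) {
	sem := make(chan struct{}, transfers) // bounded concurrency
	var wg sync.WaitGroup
	for _, f := range files {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			touch(name)
		}(f)
	}
	wg.Wait()
}

func main() {
	files := []string{"a", "b", "c", "d", "e"}
	touchAll(files, 2, func(name string) {
		time.Sleep(10 * time.Millisecond)
		fmt.Println("touched", name)
	})
}
```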
Nick Craig-Wood
85d467e16a Add luzpaz to contributors 2025-02-21 15:53:44 +00:00
Nick Craig-Wood
cf4b55d965 Add Dave Vasilevsky to contributors 2025-02-21 15:53:10 +00:00
luzpaz
e0d477804b docs: fix various typos
Found via `codespell -q 3 -S "./docs/static,./fs/rc/params_test.go" -L aadd,afile,alledges,bbefore,bu,buda,copys,couldn,crashers,crypted,ddelete,deriver,failre,goup,hashin,hel,inbraces,keep-alives,ket,medias,ment,mis,nd,nin,notin,ois,ot,parth,re-use,re-using,responser,rin,sav,splited,streamin,synching,te,twoo,ue,unknwon,wasn`
2025-02-19 20:30:44 +00:00
Dave Vasilevsky
4fc9583feb dropbox: Retry link without expiry
Dropbox only allows public links with expiry for certain account types.
Rather than erroring for other accounts, retry without expiry.
2025-02-17 20:39:14 +00:00
Dave Vasilevsky
904c9b2e24 Dropbox: Support Dropbox Paper
These files must be "exported" to be useful. The export process
is controlled by the --dropbox-export-formats flag and the ancillary flags
--dropbox-skip-exports and --dropbox-show-all-exports, modeled on the
Google Drive equivalents.
2025-02-17 18:20:37 +00:00
emyarod
cdfd748241 chore: update contributor email 2025-02-16 07:52:20 +00:00
Nick Craig-Wood
661027f2cf docs: correct stable release workflow 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
7ecd1638eb Add Lorenz Brun to contributors 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
06b92ddeb3 Add Michael Kebe to contributors 2025-02-15 20:08:51 +00:00
Lorenz Brun
ceef78ce44 vfs: fix directory cache serving stale data
The VFS directory cache layer didn't update directory entry properties
if they were reused after cache invalidation.

Update them unconditionally, as newDir sets them to the same value and
setting a pointer is cheaper in both LoC and CPU cycles than a
branch.

Also add a test exercising this behavior.

Fixes #6335
2025-02-15 15:22:16 +00:00
Anagh Kumar Baranwal
6560ea9bdc build: fix docker plugin build - fixes #8394
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-15 14:23:43 +00:00
Michael Kebe
cda82f3d30 docs: improved sftp limitations
Added a link to `--sftp-path-override` for a better solution with working hash calculation.
2025-02-15 11:11:26 +01:00
Nick Craig-Wood
7da2d8b507 Changelog updates from Version v1.69.1 2025-02-14 17:15:50 +00:00
Nick Craig-Wood
fb7919928c docs: add FileLu as sponsors and tidy sponsor logos 2025-02-14 17:15:50 +00:00
Anagh Kumar Baranwal
5d670fc54a accounting: fix percentDiff calculation -- fixes #8345
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-14 14:58:12 +00:00
Nick Craig-Wood
b5e72e2fc3 vfs: fix the cache failing to upload symlinks when --links was specified
Before this change, if --vfs-cache-mode writes or above was set and
--links was in use, when a symlink was saved then the VFS failed to
upload it. This meant when the VFS was restarted the link wasn't there
any more.

This was caused by the local backend, which we use to manage the VFS
cache, picking up the global --links flag.

This patch makes sure that the internal instantiations of the local
backend in the VFS cache never use the --links flag or the
--local-links flag, even if specified on the command line.

Fixes #8367
2025-02-13 13:30:52 +00:00
Nick Craig-Wood
8997993a30 Add jbagwell-akamai to contributors 2025-02-13 13:30:52 +00:00
Nick Craig-Wood
b721f363e5 Add ll3006 to contributors 2025-02-13 13:30:40 +00:00
Zachary Vorhies
d93dad22fe doc: add note on concurrency of rclone purge 2025-02-13 11:41:37 +00:00
jbagwell-akamai
e27bf8b738 s3: add latest Linode Object Storage endpoints
Added missing Linode Object Storage endpoints AMS, MAA, CGK, LON, LAX, MAD, MEL, MIA, OSA, GRU, SIN
2025-02-13 09:36:22 +00:00
Janne Hellsten
539e96cc1f cmd: fix crash if rclone is invoked without any arguments - Fixes #8378 2025-02-12 21:31:05 +00:00
Anagh Kumar Baranwal
5086aad0b2 build: disable docker builds on PRs & add missing dockerfile changes
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-12 21:29:01 +00:00
ll3006
c1b414e2cf sync: copy dir modtimes even when copyEmptySrcDirs is false - fixes #8317
Before, after a sync, only file modtimes were updated when not using
--copy-empty-src-dirs. This change ensures directory modtimes are updated to
match the source folder, regardless of copyEmptySrcDirs. The flag
--no-update-dir-modtime (which previously did nothing) will disable this.
2025-02-12 21:27:15 +00:00
ll3006
2ff8aa1c20 sync: add tests to check dir modtimes are kept when syncing
This adds tests to check dir modtimes are updated from source
when syncing even if they've changed in the destination.
This should work both with and without --copy-empty-src-dirs.
2025-02-12 21:27:15 +00:00
nielash
6d2a72367a fix golangci-lint errors 2025-02-12 21:24:55 +00:00
nielash
9df751d4ec bisync: fix false positive on integration tests
5f70918e2c
introduced a new INFO log when making a directory, which differs depending on
whether the backend supports setting directory metadata. This caused false
positives on the bisync createemptysrcdirs test.

This fixes it by ignoring that log line.
2025-02-12 21:24:55 +00:00
Nick Craig-Wood
e175c863aa s3: split the GCS quirks into -s3-use-x-id and -s3-sign-accept-encoding #8373
Before this we applied both these quirks if provider == "GCS".

Splitting them like this makes them applicable for other providers
such as ActiveScale.
2025-02-12 21:08:16 +00:00
Nick Craig-Wood
64cd8ae0f0 Add Joel K Biju to contributors 2025-02-12 21:08:11 +00:00
Anagh Kumar Baranwal
46b498b86a stats: fix the speed not getting updated after a pause in the processing
This shifts the behavior of the average loop to be a persistent loop
that gets resumed/paused when transfers & checks are started/completed.

Previously, the averageLoop was stopped on completion of
transfers & checks but failed to start again due to the protection of
the sync.Once.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 12:52:55 +00:00
Joel K Biju
b76cd74087 opendrive: added --opendrive-access flag to handle permissions 2025-02-11 12:43:10 +00:00
nielash
3b49fd24d4 bisync: fix listings missing concurrent modifications - fixes #8359
Before this change, there was a bug affecting listing files when:

- a given bisync run had changes in the 2to1 direction
AND
- the run had NO changes in the 1to2 direction
AND
- at least one of the changed files changed AGAIN during the run
(specifically, after the initial march and before the transfers.)

In this situation, the listings on one side would still retain the prior version
of the changed file, potentially causing conflicts or errors.

This change fixes the issue by making sure that if we're updating the listings
on one side, we must also update the other. (We previously tried to skip it for
efficiency, but this failed to account for the possibility that a changed file
could change again during the run.)
2025-02-11 11:21:02 +00:00
Anagh Kumar Baranwal
c0515a51a5 Added parallel docker builds and caching for go build in the container
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 10:17:50 +00:00
Jonathan Giannuzzi
dc9c87279b smb: improve connection pooling efficiency
* Lower pacer minSleep to establish new connections faster
* Use Echo requests to check whether connections are working (required an upgrade of go-smb2)
* Only remount shares when needed
* Use context for connection establishment
* When returning a connection to the pool, only check the ones that encountered errors
* Close connections in parallel
2025-02-04 12:35:19 +00:00
Nick Craig-Wood
057fdb3a9d lib/oauthutil: fix redirect URL mismatch errors - fixes #8351
In this commit we introduced support for client credentials flow:

65012beea4 lib/oauthutil: add support for OAuth client credential flow

This involved re-organising the oauth credentials.

Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.

This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.

These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.

- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
2025-02-03 12:15:54 +00:00
Nick Craig-Wood
3daf62cf3d b2: fix "fatal error: concurrent map writes" - fixes #8355
This was caused by the embryonic metadata support. Since this isn't
actually visible externally, this patch removes it for the time being.
2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ef495fa76 Add Alexander Minbaev to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
722c567504 Add Zachary Vorhies to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ebe1c0f81 Add Jess to contributors 2025-02-03 11:33:21 +00:00
Alexander Minbaev
2dc06b2548 s3: add IBM IAM signer - fixes #7617 2025-02-03 11:29:31 +00:00
Zachary Vorhies
b52aabd8fe serve nfs: update docs to note Windows is not supported - fixes #8352 2025-02-01 12:20:09 +00:00
Jess
6494ac037f cmd/config(update remote): introduce --no-output option
Fixes #8190
2025-01-24 17:22:58 +00:00
jkpe
5c3a1bbf30 s3: add DigitalOcean regions SFO2, LON1, TOR1, BLR1 2025-01-24 10:41:48 +00:00
631 changed files with 46213 additions and 16197 deletions

View File

@@ -23,15 +23,18 @@ jobs:
build:
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 60
defaults:
run:
shell: bash
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.22', 'go1.23']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -42,14 +45,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,14 +61,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -75,20 +78,14 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.22
- job_name: go1.24
os: ubuntu-latest
go: '1.22'
quicktest: true
racequicktest: true
- job_name: go1.23
os: ubuntu-latest
go: '1.23'
go: '1.24'
quicktest: true
racequicktest: true
@@ -98,7 +95,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -109,7 +106,6 @@ jobs:
check-latest: true
- name: Set environment variables
shell: bash
run: |
echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
@@ -118,7 +114,6 @@ jobs:
if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
@@ -128,7 +123,6 @@ jobs:
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
# https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
# https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008
@@ -157,7 +151,6 @@ jobs:
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
@@ -169,29 +162,24 @@ jobs:
env
- name: Build rclone
shell: bash
run: |
make
- name: Rclone version
shell: bash
run: |
rclone version
- name: Run tests
shell: bash
run: |
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
@@ -199,14 +187,12 @@ jobs:
if: matrix.librclonetest
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
make ci_beta
@@ -225,19 +211,20 @@ jobs:
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.23.0-rc.1'
go-version: '1.24'
check-latest: true
cache: false
@@ -295,6 +282,20 @@ jobs:
- name: Scan for vulnerabilities
run: govulncheck ./...
- name: Check Markdown format
uses: DavidAnson/markdownlint-cli2-action@v20
with:
globs: |
CONTRIBUTING.md
MAINTAINERS.md
README.md
RELEASE.md
docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
- name: Scan edits of autogenerated files
run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
if: github.event_name == 'pull_request'
android:
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
@@ -303,7 +304,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -311,10 +312,9 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.24.0-rc.1'
go-version: '>=1.25.0-rc.1'
- name: Set global environment variables
shell: bash
run: |
echo "VERSION=$(make version)" >> $GITHUB_ENV
@@ -333,7 +333,6 @@ jobs:
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -347,7 +346,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -360,7 +358,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -373,7 +370,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

View File

@@ -1,77 +0,0 @@
name: Docker beta build
on:
push:
branches:
- master
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
# `secrets.GITHUB_TOKEN` is a secret that's automatically generated by
# GitHub Actions at the start of a workflow run to identify the job.
# This is used to authenticate against GitHub Container Registry.
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
push: true # push the image to ghcr
tags: |
ghcr.io/rclone/rclone:beta
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .

View File

@@ -0,0 +1,294 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_image.yml" -*-
name: Build & Push Docker Images
# Trigger the workflow on push or pull request
on:
push:
branches:
- '**'
tags:
- '**'
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build-image:
if: inputs.manual || (github.repository == 'rclone/rclone' && github.event_name != 'pull_request')
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- platform: linux/amd64
runs-on: ubuntu-24.04
- platform: linux/386
runs-on: ubuntu-24.04
- platform: linux/arm64
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v7
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v6
runs-on: ubuntu-24.04-arm
name: Build Docker Image for ${{ matrix.platform }}
runs-on: ${{ matrix.runs-on }}
steps:
- name: Free Space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout Repository
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Set PLATFORM Variable
run: |
platform=${{ matrix.platform }}
echo "PLATFORM=${platform//\//-}" >> $GITHUB_ENV
- name: Set CACHE_NAME Variable
shell: python
run: |
import os, re
def slugify(input_string, max_length=63):
slug = input_string.lower()
slug = re.sub(r'[^a-z0-9 -]', ' ', slug)
slug = slug.strip()
slug = re.sub(r'\s+', '-', slug)
slug = re.sub(r'-+', '-', slug)
slug = slug[:max_length]
slug = re.sub(r'[-]+$', '', slug)
return slug
ref_name_slug = "cache"
if os.environ.get("GITHUB_REF_NAME") and os.environ['GITHUB_EVENT_NAME'] == "pull_request":
ref_name_slug += "-pr-" + slugify(os.environ['GITHUB_REF_NAME'])
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"CACHE_NAME={ref_name_slug}\n")
- name: Get ImageOS
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
with:
result-encoding: string
script: |
return process.env.ImageOS
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,manifest-descriptor # Important for digest annotation (used by Github packages)
with:
images: |
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
# Cache only the go builds, the module download is cached via the docker layer caching
path: |
go-build-cache
- name: Inject Go Build Cache into Docker
uses: reproducible-containers/buildkit-cache-dance@v3
with:
cache-map: |
{
"go-build-cache": "/root/.cache/go-build"
}
skip-extraction: ${{ steps.go-cache.outputs.cache-hit }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Publish Image Digest
id: build
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
provenance: false
# don't specify 'tags' here (error "get can't push tagged ref by digest")
# tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
platforms: ${{ matrix.platform }}
outputs: |
type=image,name=ghcr.io/${{ env.REPO_NAME }},push-by-digest=true,name-canonical=true,push=true
cache-from: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
cache-to: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }},image-manifest=true,mode=max,compression=zstd
- name: Export Image Digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
retention-days: 1
if-no-files-found: error
merge-image:
name: Merge & Push Final Docker Image
runs-on: ubuntu-24.04
needs:
- build-image
steps:
- name: Download Image Digests
uses: actions/download-artifact@v5
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
images: |
${{ env.REPO_NAME }}
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Extract Tags
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
tags = [f"--tag '{tag}'" for tag in metadata["tags"]]
tags_string = " ".join(tags)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"TAGS={tags_string}\n")
- name: Extract Annotations
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
annotations = [f"--annotation '{annotation}'" for annotation in metadata["annotations"]]
annotations_string = " ".join(annotations)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"ANNOTATIONS={annotations_string}\n")
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the workflow. In this case, it will
# either be the user who created the Release or who manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create & Push Manifest List
working-directory: /tmp/digests
run: |
docker buildx imagetools create \
${{ env.TAGS }} \
${{ env.ANNOTATIONS }} \
$(printf 'ghcr.io/${{ env.REPO_NAME }}@sha256:%s ' *)
- name: Inspect and Run Multi-Platform Image
run: |
docker buildx imagetools inspect --raw ${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker buildx imagetools inspect --raw ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker run --rm ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }} version


@@ -0,0 +1,49 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_plugin.yml" -*-
name: Release Build for Docker Plugin
on:
release:
types: [published]
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build_docker_volume_plugin:
if: inputs.manual || github.repository == 'rclone/rclone'
name: Build docker plugin job
runs-on: ubuntu-latest
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}


@@ -1,89 +0,0 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USER }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
push: true
tags: |
rclone/rclone:latest
rclone/rclone:${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
rclone/rclone:${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
rclone/rclone:${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

.markdownlint.yml (new file, 43 lines)

@@ -0,0 +1,43 @@
default: true
# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
style: atx
ul-style: # MD004
style: dash
hr-style: # MD035
style: ---
code-block-style: # MD046
style: fenced
code-fence-style: # MD048
style: backtick
emphasis-style: # MD049
style: asterisk
strong-style: # MD050
style: asterisk
# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
siblings_only: true
# Allow long lines in code blocks and tables.
line-length: # MD013
code_blocks: false
tables: false
# The Markdown files used to generate docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
level: 1
front_matter_title:
# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051


@@ -15,61 +15,81 @@ with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to obscure them
- A log of the command with the `-vv` flag (e.g. output from
`rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to
obscure them
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues)
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Then [install Git](https://git-scm.com/downloads) and set your public contribution
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
Next open your terminal, change directory to your preferred folder and initialise
your local rclone project:
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```sh
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
Note that most of the terminal commands in the rest of this guide must be
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
go version
```sh
go version
```
Great, you can now compile and execute your own version of rclone:
go build
./rclone version
```sh
go build
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature
git checkout -b my-new-feature
```sh
git checkout -b my-new-feature
```
And get hacking.
You may like one of the [popular editors/IDEs for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
You may like one of the [popular editors/IDEs for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins)
and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the code you changed
When ready - test the affected functionality and run the unit tests for the
code you changed
cd folder/with/changed/files
go test -v
```sh
cd folder/with/changed/files
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
This is typically enough if you made a simple bug fix, otherwise please read
the rclone [testing](#testing) section too.
Make sure you
@@ -79,14 +99,19 @@ Make sure you
When you are done with that push your changes to GitHub:
git push -u origin my-new-feature
```sh
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
Your changes will then get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master)
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub
@@ -94,87 +119,118 @@ You may sometimes be asked to [base your changes on the latest master](#basing-y
Follow the guideline for [commit messages](#commit-messages) and then:
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```sh
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using:
git commit --amend
```sh
git commit --amend
```
If you amend commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you amend commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
Note that you are about to rewrite the GitHub history of your branch. It is good
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by:
git push --force origin my-new-feature
```sh
git push --force origin my-new-feature
```
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```sh
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you rebase commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits ###
### Squashing your commits
To combine your commits into one commit:
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```sh
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit:
git commit # To commit the undone commits as one
```sh
git commit # To commit the undone commits as one
```
otherwise, you may roll back using:
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```sh
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you squash commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
Tip: You may like to use `git rebase -i master` if you are experienced or have a
more complex situation.
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests that get run in the CI, which can be very helpful.
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then
you can run the same tests that get run in the CI, which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the whole rclone codebase uses the same coding standards. These tests also catch easy-to-make mistakes (like forgetting to check an error return).
Using these tests ensures that the whole rclone codebase uses the same coding
standards. These tests also catch easy-to-make mistakes (like forgetting
to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
```sh
go test -v ./...
```
You can also use `make`, if supported by your platform
make quicktest
```sh
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
The quicktest is [automatically run by GitHub](#github-continuous-integration)
when you push your branch to GitHub.
### Backend testing
@@ -190,41 +246,51 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
```sh
cd backend/drive
go test -v
```
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
```sh
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
cd fs/operations
go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```sh
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make check
make test
```sh
make check
make test
```
The commands may require some extra go packages which you can install with
make build_dep
```sh
make build_dep
```
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
find the results at <https://pub.rclone.org/integration-tests/>
## Code Organisation
@@ -232,46 +298,48 @@ Rclone code is organised into a small number of top level directories
with modules beneath.
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only - everything else is autogenerated
- command - these are auto-generated - edit the corresponding .go file
- content - adjust these docs only, except those marked autogenerated (or
portions marked autogenerated), where the corresponding .go file must be
edited instead; everything else is autogenerated
- commands - these are auto-generated, edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
@@ -279,6 +347,36 @@ with modules beneath.
If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation
that runs automatically on pull requests, to enforce standards and consistency.
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same
checks while writing.
HTML pages, served as the website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
a different algorithm is currently used for generating header anchors
than the one GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means when
linking to a header with text `--config string` we therefore need to use the
link `#config-string` in our Markdown source, which will not work in GitHub's
preview where `#--config-string` would be the correct link.
Most of the documentation is written directly in text files with extension
`.md`, mainly within folder `docs/content`. Note that several of such files
are autogenerated (e.g. the command documentation, and `docs/content/flags.md`),
or contain autogenerated portions (e.g. the backend documentation under
`docs/content/commands`). These are marked with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
@@ -287,39 +385,40 @@ If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
- Start with the most important information about the option,
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
- Each example value has its own `Help:` field, but they are treated
a bit differently than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like names of
countries, it looks better without an ending period/full stop character.
- Each example value has its own `Help:` field, but they are treated
a bit differently than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like names of
countries, it looks better without an ending period/full stop character.
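As a rough illustration of the rules above (this is not taken from any real
backend - the option name, default and example values are invented), a backend
option following these conventions might look like this:

```go
package example

import "github.com/rclone/rclone/fs"

// exampleOption is a hypothetical backend option showing the Help and
// Examples conventions described above.
var exampleOption = fs.Option{
	Name: "acl_level",
	// First line: one short sentence ending with a full stop.
	// More detail goes in a new paragraph after a blank line.
	Help: `Access level to apply to newly created objects.

This is applied when the object is first uploaded and is not changed
on later syncs.`,
	Default: "private",
	Examples: []fs.OptionExample{{
		Value: "private",
		Help:  "Only the owner can read and write", // no trailing full stop for enum help
	}, {
		Value: "public-read",
		Help:  "Anyone can read, only the owner can write",
	}},
}
```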
The only documentation you need to edit are the `docs/content/*.md`
files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).
Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
line, without a period/full stop character at the end, as it will be
combined unmodified with other information (such as any default value).
If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
## Making a release
@@ -350,13 +449,13 @@ change will get linked into the issue.
Here is an example of a short commit message:
```
```text
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```
```text
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
@@ -379,7 +478,9 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
go get github.com/ncw/new_dependency
```sh
go get github.com/ncw/new_dependency
```
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
@@ -391,7 +492,9 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run
go get golang.org/x/crypto
```sh
go get golang.org/x/crypto
```
Check in a single commit as above.
@@ -434,25 +537,38 @@ remote or an fs.
### Getting going
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- box is a good one to start from if you have a directory-based remote (and
shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
- HTTP based remotes are easiest to maintain if they use rclone's
[lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
make sure we can encode any path name and `rclone info` to help determine the
encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
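For orientation, the heart of `backend/remote/remote.go` is an `init()` function
that registers the backend with rclone. The heavily trimmed, hypothetical sketch
below uses a placeholder name and does nothing useful - copy the real structure
from box or b2 as suggested above:

```go
package pifs

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

// init registers the backend so it appears in `rclone config` and can be
// addressed as pifs:path once imported via backend/all/all.go.
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "pifs",
		Description: "Hypothetical PiFS backend (placeholder)",
		NewFs:       NewFs,
		Options: []fs.Option{
			// backend options go here - see the Help: guidance in the
			// documentation section above
		},
	})
}

// NewFs constructs an fs.Fs from the remote name, root path and config.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	// parse the options, create the Fs object and return it
	return nil, errors.New("not implemented yet")
}
```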
### Guidelines for a speedy merge
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **Don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
if your backend is HTTP based - this adds features like `--dump bodies`,
`--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function
names, layout, structure. **Don't** move stuff around and **Don't** delete the
comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
- **Remember** we have >50 backends to maintain so keeping them as similar as
possible to each other is a high priority!
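To make the first two points concrete, the sketch below shows (with placeholder
names and an invented endpoint) how a backend would typically build its REST
client from rclone's HTTP client so that the flags mentioned above work for free:

```go
package pifs

import (
	"context"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

// newREST wires lib/rest up to fs/fshttp so that --dump bodies, --tpslimit,
// --user-agent etc. apply to every API call the backend makes.
// The endpoint URL is a placeholder.
func newREST(ctx context.Context) *rest.Client {
	httpClient := fshttp.NewClient(ctx) // respects rclone's global HTTP flags
	return rest.NewClient(httpClient).SetRoot("https://api.example.com/v1")
}
```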
### Unit tests
@@ -463,19 +579,20 @@ remote or an fs.
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
- go install ./...
- test_all -backends remote
- Once you've done that then you can use the integration test framework from
the project root:
- go install ./...
- test_all -backends remote
Or if you want to run the integration tests manually:
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
@@ -487,10 +604,13 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/remote.md` - main docs page (note the backend options are
automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your
reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -506,21 +626,22 @@ It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there.
- Add the provider to `providerOption` at the top of the file (see the sketch after this list)
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Make sure this is in alphabetical order in the `Providers` section.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in github
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
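The sketch referred to in the list above is only meant to show the shape of
what gets added to `providerOption` - an extra example value for the provider,
with an invented name here:

```go
package example

import "github.com/rclone/rclone/fs"

// providerExample sketches the kind of entry appended to the provider
// option's Examples list in backend/s3/s3.go. "MyCloud" is an invented
// provider name; keep the real entries in alphabetical order.
var providerExample = fs.OptionExample{
	Value: "MyCloud",
	Help:  "MyCloud Object Storage",
}
```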
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
@@ -541,34 +662,51 @@ For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.
### Usage
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source
of rclone)
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
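A plugin source file can be as small as a `main` package that imports your
backend for its registration side effect. A minimal sketch, assuming a
hypothetical module path:

```go
// Built with: go build -buildmode=plugin -o librcloneplugin_backend_pifs.so .
package main

import (
	// Importing the backend runs its init() function, which registers the
	// backend with rclone. The module path here is hypothetical.
	_ "example.com/you/rcloneplugin/pifs"
)

// main is required to make this a main package but is never run when the
// plugin is loaded by rclone.
func main() {}
```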
## Keeping a backend or command out of tree
Rclone was designed to be modular so it is very easy to keep a backend
or a command out of the main rclone source tree.
So for example if you had a backend which accessed your proprietary
systems or a command which was specialised for your needs you could
add them out of tree.
This may be easier than using a plugin and is supported on all
platforms not just macOS and Linux.
This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).
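For reference, an out-of-tree build along those lines is normally just your own
`main` package pulling in rclone's command layer plus your extra backend - a
minimal sketch, with a hypothetical import path for the out-of-tree backend:

```go
package main

import (
	_ "example.com/you/ram" // hypothetical out-of-tree backend, registers itself in init()

	_ "github.com/rclone/rclone/backend/all" // all the in-tree backends
	"github.com/rclone/rclone/cmd"
	_ "github.com/rclone/rclone/cmd/all" // all the rclone subcommands
)

// main uses the same entry point as the stock rclone binary, so the result
// behaves like a normal rclone build with one extra backend.
func main() {
	cmd.Main()
}
```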


@@ -1,21 +1,47 @@
FROM golang:alpine AS builder
COPY . /go/src/github.com/rclone/rclone/
ARG CGO_ENABLED=0
WORKDIR /go/src/github.com/rclone/rclone/
RUN apk add --no-cache make bash gawk git
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
RUN echo "**** Set Go Environment Variables ****" && \
go env -w GOCACHE=/root/.cache/go-build
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
make \
bash \
gawk \
git
COPY go.mod .
COPY go.sum .
RUN echo "**** Download Go Dependencies ****" && \
go mod download -x
RUN echo "**** Verify Go Dependencies ****" && \
go mod verify
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build,sharing=locked \
echo "**** Build Binary ****" && \
make
RUN echo "**** Print Version Binary ****" && \
./rclone version
# Begin final image
FROM alpine:latest
LABEL org.opencontainers.image.source="https://github.com/rclone/rclone"
RUN apk --no-cache add ca-certificates fuse3 tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
ca-certificates \
fuse3 \
tzdata && \
echo "Enable user_allow_other in fuse" && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/


@@ -1,4 +1,4 @@
# Maintainers guide for rclone #
# Maintainers guide for rclone
Current active maintainers of rclone are:
@@ -24,80 +24,108 @@ Current active maintainers of rclone are:
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**
## This is a work in progress draft
This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.
## Triaging Tickets ##
## Triaging Tickets
When a ticket comes in it should be triaged. This means it should be classified by adding labels and placed into a milestone. Quite a lot of tickets need a bit of back and forth to determine whether it is a valid ticket so tickets may remain without labels or milestone for a while.
When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.
Rclone uses the labels like this:
* `bug` - a definitely verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
* `help wanted` - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released
* `question` - not a `bug` or `enhancement` - direct to the forum for next time
* `Remote: XXX` - which rclone backend this affects
* `thinking` - not decided on the course of action yet
- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
these get shown to new visitors to the project
- `help wanted` - mark these if you find a self-contained issue - these get
shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.
When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
the moment
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.
## Closing Tickets ##
## Closing Tickets
Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.
## Pull requests ##
## Pull requests
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.
Sometimes pull requests need to be left open for a while - this is especially true of contributions of new backends which take a long time to get right.
Sometimes pull requests need to be left open for a while - this is especially true
of contributions of new backends which take a long time to get right.
## Merges ##
## Merges
If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
## Release cycle ##
## Release cycle
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.
High impact regressions should be fixed before the next release.
Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.
Towards the end of the release cycle try not to merge anything too big, so that things can settle down.
Towards the end of the release cycle try not to merge anything too big, so that
things can settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming, often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming, often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
## Mailing list
There is now an invite-only mailing list for rclone developers `rclone-dev` on Google Groups.
There is now an invite-only mailing list for rclone developers `rclone-dev` on
Google Groups.
## TODO ##
## TODO
I should probably make a dev@rclone.org to register with cloud providers.
I should probably make a <dev@rclone.org> to register with cloud providers.

MANUAL.html (generated, 3931 lines) - diff suppressed because it is too large
MANUAL.md (generated, 3115 lines) - diff suppressed because it is too large
MANUAL.txt (generated, 3234 lines) - diff suppressed because it is too large

@@ -243,7 +243,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server --logLevel info -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

README.md (272 lines)

@@ -1,22 +1,6 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
@@ -34,97 +18,105 @@
# Rclone
Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers.
Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service (OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -132,50 +124,54 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers
* Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
* Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
* Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
* Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
* Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
* Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
* Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
* Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
over HTTP/WebDAV/FTP/SFTP/DLNA
## Installation & documentation
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more
## Downloads
* https://rclone.org/downloads/
- <https://rclone.org/downloads/>
License
-------
## License
This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).

View File

@@ -4,52 +4,55 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release
* [gh the github cli](https://github.com/cli/cli) for uploading packages
* pandoc for making the html and man pages
- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages
## Making a release
* git checkout master # see below for stable branch
* git pull # IMPORTANT
* git status - make sure everything is checked in
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push origin # without --follow-tags so it doesn't push the tag if it fails
* git push --follow-tags origin
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make vendorball
* make sign_upload
* make check_sign
* make upload
* make upload_website
* make upload_github
* make startdev # make startstable for stable branch
* # announce with forum post, twitter post, patreon post
- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
## Update dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
go 1.22.0
```text
go 1.22.0
```
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
@@ -57,7 +60,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
```sh
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
@@ -67,7 +70,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done
```
```sh
git co go.mod go.sum
```
@@ -77,12 +80,12 @@ Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -99,7 +102,9 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:
go run github.com/icholy/gomajor@latest list -major
```sh
go run github.com/icholy/gomajor@latest list -major
```
Expect API breakage when updating major versions.
@@ -107,7 +112,9 @@ Expect API breakage when updating major versions.
At some point after the release run
bin/tidy-beta v1.55
```sh
bin/tidy-beta v1.55
```
where the version number is that of a couple ago to remove old beta binaries.
@@ -117,54 +124,64 @@ If rclone needs a point release due to some horrendous bug:
Set vars
* BASE_TAG=v1.XX # e.g. v1.52
* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then
this will be done already.
* git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
* make startstable
- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable
Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* Do the steps as above
* make startstable
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
## Sponsor logos
If updating the website note that the sponsor logos have been moved out of the main repository.
If updating the website note that the sponsor logos have been moved out of the
main repository.
You will need to checkout `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release
git co -b update-website
```sh
git co -b update-website
```
If the branch already exists, double check there are no commits that need saving.
Now reset the branch to the last release
git reset --hard v1.64.0
```sh
git reset --hard v1.64.0
```
Create the changes, check them in, test with `make serve` then
make upload_test_website
```sh
make upload_test_website
```
Check out https://test.rclone.org and when happy
Check out <https://test.rclone.org> and when happy
make upload_website
```sh
make upload_website
```
Cherry pick any changes back to master and the stable branch if it is active.
@@ -172,14 +189,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally:
```
```sh
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```
To test the multiplatform build
```
```sh
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
@@ -187,6 +204,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```
```sh
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View File

@@ -1 +1 @@
v1.70.0
v1.71.0

View File

@@ -14,10 +14,12 @@ import (
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/doi"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filelu"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@@ -19,6 +19,7 @@ import (
"net/url"
"os"
"path"
"slices"
"sort"
"strconv"
"strings"
@@ -43,13 +44,14 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
@@ -71,6 +73,7 @@ const (
emulatorAccount = "devstoreaccount1"
emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
sasCopyValidity = time.Hour // how long SAS should last when doing server side copy
)
var (
@@ -558,6 +561,11 @@ type Fs struct {
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
publicAccess container.PublicAccessType // Container Public Access Level
// user delegation cache
userDelegationMu sync.Mutex
userDelegation *service.UserDelegationCredential
userDelegationExpiry time.Time
}
// Object describes an azure object
@@ -611,6 +619,9 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
containerName, containerPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
if f.opt.DirectoryMarkers && strings.HasSuffix(containerPath, "//") {
containerPath = containerPath[:len(containerPath)-1]
}
return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath)
}
@@ -656,19 +667,33 @@ func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// FIXME interpret special errors - more to do here
if storageErr, ok := err.(*azcore.ResponseError); ok {
var storageErr *azcore.ResponseError
if errors.As(err, &storageErr) {
// General errors from:
// https://learn.microsoft.com/en-us/rest/api/storageservices/common-rest-api-error-codes
// Blob specific errors from:
// https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-error-codes
switch storageErr.ErrorCode {
case "InvalidBlobOrBlock":
// These errors happen sometimes in multipart uploads
// because of block concurrency issues
return true, err
case "InternalError":
// The server encountered an internal error. Please retry the request.
return true, err
case "OperationTimedOut":
// The operation could not be completed within the permitted time. The
// operation may or may not have succeeded on the server side. Please query
// the server state before retrying the operation.
return true, err
case "ServerBusy":
// The server is currently unable to receive requests. Please retry your
// request.
return true, err
}
statusCode := storageErr.StatusCode
for _, e := range retryErrorCodes {
if statusCode == e {
return true, err
}
if slices.Contains(retryErrorCodes, statusCode) {
return true, err
}
}
return fserrors.ShouldRetry(err), err
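The new `shouldRetry` logic above classifies transient Azure failures in two ways: by error-code string (`InvalidBlobOrBlock`, `InternalError`, `OperationTimedOut`, `ServerBusy`) and by HTTP status via `slices.Contains`. A self-contained sketch of that pattern follows; the `responseError` type and the status-code list are illustrative stand-ins for `*azcore.ResponseError` and the backend's `retryErrorCodes`, not rclone's actual code.

```go
// Minimal sketch of retry classification, assuming a hypothetical
// responseError type in place of *azcore.ResponseError.
package main

import (
	"errors"
	"fmt"
	"net/http"
	"slices"
)

// responseError stands in for the SDK error type (hypothetical).
type responseError struct {
	ErrorCode  string
	StatusCode int
}

func (e *responseError) Error() string { return e.ErrorCode }

// Illustrative list of retryable HTTP statuses.
var retryErrorCodes = []int{
	http.StatusTooManyRequests,
	http.StatusInternalServerError,
	http.StatusBadGateway,
	http.StatusServiceUnavailable,
	http.StatusGatewayTimeout,
}

// shouldRetry reports whether the error looks transient.
func shouldRetry(err error) bool {
	var re *responseError
	if !errors.As(err, &re) {
		return false
	}
	switch re.ErrorCode {
	case "InvalidBlobOrBlock", "InternalError", "OperationTimedOut", "ServerBusy":
		return true
	}
	return slices.Contains(retryErrorCodes, re.StatusCode)
}

func main() {
	err := fmt.Errorf("upload failed: %w", &responseError{ErrorCode: "ServerBusy", StatusCode: 503})
	fmt.Println(shouldRetry(err)) // true
}
```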
@@ -913,6 +938,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
case opt.ClientID != "" && opt.Tenant != "" && opt.Username != "" && opt.Password != "":
// User with username and password
//nolint:staticcheck // this is deprecated due to Azure policy
options := azidentity.UsernamePasswordCredentialOptions{
ClientOptions: policyClientOptions,
}
@@ -965,6 +991,38 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
f.cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
f.cred, err = azidentity.NewAzureCLICredential(&options)
@@ -1198,7 +1256,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
}
remote = remote[len(prefix):]
if addContainer {
@@ -1363,7 +1421,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
containerName, directory := f.split(dir)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
listR := func(containerName, directory, prefix string, addContainer bool) error {
return f.list(ctx, containerName, directory, prefix, addContainer, true, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1519,7 +1577,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
@@ -1669,6 +1727,38 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteContainer(ctx, container)
}
// Get a user delegation which is valid for at least sasCopyValidity
//
// This value is cached in f
func (f *Fs) getUserDelegation(ctx context.Context) (*service.UserDelegationCredential, error) {
f.userDelegationMu.Lock()
defer f.userDelegationMu.Unlock()
if f.userDelegation != nil && time.Until(f.userDelegationExpiry) > sasCopyValidity {
return f.userDelegation, nil
}
// Validity window
start := time.Now().UTC()
expiry := start.Add(2 * sasCopyValidity)
startStr := start.Format(time.RFC3339)
expiryStr := expiry.Format(time.RFC3339)
// Acquire user delegation key from the service client
info := service.KeyInfo{
Start: &startStr,
Expiry: &expiryStr,
}
userDelegationKey, err := f.svc.GetUserDelegationCredential(ctx, info, nil)
if err != nil {
return nil, fmt.Errorf("failed to get user delegation key: %w", err)
}
f.userDelegation = userDelegationKey
f.userDelegationExpiry = expiry
return f.userDelegation, nil
}
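`getUserDelegation` above refreshes the delegation key only when less than `sasCopyValidity` of its lifetime remains, and always issues it for twice that window so there is headroom for in-flight copies. A toy version of that caching rule, with illustrative names rather than the Azure SDK types, looks like this:

```go
// Sketch of the expiring-credential cache pattern; names are illustrative.
package main

import (
	"fmt"
	"sync"
	"time"
)

const validity = time.Hour // stands in for sasCopyValidity

type cachedCred struct {
	mu     sync.Mutex
	value  string
	expiry time.Time
}

// get returns the cached credential, re-issuing it when less than
// `validity` of its lifetime is left.
func (c *cachedCred) get(issue func(expiry time.Time) string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.value != "" && time.Until(c.expiry) > validity {
		return c.value
	}
	c.expiry = time.Now().Add(2 * validity)
	c.value = issue(c.expiry)
	return c.value
}

func main() {
	var c cachedCred
	cred := c.get(func(expiry time.Time) string {
		return "key-valid-until-" + expiry.Format(time.RFC3339)
	})
	fmt.Println(cred)
}
```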
// getAuth gets auth to copy o.
//
// tokenOK is used to signal that token based auth (Microsoft Entra
@@ -1680,7 +1770,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// URL (not a SAS) and token will be empty.
//
// If tokenOK is true it may also return a token for the auth.
func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL string, token *string, err error) {
func (o *Object) getAuth(ctx context.Context, noAuth bool) (srcURL string, err error) {
f := o.fs
srcBlobSVC := o.getBlobSVC()
srcURL = srcBlobSVC.URL()
@@ -1689,29 +1779,47 @@ func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL
case noAuth:
// If same storage account then no auth needed
case f.cred != nil:
if !tokenOK {
return srcURL, token, errors.New("not supported: Microsoft Entra ID")
}
options := policy.TokenRequestOptions{}
accessToken, err := f.cred.GetToken(ctx, options)
// Generate a User Delegation SAS URL using Azure AD credentials
userDelegationKey, err := f.getUserDelegation(ctx)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create access token: %w", err)
return "", fmt.Errorf("sas creation: %w", err)
}
token = &accessToken.Token
// Build the SAS values
perms := sas.BlobPermissions{Read: true}
container, containerPath := o.split()
start := time.Now().UTC()
expiry := start.Add(sasCopyValidity)
vals := sas.BlobSignatureValues{
StartTime: start,
ExpiryTime: expiry,
Permissions: perms.String(),
ContainerName: container,
BlobName: containerPath,
}
// Sign with the delegation key
queryParameters, err := vals.SignWithUserDelegation(userDelegationKey)
if err != nil {
return "", fmt.Errorf("signing SAS with user delegation failed: %w", err)
}
// Append the SAS to the URL
srcURL = srcBlobSVC.URL() + "?" + queryParameters.Encode()
case f.sharedKeyCred != nil:
// Generate a short lived SAS URL if using shared key credentials
expiry := time.Now().Add(time.Hour)
expiry := time.Now().Add(sasCopyValidity)
sasOptions := blob.GetSASURLOptions{}
srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create SAS URL: %w", err)
return srcURL, fmt.Errorf("failed to create SAS URL: %w", err)
}
case f.anonymous || f.opt.SASURL != "":
// If using a SASURL or anonymous, no need for any extra auth
default:
return srcURL, token, errors.New("unknown authentication type")
return srcURL, errors.New("unknown authentication type")
}
return srcURL, token, nil
return srcURL, nil
}
// Do multipart parallel copy.
@@ -1732,7 +1840,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
o.fs = f
o.remote = remote
srcURL, token, err := src.getAuth(ctx, true, false)
srcURL, err := src.getAuth(ctx, false)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
@@ -1753,7 +1861,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
var (
srcSize = src.size
partSize = int64(chunksize.Calculator(o, src.size, blockblob.MaxBlocks, f.opt.ChunkSize))
numParts = (srcSize-1)/partSize + 1
numParts = (srcSize + partSize - 1) / partSize
blockIDs = make([]string, numParts) // list of blocks for finalize
g, gCtx = errgroup.WithContext(ctx)
checker = newCheckForInvalidBlockOrBlob("copy", o)
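Both the old and new `numParts` expressions are integer ceiling division; they differ only for a zero-length source, where `(srcSize + partSize - 1) / partSize` gives 0 parts while the old form gave 1. A quick illustrative check:

```go
// Compare the two ceiling-division forms for a few sizes (illustrative only).
package main

import "fmt"

func main() {
	partSize := int64(4)
	for _, srcSize := range []int64{0, 1, 4, 8, 10} {
		oldParts := (srcSize-1)/partSize + 1
		newParts := (srcSize + partSize - 1) / partSize
		fmt.Printf("size=%2d old=%d new=%d\n", srcSize, oldParts, newParts)
	}
}
```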
@@ -1776,7 +1884,8 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
Count: partSize,
},
// Specifies the authorization scheme and signature for the copy source.
CopySourceAuthorization: token,
// We use SAS URLs as this doesn't seem to work always
// CopySourceAuthorization: token,
// CPKInfo *blob.CPKInfo
// CPKScopeInfo *blob.CPKScopeInfo
}
@@ -1846,7 +1955,7 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
dstBlobSVC := f.getBlobSVC(dstContainer, dstPath)
// Get the source auth - none needed for same storage account
srcURL, _, err := src.getAuth(ctx, false, f == src.fs)
srcURL, err := src.getAuth(ctx, f == src.fs)
if err != nil {
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}
@@ -1872,7 +1981,11 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
pollTime := 100 * time.Millisecond
for copyStatus != nil && string(*copyStatus) == string(container.CopyStatusTypePending) {
time.Sleep(pollTime)
getMetadata, err := dstBlobSVC.GetProperties(ctx, &getOptions)
var getMetadata blob.GetPropertiesResponse
err = f.pacer.Call(func() (bool, error) {
getMetadata, err = dstBlobSVC.GetProperties(ctx, &getOptions)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
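Routing the copy-status poll through `pacer.Call` means each `GetProperties` now shares the backend's rate limiting and retry classification. The calling shape is sketched below with a toy pacer standing in for rclone's `fs.Pacer`; the names and timings are illustrative.

```go
// Toy pacer showing the Call(func() (retry bool, err error)) pattern.
package main

import (
	"errors"
	"fmt"
	"time"
)

type toyPacer struct {
	sleep    time.Duration
	maxTries int
}

// Call retries fn while it reports the error as retryable.
func (p *toyPacer) Call(fn func() (retry bool, err error)) error {
	var err error
	for try := 0; try < p.maxTries; try++ {
		var retry bool
		retry, err = fn()
		if err == nil || !retry {
			return err
		}
		time.Sleep(p.sleep)
	}
	return err
}

func main() {
	pacer := &toyPacer{sleep: 10 * time.Millisecond, maxTries: 5}
	attempts := 0
	err := pacer.Call(func() (bool, error) {
		attempts++
		if attempts < 3 {
			return true, errors.New("ServerBusy") // pretend transient error
		}
		return false, nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```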
@@ -2157,11 +2270,6 @@ func (o *Object) getTags() (tags map[string]string) {
// getBlobSVC creates a blob client
func (o *Object) getBlobSVC() *blob.Client {
container, directory := o.split()
// If we are trying to remove an all / directory marker then
// this will have one / too many now.
if bucket.IsAllSlashes(o.remote) {
directory = strings.TrimSuffix(directory, "/")
}
return o.fs.getBlobSVC(container, directory)
}
@@ -2563,6 +2671,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return -1, err
}
// Only account after the checksum reads have been done
if do, ok := reader.(pool.DelayAccountinger); ok {
// To figure out this number, do a transfer and if the accounted size is 0 or a
// multiple of what it should be, increase or decrease this number.
do.DelayAccounting(2)
}
// Upload the block, with MD5 for check
m := md5.New()
currentChunkSize, err := io.Copy(m, reader)
@@ -2650,6 +2765,8 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
blockList blockblob.GetBlockListResponse
properties *blob.GetPropertiesResponse
options *blockblob.CommitBlockListOptions
// Use temporary pacer as this can be called recursively which can cause a deadlock with --max-connections
pacer = fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
)
properties, err = o.readMetaDataAlways(ctx)
@@ -2661,7 +2778,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
if objectExists {
// Get the committed block list
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
blockList, err = blockBlobSVC.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
return o.fs.shouldRetry(ctx, err)
})
@@ -2703,7 +2820,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
// Commit only the committed blocks
fs.Debugf(o, "Committing %d blocks to remove uncommitted blocks", len(blockIDs))
err = o.fs.pacer.Call(func() (bool, error) {
err = pacer.Call(func() (bool, error) {
_, err := blockBlobSVC.CommitBlockList(ctx, blockIDs, options)
return o.fs.shouldRetry(ctx, err)
})
@@ -2844,6 +2961,9 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
return ui, err
}
}
// if ui.isDirMarker && strings.HasSuffix(containerPath, "//") {
// containerPath = containerPath[:len(containerPath)-1]
// }
// Update Mod time
o.updateMetadataWithModTime(src.ModTime(ctx))

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
//go:build !plan9 && !solaris && !js && !wasm
package azureblob

View File

@@ -1,7 +1,7 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris || js
//go:build plan9 || solaris || js || wasm
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles
@@ -453,7 +453,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
options := azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
@@ -516,6 +516,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
}
case opt.ClientID != "" && opt.Tenant != "" && opt.Username != "" && opt.Password != "":
// User with username and password
//nolint:staticcheck // this is deprecated due to Azure policy
options := azidentity.UsernamePasswordCredentialOptions{
ClientOptions: policyClientOptions,
}
@@ -549,7 +550,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
case opt.UseMSI:
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)
var b2i = map[bool]int{false: 0, true: 1}
b2i := map[bool]int{false: 0, true: 1}
set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
if set > 1 {
return nil, errors.New("more than one user-assigned identity ID is set")
@@ -568,6 +569,37 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
default:
return nil, errors.New("no authentication method configured")
}
@@ -822,7 +854,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
return entries, err
}
var opt = &directory.ListFilesAndDirectoriesOptions{
opt := &directory.ListFilesAndDirectoriesOptions{
Include: directory.ListFilesInclude{
Timestamps: true,
},
@@ -921,7 +953,7 @@ func (o *Object) setMetadata(resp *file.GetPropertiesResponse) {
}
}
// readMetaData gets the metadata if it hasn't already been fetched
// getMetadata gets the metadata if it hasn't already been fetched
func (o *Object) getMetadata(ctx context.Context) error {
resp, err := o.fileClient().GetProperties(ctx, nil)
if err != nil {
@@ -981,6 +1013,10 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
SMBProperties: &file.SMBProperties{
LastWriteTime: &t,
},
HTTPHeaders: &file.HTTPHeaders{
ContentMD5: o.md5,
ContentType: &o.contentType,
},
}
_, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
if err != nil {

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package azurefiles
@@ -61,7 +61,7 @@ const chars = "abcdefghijklmnopqrstuvwzyxABCDEFGHIJKLMNOPQRSTUVWZYX"
func randomString(charCount int) string {
strBldr := strings.Builder{}
for i := 0; i < charCount; i++ {
for range charCount {
randPos := rand.Int63n(52)
strBldr.WriteByte(chars[randPos])
}

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package azurefiles

View File

@@ -1,7 +1,7 @@
// Build for azurefiles for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
//go:build plan9 || js || wasm
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles

View File

@@ -130,10 +130,10 @@ type AuthorizeAccountResponse struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.

View File

@@ -16,6 +16,7 @@ import (
"io"
"net/http"
"path"
"slices"
"strconv"
"strings"
"sync"
@@ -30,8 +31,8 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/multipart"
@@ -300,14 +301,13 @@ type Fs struct {
// Object describes a b2 object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // The object metadata if known - may be nil - with lower case keys
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
}
// ------------------------------------------------------------
@@ -590,12 +590,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
for _, capability := range f.info.Allowed.Capabilities {
if capability == permission {
return true
}
}
return false
return slices.Contains(f.info.Allowed.Capabilities, permission)
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -923,7 +918,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
last := ""
return f.list(ctx, bucket, directory, prefix, addBucket, true, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
@@ -1276,7 +1271,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
toBeDeleted := make(chan *api.File, f.ci.Transfers)
var wg sync.WaitGroup
wg.Add(f.ci.Transfers)
for i := 0; i < f.ci.Transfers; i++ {
for range f.ci.Transfers {
go func() {
defer wg.Done()
for object := range toBeDeleted {
@@ -1605,9 +1600,6 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
if err != nil {
return err
}
// For now, just set "mtime" in metadata
o.meta = make(map[string]string, 1)
o.meta["mtime"] = o.modTime.Format(time.RFC3339Nano)
return nil
}
@@ -1681,6 +1673,21 @@ func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
return o.getMetaDataListing(ctx)
}
}
// If using versionAt we need to list the find the correct version.
if o.fs.opt.VersionAt.IsSet() {
info, err := o.getMetaDataListing(ctx)
if err != nil {
return nil, err
}
if info.Action == "hide" {
// Return object not found error if the current version is deleted.
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
_, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err
}
@@ -1887,20 +1894,18 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
Info: Info,
}
// Embryonic metadata support - just mtime
o.meta = make(map[string]string, 1)
modTime, err := parseTimeStringHelper(info.Info[timeKey])
if err == nil {
o.meta["mtime"] = modTime.Format(time.RFC3339Nano)
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
// length read from the listing.
// Additionally, the official examples return S3 headers
// instead of native, i.e. no file ID, use ones from listing.
if info.Size < 0 {
info.Size = o.size
}
if info.ID == "" {
info.ID = o.id
}
return resp, info, nil
}
@@ -1950,7 +1955,7 @@ func init() {
// urlEncode encodes in with % encoding
func urlEncode(in string) string {
var out bytes.Buffer
for i := 0; i < len(in); i++ {
for i := range len(in) {
c := in[i]
if noNeedToEncode[c] {
_ = out.WriteByte(c)
@@ -2271,7 +2276,7 @@ See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
},
}
func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
var newRule api.LifecycleRule
if daysStr := opt["daysFromHidingToDeleting"]; daysStr != "" {
days, err := strconv.Atoi(daysStr)
@@ -2360,7 +2365,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
@@ -2383,7 +2388,7 @@ it would do.
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
@@ -2402,7 +2407,7 @@ var commandHelp = []fs.CommandHelp{
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)

View File

@@ -258,12 +258,6 @@ func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string
assert.Equal(t, v, got, k)
}
// mtime
for k, v := range metadata {
got := o.meta[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
@@ -452,14 +446,14 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
t.Run("NewObject", func(t *testing.T) {
gotObj, gotErr := f.NewObject(ctx, fileName)
assert.Equal(t, test.wantErr, gotErr)
if gotErr == nil {
assert.Equal(t, test.wantSize, gotObj.Size())
}
})
})
}
})

View File

@@ -478,17 +478,14 @@ func (up *largeUpload) Copy(ctx context.Context) (err error) {
remaining = up.size
)
g.SetLimit(up.f.opt.UploadConcurrency)
for part := 0; part < up.parts; part++ {
for part := range up.parts {
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in copying all the other parts.
if gCtx.Err() != nil {
break
}
reqSize := remaining
if reqSize >= up.chunkSize {
reqSize = up.chunkSize
}
reqSize := min(remaining, up.chunkSize)
part := part // for the closure
g.Go(func() (err error) {

View File

@@ -271,9 +271,9 @@ type User struct {
ModifiedAt time.Time `json:"modified_at"`
Language string `json:"language"`
Timezone string `json:"timezone"`
SpaceAmount int64 `json:"space_amount"`
SpaceUsed int64 `json:"space_used"`
MaxUploadSize int64 `json:"max_upload_size"`
SpaceAmount float64 `json:"space_amount"`
SpaceUsed float64 `json:"space_used"`
MaxUploadSize float64 `json:"max_upload_size"`
Status string `json:"status"`
JobTitle string `json:"job_title"`
Phone string `json:"phone"`

View File

@@ -237,8 +237,8 @@ func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *boxCustomC
return claims, nil
}
func getSigningHeaders(boxConfig *api.ConfigJSON) map[string]interface{} {
signingHeaders := map[string]interface{}{
func getSigningHeaders(boxConfig *api.ConfigJSON) map[string]any {
signingHeaders := map[string]any{
"kid": boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
}
return signingHeaders
@@ -1343,12 +1343,8 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
nextStreamPosition = streamPosition
for {
limit := f.opt.ListChunk
// box only allows a max of 500 events
if limit > 500 {
limit = 500
}
limit := min(f.opt.ListChunk, 500)
opts := rest.Opts{
Method: "GET",

View File

@@ -105,7 +105,7 @@ func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api
const defaultDelay = 10
var tries int
outer:
for tries = 0; tries < maxTries; tries++ {
for tries = range maxTries {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
@@ -203,7 +203,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, direct
errs := make(chan error, 1)
var wg sync.WaitGroup
outer:
for part := 0; part < session.TotalParts; part++ {
for part := range session.TotalParts {
// Check any errors
select {
case err = <-errs:
@@ -211,10 +211,7 @@ outer:
default:
}
reqSize := remaining
if reqSize >= chunkSize {
reqSize = chunkSize
}
reqSize := min(remaining, chunkSize)
// Make a block of memory
buf := make([]byte, reqSize)

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
// Package cache implements a virtual provider to cache existing remotes.
package cache
@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
@@ -1086,13 +1087,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return cachedEntries, nil
}
func (f *Fs) recurse(ctx context.Context, dir string, list *walk.ListRHelper) error {
func (f *Fs) recurse(ctx context.Context, dir string, list *list.Helper) error {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
for i := 0; i < len(entries); i++ {
for i := range entries {
innerDir, ok := entries[i].(fs.Directory)
if ok {
err := f.recurse(ctx, innerDir.Remote(), list)
@@ -1138,7 +1139,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
// if we're here, we're gonna do a standard recursive traversal and cache everything
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
err = f.recurse(ctx, dir, list)
if err != nil {
return err
@@ -1428,7 +1429,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
}()
// wait until both are done
for c := 0; c < 2; c++ {
for range 2 {
<-done
}
}
@@ -1753,7 +1754,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
// Stats returns stats about the cache storage
func (f *Fs) Stats() (map[string]map[string]interface{}, error) {
func (f *Fs) Stats() (map[string]map[string]any, error) {
return f.cache.Stats()
}
@@ -1933,7 +1934,7 @@ var commandHelp = []fs.CommandHelp{
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (any, error) {
switch name {
case "stats":
return f.Stats()

View File

@@ -1,4 +1,4 @@
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test
@@ -360,7 +360,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size())
for i := 0; i < len(checkSample); i++ {
for i := range checkSample {
require.Equal(t, testData[i], checkSample[i])
}
}
@@ -387,7 +387,7 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
readData, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err)
for i := 0; i < len(readData); i++ {
for i := range readData {
require.Equalf(t, testData[i], readData[i], "at byte %v", i)
}
}
@@ -688,7 +688,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
co, ok := o.(*cache.Object)
require.True(t, ok)
for i := 0; i < 4; i++ { // read first 4
for i := range 4 { // read first 4
_ = runInstance.readDataFromObj(t, co, chunkSize*int64(i), chunkSize*int64(i+1), false)
}
cfs.CleanUpCache(true)
@@ -971,7 +971,7 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
f, err := os.CreateTemp("", "rclonecache-tempfile")
require.NoError(t, err)
for i := 0; i < int(cnt); i++ {
for range int(cnt) {
data := randStringBytes(int(chunk))
_, _ = f.Write(data)
}
@@ -1085,9 +1085,9 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
return err
}
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) {
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]any, error) {
var err error
var l []interface{}
var l []any
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
@@ -1215,7 +1215,7 @@ func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) ch
var err error
var state cache.BackgroundUploadState
for i := 0; i < 2; i++ {
for range 2 {
select {
case state = <-buCh:
// continue
@@ -1293,7 +1293,7 @@ func (r *run) completeAllBackgroundUploads(t *testing.T, f fs.Fs, lastRemote str
func (r *run) retryBlock(block func() error, maxRetries int, rate time.Duration) error {
var err error
for i := 0; i < maxRetries; i++ {
for range maxRetries {
err = block()
if err == nil {
return nil
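
Many hunks in this change set are mechanical Go modernisations: "for i := 0; i < n; i++" becomes "for i := range n" (integer ranges, Go 1.22) or "for range n" when the index is unused, and hand-rolled clamping becomes the built-in min (Go 1.21). A small self-contained illustration of the three rewrites:

// Illustration of the loop and min() rewrites applied throughout this change set.
package example

func sum(xs []int) (total int) {
	for i := range len(xs) { // Go 1.22: range over an integer
		total += xs[i]
	}
	return total
}

func retry(times int, op func() error) (err error) {
	for range times { // the index is not needed at all
		if err = op(); err == nil {
			return nil
		}
	}
	return err
}

func chunkLen(sizeLeft, chunkSize int64) int64 {
	return min(sizeLeft, chunkSize) // Go 1.21 built-in, replaces the if-clamp
}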


@@ -1,6 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test
@@ -17,7 +17,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata", "ListP"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache


@@ -1,7 +1,7 @@
// Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
//go:build plan9 || js || wasm
// Package cache implements a virtual provider to cache existing remotes.
package cache


@@ -1,4 +1,4 @@
//go:build !plan9 && !js && !race
//go:build !plan9 && !js && !wasm && !race
package cache_test
@@ -162,7 +162,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
randInstance := rand.New(rand.NewSource(time.Now().Unix()))
lastFile := ""
for i := 0; i < totalFiles; i++ {
for i := range totalFiles {
size := int64(randInstance.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"


@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache


@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache
@@ -182,7 +182,7 @@ func (r *Handle) queueOffset(offset int64) {
}
}
for i := 0; i < r.workers; i++ {
for i := range r.workers {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
@@ -222,7 +222,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're gonna give the workers a chance to pickup the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
for i := range r.cacheFs().opt.ReadRetries * 8 {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true


@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache

backend/cache/plex.go

@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache
@@ -209,7 +209,7 @@ func (p *plexConnector) authenticate() error {
if err != nil {
return err
}
var data map[string]interface{}
var data map[string]any
err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil {
return fmt.Errorf("failed to obtain token: %w", err)
@@ -273,11 +273,11 @@ func (p *plexConnector) isPlaying(co *Object) bool {
}
// adapted from: https://stackoverflow.com/a/28878037 (credit)
func get(m interface{}, path ...interface{}) (interface{}, bool) {
func get(m any, path ...any) (any, bool) {
for _, p := range path {
switch idx := p.(type) {
case string:
if mm, ok := m.(map[string]interface{}); ok {
if mm, ok := m.(map[string]any); ok {
if val, found := mm[idx]; found {
m = val
continue
@@ -285,7 +285,7 @@ func get(m interface{}, path ...interface{}) (interface{}, bool) {
}
return nil, false
case int:
if mm, ok := m.([]interface{}); ok {
if mm, ok := m.([]any); ok {
if len(mm) > idx {
m = mm[idx]
continue
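
For reference, the generic get helper above walks a decoded JSON document by alternating map keys and slice indexes. A hedged sketch of a call site as it could appear inside the cache package; the JSON shape and key names are illustrative only, not the real Plex schema:

// Hypothetical call site for get(); assumes encoding/json and fmt are imported.
func exampleGet() {
	var doc any
	_ = json.Unmarshal([]byte(`{"MediaContainer":{"Metadata":[{"size":3}]}}`), &doc)
	if v, ok := get(doc, "MediaContainer", "Metadata", 0, "size"); ok {
		fmt.Println(v) // prints 3 (decoded as float64 by encoding/json)
	}
}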


@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache


@@ -1,4 +1,4 @@
//go:build !plan9 && !js
//go:build !plan9 && !js && !wasm
package cache
@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
bolt "go.etcd.io/bbolt"
"go.etcd.io/bbolt/errors"
)
// Constants
@@ -597,7 +598,7 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
})
if err != nil {
if err == bolt.ErrDatabaseNotOpen {
if err == errors.ErrDatabaseNotOpen {
// we're likely a late janitor and we need to end quietly as there's no guarantee of what exists anymore
return
}
@@ -606,16 +607,16 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
}
// Stats returns a go map with the stats key values
func (b *Persistent) Stats() (map[string]map[string]interface{}, error) {
r := make(map[string]map[string]interface{})
r["data"] = make(map[string]interface{})
func (b *Persistent) Stats() (map[string]map[string]any, error) {
r := make(map[string]map[string]any)
r["data"] = make(map[string]any)
r["data"]["oldest-ts"] = time.Now()
r["data"]["oldest-file"] = ""
r["data"]["newest-ts"] = time.Now()
r["data"]["newest-file"] = ""
r["data"]["total-chunks"] = 0
r["data"]["total-size"] = int64(0)
r["files"] = make(map[string]interface{})
r["files"] = make(map[string]any)
r["files"]["oldest-ts"] = time.Now()
r["files"]["oldest-name"] = ""
r["files"]["newest-ts"] = time.Now()


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
//go:build !plan9 && !js && !wasm
package cache


@@ -356,7 +356,8 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker to skip files
f.features.ListR = nil // Recursive listing may cause chunker to skip files
f.features.ListP = nil // ListP not supported yet
return f, err
}
@@ -632,7 +633,7 @@ func (f *Fs) parseChunkName(filePath string) (parentPath string, chunkNo int, ct
// forbidChunk prints error message or raises error if file is chunk.
// First argument sets log prefix, use `false` to suppress message.
func (f *Fs) forbidChunk(o interface{}, filePath string) error {
func (f *Fs) forbidChunk(o any, filePath string) error {
if parentPath, _, _, _ := f.parseChunkName(filePath); parentPath != "" {
if f.opt.FailHard {
return fmt.Errorf("chunk overlap with %q", parentPath)
@@ -680,7 +681,7 @@ func (f *Fs) newXactID(ctx context.Context, filePath string) (xactID string, err
circleSec := unixSec % closestPrimeZzzzSeconds
first4chars := strconv.FormatInt(circleSec, 36)
for tries := 0; tries < maxTransactionProbes; tries++ {
for range maxTransactionProbes {
f.xactIDMutex.Lock()
randomness := f.xactIDRand.Int63n(maxTwoBase36Digits + 1)
f.xactIDMutex.Unlock()
@@ -1189,10 +1190,7 @@ func (f *Fs) put(
}
tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactID)
size := c.sizeLeft
if size > c.chunkSize {
size = c.chunkSize
}
size := min(c.sizeLeft, c.chunkSize)
savedReadCount := c.readCount
// If a single chunk is expected, avoid the extra rename operation
@@ -1477,10 +1475,7 @@ func (c *chunkingReader) dummyRead(in io.Reader, size int64) error {
const bufLen = 1048576 // 1 MiB
buf := make([]byte, bufLen)
for size > 0 {
n := size
if n > bufLen {
n = bufLen
}
n := min(size, bufLen)
if _, err := io.ReadFull(in, buf[0:n]); err != nil {
return err
}
@@ -1866,6 +1861,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// baseMove chains to the wrapped Move or simulates it by Copy+Delete
func (f *Fs) baseMove(ctx context.Context, src fs.Object, remote string, delMode int) (fs.Object, error) {
ctx, ci := fs.AddConfig(ctx)
ci.NameTransform = nil // ensure operations.Move does not double-transform here
var (
dest fs.Object
err error
@@ -2480,7 +2477,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
if len(data) > maxMetadataSizeWritten {
return nil, false, ErrMetaTooBig
}
if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
if len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
return nil, false, errors.New("invalid json")
}
var metadata metaSimpleJSON


@@ -40,7 +40,7 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
})
}
type settings map[string]interface{}
type settings map[string]any
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, path string, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash


@@ -46,6 +46,7 @@ func TestIntegration(t *testing.T) {
"DirCacheFlush",
"UserInfo",
"Disconnect",
"ListP",
},
}
if *fstest.RemoteName == "" {


@@ -18,7 +18,7 @@ type CloudinaryEncoder interface {
ToStandardPath(string) string
// ToStandardName takes name in this encoding and converts
// it in Standard encoding.
ToStandardName(string) string
ToStandardName(string, string) string
// Encoded root of the remote (as passed into NewFs)
FromStandardFullPath(string) string
}


@@ -8,7 +8,9 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path"
"slices"
"strconv"
"strings"
"time"
@@ -103,19 +105,39 @@ func init() {
Advanced: true,
Help: "Wait N seconds for eventual consistency of the databases that support the backend operation",
},
{
Name: "adjust_media_files_extensions",
Default: true,
Advanced: true,
Help: "Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems",
},
{
Name: "media_extensions",
Default: []string{
"3ds", "3g2", "3gp", "ai", "arw", "avi", "avif", "bmp", "bw",
"cr2", "cr3", "djvu", "dng", "eps3", "fbx", "flif", "flv", "gif",
"glb", "gltf", "hdp", "heic", "heif", "ico", "indd", "jp2", "jpe",
"jpeg", "jpg", "jxl", "jxr", "m2ts", "mov", "mp4", "mpeg", "mts",
"mxf", "obj", "ogv", "pdf", "ply", "png", "psd", "svg", "tga",
"tif", "tiff", "ts", "u3ma", "usdz", "wdp", "webm", "webp", "wmv"},
Advanced: true,
Help: "Cloudinary supported media extensions",
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
CloudName string `config:"cloud_name"`
APIKey string `config:"api_key"`
APISecret string `config:"api_secret"`
UploadPrefix string `config:"upload_prefix"`
UploadPreset string `config:"upload_preset"`
Enc encoder.MultiEncoder `config:"encoding"`
EventuallyConsistentDelay fs.Duration `config:"eventually_consistent_delay"`
CloudName string `config:"cloud_name"`
APIKey string `config:"api_key"`
APISecret string `config:"api_secret"`
UploadPrefix string `config:"upload_prefix"`
UploadPreset string `config:"upload_preset"`
Enc encoder.MultiEncoder `config:"encoding"`
EventuallyConsistentDelay fs.Duration `config:"eventually_consistent_delay"`
MediaExtensions []string `config:"media_extensions"`
AdjustMediaFilesExtensions bool `config:"adjust_media_files_extensions"`
}
// Fs represents a remote cloudinary server
@@ -203,6 +225,18 @@ func (f *Fs) FromStandardPath(s string) string {
// FromStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardName(s string) string {
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(s)
ext := ""
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
s = strings.TrimSuffix(parsedURL.Path, ext)
}
}
}
return strings.ReplaceAll(f.opt.Enc.FromStandardName(s), "&", "\uFF06")
}
@@ -212,8 +246,20 @@ func (f *Fs) ToStandardPath(s string) string {
}
// ToStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardName(s string) string {
return strings.ReplaceAll(f.opt.Enc.ToStandardName(s), "\uFF06", "&")
func (f *Fs) ToStandardName(s string, assetURL string) string {
ext := ""
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(assetURL)
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if !slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
ext = ""
}
}
}
return strings.ReplaceAll(f.opt.Enc.ToStandardName(s), "\uFF06", "&") + ext
}
// FromStandardFullPath encodes a full path to Cloudinary standard
@@ -331,10 +377,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
}
for _, asset := range results.Assets {
remote := api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName)
if dir != "" {
remote = path.Join(dir, api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName))
}
remote := path.Join(dir, api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName, asset.SecureURL))
o := &Object{
fs: f,
remote: remote,
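
The net effect of the cloudinary changes above is a round trip: FromStandardName strips a recognised media extension before the name is sent to Cloudinary (which stores the format as an attribute), and ToStandardName re-derives the extension from the asset's secure URL when listing. A behavioural sketch only, not the backend code; the extension list and names are example values:

// Rough model of what adjust_media_files_extensions does to names.
package example

import (
	"path"
	"slices"
	"strings"
)

var mediaExts = []string{"jpg", "png", "mp4"} // abbreviated stand-in for media_extensions

// toCloudinary mimics FromStandardName: drop a recognised media extension.
func toCloudinary(name string) string {
	ext := strings.TrimPrefix(path.Ext(name), ".")
	if slices.Contains(mediaExts, strings.ToLower(ext)) {
		return strings.TrimSuffix(name, "."+ext)
	}
	return name
}

// fromCloudinary mimics ToStandardName: re-append the extension found on the asset URL.
func fromCloudinary(displayName, assetURL string) string {
	ext := strings.TrimPrefix(path.Ext(assetURL), ".")
	if slices.Contains(mediaExts, strings.ToLower(ext)) {
		return displayName + "." + ext
	}
	return displayName
}

// toCloudinary("holiday.jpg")                                  -> "holiday"
// fromCloudinary("holiday", "https://res.example/holiday.jpg") -> "holiday.jpg"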


@@ -20,6 +20,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"golang.org/x/sync/errgroup"
@@ -265,6 +266,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// Enable ListP always
features.ListP = f.ListP
// Enable Purge when any upstreams support it
if features.Purge == nil {
for _, u := range f.upstreams {
@@ -809,24 +813,52 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
entries := make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
entries = append(entries, d)
}
return entries, nil
return callback(entries)
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
return err
}
entries, err = u.f.List(ctx, uRemote)
if err != nil {
return nil, err
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := u.wrapEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return u.wrapEntries(ctx, entries)
listP := u.f.Features().ListP
if listP == nil {
entries, err := u.f.List(ctx, uRemote)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, uRemote, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting
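
The same List/ListP reshaping is applied to combine above and to compress and crypt below: List becomes a thin shim over list.WithListP, and ListP streams tranches from the wrapped backend's ListP when available, falling back to a single List call otherwise. A sketch of what list.WithListP is assumed to do (collect every tranche into one slice):

// Sketch only: assumed behaviour of fs/list.WithListP.
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
)

type listPer interface {
	ListP(ctx context.Context, dir string, callback fs.ListRCallback) error
}

func withListP(ctx context.Context, dir string, f listPer) (entries fs.DirEntries, err error) {
	err = f.ListP(ctx, dir, func(tranche fs.DirEntries) error {
		entries = append(entries, tranche...) // accumulate each tranche the callback receives
		return nil
	})
	return entries, err
}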


@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
@@ -208,6 +209,8 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
if !operations.CanServerSideMove(wrappedFs) {
f.features.Disable("PutStream")
}
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -352,11 +355,39 @@ func (f *Fs) processEntries(entries fs.DirEntries) (newEntries fs.DirEntries, er
// found.
// List entries and process them
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, dir)
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.processEntries(entries)
if err != nil {
return err
}
return callback(entries)
}
return f.processEntries(entries)
listP := f.Fs.Features().ListP
if listP == nil {
entries, err := f.Fs.List(ctx, dir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting


@@ -192,7 +192,7 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
dirNameEncrypt: dirNameEncrypt,
encryptedSuffix: ".bin",
}
c.buffers.New = func() interface{} {
c.buffers.New = func() any {
return new([blockSize]byte)
}
err := c.Key(password, salt)
@@ -336,7 +336,7 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
_, _ = result.WriteString(strconv.Itoa(dir) + ".")
// but we'll augment it with the nameKey for real calculation
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -418,7 +418,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
}
// add the nameKey to get the real rotate distance
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -664,7 +664,7 @@ func (n *nonce) increment() {
// add a uint64 to the nonce
func (n *nonce) add(x uint64) {
carry := uint16(0)
for i := 0; i < 8; i++ {
for i := range 8 {
digit := (*n)[i]
xDigit := byte(x)
x >>= 8
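
The nonce add above treats the nonce as a little-endian counter and propagates the carry byte by byte. A standalone sketch of the same technique on a small array (illustration only, not the crypt code):

// Little-endian add-with-carry, the technique used by nonce.add above.
package example

func addLE(n *[8]byte, x uint64) {
	carry := uint16(0)
	for i := range 8 {
		sum := uint16(n[i]) + uint16(byte(x)) + carry
		n[i] = byte(sum)
		carry = sum >> 8
		x >>= 8
	}
}

// addLE(&[8]byte{0xff, 0xff}, 1) yields {0x00, 0x00, 0x01, ...}:
// the carry ripples up through the low bytes exactly as in the crypt nonce.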


@@ -1307,10 +1307,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
open := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
end := len(ciphertext)
if underlyingLimit >= 0 {
end = int(underlyingOffset + underlyingLimit)
if end > len(ciphertext) {
end = len(ciphertext)
}
end = min(int(underlyingOffset+underlyingLimit), len(ciphertext))
}
reader = io.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end]))
return reader, nil
@@ -1490,7 +1487,7 @@ func TestDecrypterRead(t *testing.T) {
assert.NoError(t, err)
// Test truncating the file at each possible point
for i := 0; i < len(file16)-1; i++ {
for i := range len(file16) - 1 {
what := fmt.Sprintf("truncating to %d/%d", i, len(file16))
cd := newCloseDetector(bytes.NewBuffer(file16[:i]))
fh, err := c.newDecrypter(cd)


@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
)
// Globals
@@ -293,6 +294,9 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -416,11 +420,40 @@ func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntr
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, f.cipher.EncryptDirName(dir))
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.encryptEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return f.encryptEntries(ctx, entries)
listP := f.Fs.Features().ListP
encryptedDir := f.cipher.EncryptDirName(dir)
if listP == nil {
entries, err := f.Fs.List(ctx, encryptedDir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, encryptedDir, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting
@@ -924,7 +957,7 @@ Usage Example:
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "decode":
out := make([]string, 0, len(arg))


@@ -25,7 +25,7 @@ func Pad(n int, buf []byte) []byte {
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
for range padding {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
@@ -54,7 +54,7 @@ func Unpad(n int, buf []byte) ([]byte, error) {
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
for i := range padding {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}
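
A quick worked example of the PKCS#7 helpers above: Pad always appends between 1 and n bytes, each equal to the pad length, so Unpad can strip them unambiguously. Usage sketch, assuming the exported Pad/Unpad signatures shown in the hunks:

// Worked example for pkcs7.Pad / pkcs7.Unpad with a 16-byte block size.
package example

import (
	"fmt"

	"github.com/rclone/rclone/backend/crypt/pkcs7"
)

func examplePad() {
	padded := pkcs7.Pad(16, []byte("hello")) // 5 data bytes + 11 pad bytes of value 0x0b
	fmt.Printf("% x\n", padded)              // 68 65 6c 6c 6f 0b 0b ... 0b (16 bytes)

	original, err := pkcs7.Unpad(16, padded)
	fmt.Println(string(original), err) // hello <nil>
}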


@@ -0,0 +1,38 @@
// Type definitions specific to Dataverse
package api
// DataverseDatasetResponse is returned by the Dataverse dataset API
type DataverseDatasetResponse struct {
Status string `json:"status"`
Data DataverseDataset `json:"data"`
}
// DataverseDataset is the representation of a dataset
type DataverseDataset struct {
LatestVersion DataverseDatasetVersion `json:"latestVersion"`
}
// DataverseDatasetVersion is the representation of a dataset version
type DataverseDatasetVersion struct {
LastUpdateTime string `json:"lastUpdateTime"`
Files []DataverseFile `json:"files"`
}
// DataverseFile is the representation of a file found in a dataset
type DataverseFile struct {
DirectoryLabel string `json:"directoryLabel"`
DataFile DataverseDataFile `json:"dataFile"`
}
// DataverseDataFile represents file metadata details
type DataverseDataFile struct {
ID int64 `json:"id"`
Filename string `json:"filename"`
ContentType string `json:"contentType"`
FileSize int64 `json:"filesize"`
OriginalFileFormat string `json:"originalFileFormat"`
OriginalFileSize int64 `json:"originalFileSize"`
OriginalFileName string `json:"originalFileName"`
MD5 string `json:"md5"`
}


@@ -0,0 +1,33 @@
// Type definitions specific to InvenioRDM
package api
// InvenioRecordResponse is the representation of a record stored in InvenioRDM
type InvenioRecordResponse struct {
Links InvenioRecordResponseLinks `json:"links"`
}
// InvenioRecordResponseLinks represents a record's links
type InvenioRecordResponseLinks struct {
Self string `json:"self"`
}
// InvenioFilesResponse is the representation of a record's files
type InvenioFilesResponse struct {
Entries []InvenioFilesResponseEntry `json:"entries"`
}
// InvenioFilesResponseEntry is the representation of a file entry
type InvenioFilesResponseEntry struct {
Key string `json:"key"`
Checksum string `json:"checksum"`
Size int64 `json:"size"`
Updated string `json:"updated"`
MimeType string `json:"mimetype"`
Links InvenioFilesResponseEntryLinks `json:"links"`
}
// InvenioFilesResponseEntryLinks represents file links details
type InvenioFilesResponseEntryLinks struct {
Content string `json:"content"`
}

backend/doi/api/types.go

@@ -0,0 +1,26 @@
// Package api has general type definitions for doi
package api
// DoiResolverResponse is returned by the DOI resolver API
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
type DoiResolverResponse struct {
ResponseCode int `json:"responseCode"`
Handle string `json:"handle"`
Values []DoiResolverResponseValue `json:"values"`
}
// DoiResolverResponseValue is a single handle record value
type DoiResolverResponseValue struct {
Index int `json:"index"`
Type string `json:"type"`
Data DoiResolverResponseValueData `json:"data"`
TTL int `json:"ttl"`
Timestamp string `json:"timestamp"`
}
// DoiResolverResponseValueData is the data held in a handle value
type DoiResolverResponseValueData struct {
Format string `json:"format"`
Value any `json:"value"`
}
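
For orientation, a sketch of decoding a resolver payload into the types above; the handle and URL below are made-up example values:

// Decoding an abridged, made-up DOI resolver response into api.DoiResolverResponse.
package example

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/doi/api"
)

func exampleDecode() {
	payload := []byte(`{
	  "responseCode": 1,
	  "handle": "10.1000/182",
	  "values": [
	    {"index": 1, "type": "URL", "data": {"format": "string", "value": "https://example.org/dataset"}}
	  ]
	}`)
	var resp api.DoiResolverResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(resp.Values[0].Data.Value) // https://example.org/dataset
}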

backend/doi/dataverse.go

@@ -0,0 +1,112 @@
// Implementation for Dataverse
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// Returns true if resolvedURL is likely a DOI hosted on a Dataverse installation
func activateDataverse(resolvedURL *url.URL) (isActive bool) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
return persistentID != ""
}
// Resolve the main API endpoint for a DOI hosted on a Dataverse installation
func resolveDataverseEndpoint(resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
query := url.Values{}
query.Add("persistentId", persistentID)
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/datasets/:persistentId/", RawQuery: query.Encode()})
return Dataverse, endpointURL, nil
}
// dataverseProvider implements the doiProvider interface for Dataverse installations
type dataverseProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (dp *dataverseProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := dp.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := dp.f.endpoint
var res *http.Response
var result api.DataverseDatasetResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
Parameters: filesURL.Query(),
}
err = dp.f.pacer.Call(func() (bool, error) {
res, err = dp.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
modTime, modTimeErr := time.Parse(time.RFC3339, result.Data.LatestVersion.LastUpdateTime)
if modTimeErr != nil {
fs.Logf(dp.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
for _, file := range result.Data.LatestVersion.Files {
contentURLPath := fmt.Sprintf("/api/access/datafile/%d", file.DataFile.ID)
query := url.Values{}
query.Add("format", "original")
contentURL := dp.f.endpoint.ResolveReference(&url.URL{Path: contentURLPath, RawQuery: query.Encode()})
entry := &Object{
fs: dp.f,
remote: path.Join(file.DirectoryLabel, file.DataFile.Filename),
contentURL: contentURL.String(),
size: file.DataFile.FileSize,
modTime: modTime,
md5: file.DataFile.MD5,
contentType: file.DataFile.ContentType,
}
if file.DataFile.OriginalFileName != "" {
entry.remote = path.Join(file.DirectoryLabel, file.DataFile.OriginalFileName)
entry.size = file.DataFile.OriginalFileSize
entry.contentType = file.DataFile.OriginalFileFormat
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
dp.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newDataverseProvider(f *Fs) doiProvider {
return &dataverseProvider{
f: f,
}
}
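
resolveDataverseEndpoint above keeps only the persistentId query parameter and rebases the path onto /api/datasets/:persistentId/. An illustration of the URL it produces; the input dataset URL is a made-up example:

// Endpoint rewrite performed by resolveDataverseEndpoint, on example data.
package example

import (
	"fmt"
	"net/url"
)

func exampleEndpoint() {
	resolved, _ := url.Parse("https://dataverse.example.edu/dataset.xhtml?persistentId=doi:10.1000/182&version=1.0")
	q := url.Values{}
	q.Add("persistentId", resolved.Query().Get("persistentId"))
	endpoint := resolved.ResolveReference(&url.URL{Path: "/api/datasets/:persistentId/", RawQuery: q.Encode()})
	fmt.Println(endpoint) // https://dataverse.example.edu/api/datasets/:persistentId/?persistentId=doi%3A10.1000%2F182
}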

backend/doi/doi.go

@@ -0,0 +1,649 @@
// Package doi provides a filesystem interface for digital objects identified by DOIs.
//
// See: https://www.doi.org/the-identifier/what-is-a-doi/
package doi
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/cache"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
// the URL of the DOI resolver
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
doiResolverAPIURL = "https://doi.org/api"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
errorReadOnly = errors.New("doi remotes are read only")
timeUnset = time.Unix(0, 0)
)
func init() {
fsi := &fs.RegInfo{
Name: "doi",
Description: "DOI datasets",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "doi",
Help: "The DOI or the doi.org URL.",
Required: true,
}, {
Name: fs.ConfigProvider,
Help: `DOI provider.
The DOI provider can be set when rclone does not automatically recognize a supported DOI provider.`,
Examples: []fs.OptionExample{
{
Value: "auto",
Help: "Auto-detect provider",
},
{
Value: string(Zenodo),
Help: "Zenodo",
}, {
Value: string(Dataverse),
Help: "Dataverse",
}, {
Value: string(Invenio),
Help: "Invenio",
}},
Required: false,
Advanced: true,
}, {
Name: "doi_resolver_api_url",
Help: `The URL of the DOI resolver API to use.
The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".`,
Required: false,
Advanced: true,
}},
}
fs.Register(fsi)
}
// Provider defines the type of provider hosting the DOI
type Provider string
const (
// Zenodo provider, see https://zenodo.org
Zenodo Provider = "zenodo"
// Dataverse provider, see https://dataverse.harvard.edu
Dataverse Provider = "dataverse"
// Invenio provider, see https://inveniordm.docs.cern.ch
Invenio Provider = "invenio"
)
// Options defines the configuration for this backend
type Options struct {
Doi string `config:"doi"` // The DOI, a digital identifier of an object, usually a dataset
Provider string `config:"provider"` // The DOI provider
DoiResolverAPIURL string `config:"doi_resolver_api_url"` // The URL of the DOI resolver API to use.
}
// Fs stores the interface to the remote HTTP files
type Fs struct {
name string // name of this remote
root string // the path we are working on
provider Provider // the DOI provider
doiProvider doiProvider // the interface used to interact with the DOI provider
features *fs.Features // optional features
opt Options // options for this backend
ci *fs.ConfigInfo // global config
endpoint *url.URL // the main API endpoint for this remote
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the server
pacer *fs.Pacer // pacer for API calls
cache *cache.Cache // a cache for the remote metadata
}
// Object is a remote object that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs // what this object is part of
remote string // the remote path
contentURL string // the URL where the contents of the file can be downloaded
size int64 // size of the object
modTime time.Time // modification time of the object
contentType string // content type of the object
md5 string // MD5 hash of the object content
}
// doiProvider is the interface used to list objects in a DOI
type doiProvider interface {
// ListEntries returns the full list of entries found at the remote, regardless of root
ListEntries(ctx context.Context) (entries []*Object, err error)
}
// Parse the input string as a DOI
// Examples:
// 10.1000/182 -> 10.1000/182
// https://doi.org/10.1000/182 -> 10.1000/182
// doi:10.1000/182 -> 10.1000/182
func parseDoi(doi string) string {
doiURL, err := url.Parse(doi)
if err != nil {
return doi
}
if doiURL.Scheme == "doi" {
return strings.TrimLeft(strings.TrimPrefix(doi, "doi:"), "/")
}
if strings.HasSuffix(doiURL.Hostname(), "doi.org") {
return strings.TrimLeft(doiURL.Path, "/")
}
return doi
}
// Resolve a DOI to a URL
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
func resolveDoiURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (doiURL *url.URL, err error) {
resolverURL := opt.DoiResolverAPIURL
if resolverURL == "" {
resolverURL = doiResolverAPIURL
}
var result api.DoiResolverResponse
params := url.Values{}
params.Add("index", "1")
opts := rest.Opts{
Method: "GET",
RootURL: resolverURL,
Path: "/handles/" + opt.Doi,
Parameters: params,
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.ResponseCode != 1 {
return nil, fmt.Errorf("could not resolve DOI (error code %d)", result.ResponseCode)
}
resolvedURLStr := ""
for _, value := range result.Values {
if value.Type == "URL" && value.Data.Format == "string" {
valueStr, ok := value.Data.Value.(string)
if !ok {
return nil, fmt.Errorf("could not resolve DOI (incorrect response format)")
}
resolvedURLStr = valueStr
}
}
resolvedURL, err := url.Parse(resolvedURLStr)
if err != nil {
return nil, err
}
return resolvedURL, nil
}
// Resolve the passed configuration into a provider and endpoint
func resolveEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (provider Provider, endpoint *url.URL, err error) {
resolvedURL, err := resolveDoiURL(ctx, srv, pacer, opt)
if err != nil {
return "", nil, err
}
switch opt.Provider {
case string(Dataverse):
return resolveDataverseEndpoint(resolvedURL)
case string(Invenio):
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
case string(Zenodo):
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
hostname := strings.ToLower(resolvedURL.Hostname())
if hostname == "dataverse.harvard.edu" || activateDataverse(resolvedURL) {
return resolveDataverseEndpoint(resolvedURL)
}
if hostname == "zenodo.org" || strings.HasSuffix(hostname, ".zenodo.org") {
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
if activateInvenio(ctx, srv, pacer, resolvedURL) {
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
}
return "", nil, fmt.Errorf("provider '%s' is not supported", resolvedURL.Hostname())
}
// Make the http connection from the passed options
func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err error) {
provider, endpoint, err := resolveEndpoint(ctx, f.srv, f.pacer, opt)
if err != nil {
return false, err
}
// Update f with the new parameters
f.srv.SetRoot(endpoint.ResolveReference(&url.URL{Path: "/"}).String())
f.endpoint = endpoint
f.endpointURL = endpoint.String()
f.provider = provider
f.opt.Provider = string(provider)
switch f.provider {
case Dataverse:
f.doiProvider = newDataverseProvider(f)
case Invenio, Zenodo:
f.doiProvider = newInvenioProvider(f)
default:
return false, fmt.Errorf("provider type '%s' not supported", f.provider)
}
// Determine if the root is a file
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return false, err
}
for _, entry := range entries {
if entry.remote == f.root {
isFile = true
break
}
}
return isFile, nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this res and err
// deserve to be retried. It returns the err as a convenience.
func shouldRetry(ctx context.Context, res *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(res, retryErrorCodes), err
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
root = strings.Trim(root, "/")
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
opt.Doi = parseDoi(opt.Doi)
client := fshttp.NewClient(ctx)
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: ci,
srv: rest.NewClient(client),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
cache: cache.New(),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
isFile, err := f.httpConnection(ctx, opt)
if err != nil {
return nil, err
}
if isFile {
// return an error with an fs which points to the parent
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.root = newRoot
return f, fs.ErrorIsFile
}
return f, nil
}
// Name returns the configured name of the file system
func (f *Fs) Name() string {
return f.name
}
// Root returns the root for the filesystem
func (f *Fs) Root() string {
return f.root
}
// String returns the URL for the filesystem
func (f *Fs) String() string {
return fmt.Sprintf("DOI %s", f.opt.Doi)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision is the remote http file system's modtime precision, which we have no way of knowing. We estimate at 1s
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Hashes returns hash.MD5, the only hash type supported by this remote
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// Remove a remote http file object
func (o *Object) Remove(ctx context.Context) error {
return errorReadOnly
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// NewObject creates a new remote http file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, err
}
remoteFullPath := remote
if f.root != "" {
remoteFullPath = path.Join(f.root, remote)
}
for _, entry := range entries {
if entry.Remote() == remoteFullPath {
return entry, nil
}
}
return nil, fs.ErrorObjectNotFound
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
fileEntries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, fmt.Errorf("error listing %q: %w", dir, err)
}
fullDir := path.Join(f.root, dir)
if fullDir != "" {
fullDir += "/"
}
dirPaths := map[string]bool{}
for _, entry := range fileEntries {
// First, filter out files not in `fullDir`
if !strings.HasPrefix(entry.remote, fullDir) {
continue
}
// Then, find entries in subfolders
remotePath := entry.remote
if fullDir != "" {
remotePath = strings.TrimLeft(strings.TrimPrefix(remotePath, fullDir), "/")
}
parts := strings.SplitN(remotePath, "/", 2)
if len(parts) == 1 {
newEntry := *entry
newEntry.remote = path.Join(dir, remotePath)
entries = append(entries, &newEntry)
} else {
dirPaths[path.Join(dir, parts[0])] = true
}
}
for dirPath := range dirPaths {
entry := fs.NewDir(dirPath, time.Time{})
entries = append(entries, entry)
}
return entries, nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// Fs is the filesystem this remote http file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
}
// String returns the URL to the remote HTTP file
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote the name of the remote HTTP file, relative to the fs root
func (o *Object) Remote() string {
return o.remote
}
// Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size in bytes of the remote http file
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the remote http file
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification and access time to the specified time
//
// it also updates the info field
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return errorReadOnly
}
// Storable returns whether the remote http file is a regular file (not a directory, symbolic link, block device, character device, named pipe, etc.)
func (o *Object) Storable() bool {
return true
}
// Open a remote http file object for reading. Seek is supported
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
opts := rest.Opts{
Method: "GET",
RootURL: o.contentURL,
Options: options,
}
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
// Handle non-compliant redirects
if res.Header.Get("Location") != "" {
newURL, err := res.Location()
if err == nil {
opts.RootURL = newURL.String()
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
}
}
return res.Body, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return errorReadOnly
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}
var commandHelp = []fs.CommandHelp{{
Name: "metadata",
Short: "Show metadata about the DOI.",
Long: `This command returns a JSON object with some information about the DOI.
rclone backend metadata doi:
It returns a JSON object representing metadata about the DOI.
`,
}, {
Name: "set",
Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters
for a running doi backend.
Usage Examples:
rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
The option keys are named as they are in the config file.
This rebuilds the connection to the doi backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
`,
}}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
switch name {
case "metadata":
return f.ShowMetadata(ctx)
case "set":
newOpt := f.opt
err := configstruct.Set(configmap.Simple(opt), &newOpt)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
_, err = f.httpConnection(ctx, &newOpt)
if err != nil {
return nil, fmt.Errorf("updating session: %w", err)
}
f.opt = newOpt
keys := []string{}
for k := range opt {
keys = append(keys, k)
}
fs.Logf(f, "Updated config values: %s", strings.Join(keys, ", "))
return nil, nil
default:
return nil, fs.ErrorCommandNotFound
}
}
// ShowMetadata returns some metadata about the corresponding DOI
func (f *Fs) ShowMetadata(ctx context.Context) (metadata interface{}, err error) {
doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
if err != nil {
return nil, err
}
info := map[string]any{}
info["DOI"] = f.opt.Doi
info["URL"] = doiURL.String()
info["metadataURL"] = f.endpointURL
info["provider"] = f.provider
return info, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Commander = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)


@@ -0,0 +1,260 @@
package doi
import (
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"net/url"
"sort"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/hash"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var remoteName = "TestDoi"
func TestParseDoi(t *testing.T) {
// 10.1000/182 -> 10.1000/182
doi := "10.1000/182"
parsed := parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://doi.org/10.1000/182 -> 10.1000/182
doi = "https://doi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://dx.doi.org/10.1000/182 -> 10.1000/182
doi = "https://dxdoi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi:10.1000/182 -> 10.1000/182
doi = "doi:10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi://10.1000/182 -> 10.1000/182
doi = "doi://10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
}
// prepareMockDoiResolverServer prepares a test server to resolve DOIs
func prepareMockDoiResolverServer(t *testing.T, resolvedURL string) (doiResolverAPIURL string) {
mux := http.NewServeMux()
// Handle requests for resolving DOIs
mux.HandleFunc("GET /api/handles/{handle...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are resolving a DOI
handle := strings.TrimPrefix(r.URL.Path, "/api/handles/")
assert.NotEmpty(t, handle)
index := r.URL.Query().Get("index")
assert.Equal(t, "1", index)
// Return the most basic response
result := api.DoiResolverResponse{
ResponseCode: 1,
Handle: handle,
Values: []api.DoiResolverResponseValue{
{
Index: 1,
Type: "URL",
Data: api.DoiResolverResponseValueData{
Format: "string",
Value: resolvedURL,
},
},
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts.URL + "/api"
}
func md5Sum(text string) string {
hash := md5.Sum([]byte(text))
return hex.EncodeToString(hash[:])
}
// prepareMockZenodoServer prepares a test server that mocks Zenodo.org
func prepareMockZenodoServer(t *testing.T, files map[string]string) *httptest.Server {
mux := http.NewServeMux()
// Handle requests for a single record
mux.HandleFunc("GET /api/records/{recordID...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning data about a single record
recordID := strings.TrimPrefix(r.URL.Path, "/api/records/")
assert.NotEmpty(t, recordID)
// Return the most basic response
selfURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
selfURL = selfURL.JoinPath(r.URL.String())
result := api.InvenioRecordResponse{
Links: api.InvenioRecordResponseLinks{
Self: selfURL.String(),
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for listing files in a record
mux.HandleFunc("GET /api/records/{record}/files", func(w http.ResponseWriter, r *http.Request) {
// Return the most basic response
filesBaseURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
filesBaseURL = filesBaseURL.JoinPath("/api/files/")
entries := []api.InvenioFilesResponseEntry{}
for filename, contents := range files {
entries = append(entries,
api.InvenioFilesResponseEntry{
Key: filename,
Checksum: md5Sum(contents),
Size: int64(len(contents)),
Updated: time.Now().UTC().Format(time.RFC3339),
MimeType: "text/plain; charset=utf-8",
Links: api.InvenioFilesResponseEntryLinks{
Content: filesBaseURL.JoinPath(filename).String(),
},
},
)
}
result := api.InvenioFilesResponse{
Entries: entries,
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for file contents
mux.HandleFunc("/api/files/{file}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning the contents of a file
filename := strings.TrimPrefix(r.URL.Path, "/api/files/")
assert.NotEmpty(t, filename)
contents, found := files[filename]
if !found {
w.WriteHeader(404)
return
}
// Return the most basic response
_, err := w.Write([]byte(contents))
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts
}
func TestZenodoRemote(t *testing.T) {
recordID := "2600782"
doi := "10.5281/zenodo.2600782"
// The files in the dataset
files := map[string]string{
"README.md": "This is a dataset.",
"data.txt": "Some data",
}
ts := prepareMockZenodoServer(t, files)
resolvedURL := ts.URL + "/record/" + recordID
doiResolverAPIURL := prepareMockDoiResolverServer(t, resolvedURL)
testConfig := configmap.Simple{
"type": "doi",
"doi": doi,
"provider": "zenodo",
"doi_resolver_api_url": doiResolverAPIURL,
}
f, err := NewFs(context.Background(), remoteName, "", testConfig)
require.NoError(t, err)
// Test listing the DOI files
entries, err := f.List(context.Background(), "")
require.NoError(t, err)
sort.Sort(entries)
require.Equal(t, len(files), len(entries))
e := entries[0]
assert.Equal(t, "README.md", e.Remote())
assert.Equal(t, int64(18), e.Size())
_, ok := e.(*Object)
assert.True(t, ok)
e = entries[1]
assert.Equal(t, "data.txt", e.Remote())
assert.Equal(t, int64(9), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
// Test reading the DOI files
o, err := f.NewObject(context.Background(), "README.md")
require.NoError(t, err)
assert.Equal(t, int64(18), o.Size())
md5Hash, err := o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "464352b1cab5240e44528a56fda33d9d", md5Hash)
fd, err := o.Open(context.Background())
require.NoError(t, err)
data, err := io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["README.md"]), data)
do, ok := o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
o, err = f.NewObject(context.Background(), "data.txt")
require.NoError(t, err)
assert.Equal(t, int64(9), o.Size())
md5Hash, err = o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "5b82f8bf4df2bfb0e66ccaa7306fd024", md5Hash)
fd, err = o.Open(context.Background())
require.NoError(t, err)
data, err = io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["data.txt"]), data)
do, ok = o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
}

backend/doi/doi_test.go

@@ -0,0 +1,16 @@
// Test DOI filesystem interface
package doi
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDoi:",
NilObject: (*Object)(nil),
})
}

backend/doi/invenio.go

@@ -0,0 +1,164 @@
// Implementation for InvenioRDM
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"regexp"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var invenioRecordRegex = regexp.MustCompile(`\/records?\/(.+)`)
// Returns true if resolvedURL is likely a DOI hosted on an InvenioRDM installation
func activateInvenio(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (isActive bool) {
_, _, err := resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
return err == nil
}
// Resolve the main API endpoint for a DOI hosted on an InvenioRDM installation
func resolveInvenioEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
var res *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err = srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
// First, attempt to grab the API URL from the headers
var linksetURL *url.URL
links := parseLinkHeader(res.Header.Get("Link"))
for _, link := range links {
if link.Rel == "linkset" && link.Type == "application/linkset+json" {
parsed, err := url.Parse(link.Href)
if err == nil {
linksetURL = parsed
break
}
}
}
if linksetURL != nil {
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, linksetURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "using linkset URL failed: %s", err.Error())
}
// If there is no linkset header, try to grab the record ID from the URL
recordID := ""
resURL := res.Request.URL
match := invenioRecordRegex.FindStringSubmatch(resURL.EscapedPath())
if match != nil {
recordID = match[1]
guessedURL := res.Request.URL.ResolveReference(&url.URL{
Path: "/api/records/" + recordID,
})
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, guessedURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "guessing the URL failed: %s", err.Error())
}
return "", nil, fmt.Errorf("could not resolve the Invenio API endpoint for '%s'", resolvedURL.String())
}
func checkInvenioAPIURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (endpoint *url.URL, err error) {
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.Links.Self == "" {
return nil, fmt.Errorf("could not parse API response from '%s'", resolvedURL.String())
}
return url.Parse(result.Links.Self)
}
// invenioProvider implements the doiProvider interface for InvenioRDM installations
type invenioProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (ip *invenioProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := ip.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := ip.f.endpoint.JoinPath("files")
var result api.InvenioFilesResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
}
err = ip.f.pacer.Call(func() (bool, error) {
res, err := ip.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
for _, file := range result.Entries {
modTime, modTimeErr := time.Parse(time.RFC3339, file.Updated)
if modTimeErr != nil {
fs.Logf(ip.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
entry := &Object{
fs: ip.f,
remote: file.Key,
contentURL: file.Links.Content,
size: file.Size,
modTime: modTime,
contentType: file.MimeType,
md5: strings.TrimPrefix(file.Checksum, "md5:"),
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
ip.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newInvenioProvider(f *Fs) doiProvider {
return &invenioProvider{
f: f,
}
}
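As a rough, hypothetical sketch of the fallback path in resolveInvenioEndpoint above (not part of the patch): when no usable linkset header is returned, the record ID is captured from the resolved URL's path and /api/records/<id> is guessed on the same host. The host and record ID below are illustrative only.

package main

import (
	"fmt"
	"net/url"
	"regexp"
)

func main() {
	// Hypothetical landing page a DOI might resolve to on an InvenioRDM host.
	base, err := url.Parse("https://zenodo.org/records/15063252")
	if err != nil {
		panic(err)
	}
	// The same capture invenioRecordRegex performs on the escaped path.
	re := regexp.MustCompile(`\/records?\/(.+)`)
	recordID := re.FindStringSubmatch(base.EscapedPath())[1]
	// The same ResolveReference call used to guess the API endpoint,
	// which checkInvenioAPIURL then verifies by fetching the record.
	guessed := base.ResolveReference(&url.URL{Path: "/api/records/" + recordID})
	fmt.Println(guessed) // https://zenodo.org/api/records/15063252
}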


@@ -0,0 +1,75 @@
package doi
import (
"regexp"
"strings"
)
var linkRegex = regexp.MustCompile(`^<(.+)>$`)
var valueRegex = regexp.MustCompile(`^"(.+)"$`)
// headerLink represents a link as presented in HTTP headers
// MDN Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Link
type headerLink struct {
Href string
Rel string
Type string
Extras map[string]string
}
func parseLinkHeader(header string) (links []headerLink) {
for _, link := range strings.Split(header, ",") {
link = strings.TrimSpace(link)
parsed := parseLink(link)
if parsed != nil {
links = append(links, *parsed)
}
}
return links
}
func parseLink(link string) (parsedLink *headerLink) {
var parts []string
for _, part := range strings.Split(link, ";") {
parts = append(parts, strings.TrimSpace(part))
}
match := linkRegex.FindStringSubmatch(parts[0])
if match == nil {
return nil
}
result := &headerLink{
Href: match[1],
Extras: map[string]string{},
}
for _, keyValue := range parts[1:] {
parsed := parseKeyValue(keyValue)
if parsed != nil {
key, value := parsed[0], parsed[1]
switch strings.ToLower(key) {
case "rel":
result.Rel = value
case "type":
result.Type = value
default:
result.Extras[key] = value
}
}
}
return result
}
func parseKeyValue(keyValue string) []string {
parts := strings.SplitN(keyValue, "=", 2)
if parts[0] == "" || len(parts) < 2 {
return nil
}
match := valueRegex.FindStringSubmatch(parts[1])
if match != nil {
parts[1] = match[1]
return parts
}
return parts
}


@@ -0,0 +1,44 @@
package doi
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestParseLinkHeader(t *testing.T) {
header := "<https://zenodo.org/api/records/15063252> ; rel=\"linkset\" ; type=\"application/linkset+json\""
links := parseLinkHeader(header)
expected := headerLink{
Href: "https://zenodo.org/api/records/15063252",
Rel: "linkset",
Type: "application/linkset+json",
Extras: map[string]string{},
}
assert.Contains(t, links, expected)
header = "<https://api.example.com/issues?page=2>; rel=\"prev\", <https://api.example.com/issues?page=4>; rel=\"next\", <https://api.example.com/issues?page=10>; rel=\"last\", <https://api.example.com/issues?page=1>; rel=\"first\""
links = parseLinkHeader(header)
expectedList := []headerLink{{
Href: "https://api.example.com/issues?page=2",
Rel: "prev",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=4",
Rel: "next",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=10",
Rel: "last",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=1",
Rel: "first",
Type: "",
Extras: map[string]string{},
}}
assert.Equal(t, links, expectedList)
}

backend/doi/zenodo.go (new file, 47 lines)

@@ -0,0 +1,47 @@
// Implementation for Zenodo
package doi
import (
"context"
"fmt"
"net/url"
"regexp"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var zenodoRecordRegex = regexp.MustCompile(`zenodo[.](.+)`)
// Resolve the main API endpoint for a DOI hosted on Zenodo
func resolveZenodoEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL, doi string) (provider Provider, endpoint *url.URL, err error) {
match := zenodoRecordRegex.FindStringSubmatch(doi)
if match == nil {
return "", nil, fmt.Errorf("could not derive API endpoint URL from '%s'", resolvedURL.String())
}
recordID := match[1]
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/records/" + recordID})
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: endpointURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
endpointURL, err = url.Parse(result.Links.Self)
if err != nil {
return "", nil, err
}
return Zenodo, endpointURL, nil
}
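Worked example (DOI hypothetical): for 10.5281/zenodo.15063252, zenodoRecordRegex captures 15063252 directly from the DOI string, the endpoint is first guessed as /api/records/15063252 on the host the DOI resolved to, and the Links.Self value returned by that API call is then parsed and used as the canonical endpoint.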


@@ -18,6 +18,7 @@ import (
"net/http"
"os"
"path"
"slices"
"sort"
"strconv"
"strings"
@@ -37,8 +38,8 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -199,13 +200,7 @@ func driveScopes(scopesString string) (scopes []string) {
// Returns true if one of the scopes was "drive.appfolder"
func driveScopesContainsAppFolder(scopes []string) bool {
for _, scope := range scopes {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
return slices.Contains(scopes, scopePrefix+"drive.appfolder")
}
func driveOAuthOptions() []fs.Option {
@@ -959,12 +954,7 @@ func parseDrivePath(path string) (root string, err error) {
type listFn func(*drive.File) bool
func containsString(slice []string, s string) bool {
for _, e := range slice {
if e == s {
return true
}
}
return false
return slices.Contains(slice, s)
}
// getFile returns drive.File for the ID passed and fields passed in
@@ -1153,13 +1143,7 @@ OUTER:
// Check the case of items is correct since
// the `=` operator is case insensitive.
if title != "" && title != item.Name {
found := false
for _, stem := range stems {
if stem == item.Name {
found = true
break
}
}
found := slices.Contains(stems, item.Name)
if !found {
continue
}
@@ -1212,6 +1196,7 @@ func fixMimeType(mimeTypeIn string) string {
}
return mimeTypeOut
}
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
@@ -1222,9 +1207,11 @@ func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
}
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
}
func isLinkMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/x-link-")
}
@@ -1559,13 +1546,10 @@ func (f *Fs) getFileFields(ctx context.Context) (fields googleapi.Field) {
func (f *Fs) newRegularObject(ctx context.Context, remote string, info *drive.File) (obj fs.Object, err error) {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
info.Sha1Checksum = ""
info.Sha256Checksum = ""
break
}
if slices.Contains(info.Spaces, "photos") {
info.Md5Checksum = ""
info.Sha1Checksum = ""
info.Sha256Checksum = ""
}
}
o := &Object{
@@ -1657,7 +1641,8 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.F
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo(
ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
extension, exportName, exportMimeType string, isDocument bool,
) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing.
@@ -1760,7 +1745,7 @@ func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Met
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true)
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true, true)
if err != nil {
return nil, fmt.Errorf("create dir: failed to update metadata: %w", err)
}
@@ -1791,7 +1776,7 @@ func (f *Fs) updateDir(ctx context.Context, dirID string, metadata fs.Metadata)
}
dirID = actualID(dirID)
updateInfo := &drive.File{}
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true)
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true, true)
if err != nil {
return nil, fmt.Errorf("update dir: failed to update metadata from source object: %w", err)
}
@@ -1848,6 +1833,7 @@ func linkTemplate(mt string) *template.Template {
})
return _linkTemplates[mt]
}
func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() {
var about *drive.About
@@ -1893,7 +1879,8 @@ func (f *Fs) importFormats(ctx context.Context) map[string][]string {
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool) {
extension, mimeType string, isDocument bool,
) {
exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument {
for _, _extension := range f.exportExtensions {
@@ -2202,7 +2189,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
wg := sync.WaitGroup{}
in := make(chan listREntry, listRInputBuffer)
out := make(chan error, f.ci.Checkers)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
overflow := []listREntry{}
listed := 0
@@ -2240,7 +2227,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
wg.Add(1)
in <- listREntry{directoryID, dir}
for i := 0; i < f.ci.Checkers; i++ {
for range f.ci.Checkers {
go f.listRRunner(ctx, &wg, in, out, cb, sendJob)
}
go func() {
@@ -2249,11 +2236,8 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// if the input channel overflowed add the collected entries to the channel now
for len(overflow) > 0 {
mu.Lock()
l := len(overflow)
// only fill half of the channel to prevent entries being put into overflow again
if l > listRInputBuffer/2 {
l = listRInputBuffer / 2
}
l := min(len(overflow), listRInputBuffer/2)
wg.Add(l)
for _, d := range overflow[:l] {
in <- d
@@ -2273,7 +2257,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
mu.Unlock()
}()
// wait until the all workers to finish
for i := 0; i < f.ci.Checkers; i++ {
for range f.ci.Checkers {
e := <-out
mu.Lock()
// if one worker returns an error early, close the input so all other workers exit
@@ -2689,7 +2673,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if shortcutID != "" {
return f.delete(ctx, shortcutID, f.opt.UseTrash)
}
var trashedFiles = false
trashedFiles := false
if check {
found, err := f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, true, func(item *drive.File) bool {
if !item.Trashed {
@@ -2926,7 +2910,6 @@ func (f *Fs) CleanUp(ctx context.Context) error {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
}
@@ -3187,6 +3170,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}()
}
func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
@@ -3909,7 +3893,7 @@ Third delete all orphaned files to the trash
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "get":
out := make(map[string]string)
@@ -4018,14 +4002,13 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
results, err := f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
return nil, errors.New("need a query argument")
case "rescue":
dirID := ""
_, delete := opt["delete"]
@@ -4085,6 +4068,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
}
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
@@ -4099,7 +4083,8 @@ func (o *baseObject) Size() int64 {
// getRemoteInfoWithExport returns a drive.File and the export settings for the remote
func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error) {
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error,
) {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -4312,12 +4297,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return o.baseObject.open(ctx, o.url, options...)
}
func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
var newOptions = options[:0]
newOptions := options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
@@ -4344,9 +4330,10 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
}
return
}
func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
var data = o.content
data := o.content
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -4371,7 +4358,8 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
}
func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
src fs.ObjectInfo) (info *drive.File, err error) {
src fs.ObjectInfo,
) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
if size >= 0 && size < int64(o.fs.opt.UploadCutoff) {
@@ -4449,6 +4437,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return nil
}
func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
srcMimeType := fs.MimeType(ctx, src)
importMimeType := ""
@@ -4544,6 +4533,7 @@ func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err er
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}


@@ -4,6 +4,7 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"strconv"
"strings"
"sync"
@@ -324,9 +325,7 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
metadata := make(fs.Metadata, 16)
// Dump user metadata first as it overrides system metadata
for k, v := range info.Properties {
metadata[k] = v
}
maps.Copy(metadata, info.Properties)
// System metadata
metadata["copy-requires-writer-permission"] = fmt.Sprint(info.CopyRequiresWriterPermission)
@@ -508,7 +507,7 @@ type updateMetadataFn func(context.Context, *drive.File) error
//
// It returns a callback which should be called to finish the updates
// after the data is uploaded.
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update bool) (callback updateMetadataFn, err error) {
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update, isFolder bool) (callback updateMetadataFn, err error) {
callbackFns := []updateMetadataFn{}
callback = func(ctx context.Context, info *drive.File) error {
for _, fn := range callbackFns {
@@ -533,7 +532,9 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
switch k {
case "copy-requires-writer-permission":
if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
if isFolder {
fs.Debugf(f, "Ignoring %s=%s as can't set on folders", k, v)
} else if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
return nil, err
}
case "writers-can-share":
@@ -630,7 +631,7 @@ func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, opti
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
callback, err = f.updateMetadata(ctx, updateInfo, meta, update)
callback, err = f.updateMetadata(ctx, updateInfo, meta, update, false)
if err != nil {
return nil, fmt.Errorf("failed to update metadata from source object: %w", err)
}


@@ -177,10 +177,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
if start >= rx.ContentLength {
break
}
reqSize = rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
reqSize = min(rx.ContentLength-start, int64(rx.f.opt.ChunkSize))
chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
} else {
// If size unknown read into buffer


@@ -55,10 +55,7 @@ func (d *digest) Write(p []byte) (n int, err error) {
n = len(p)
for len(p) > 0 {
d.writtenMore = true
toWrite := bytesPerBlock - d.n
if toWrite > len(p) {
toWrite = len(p)
}
toWrite := min(bytesPerBlock-d.n, len(p))
_, err = d.blockHash.Write(p[:toWrite])
if err != nil {
panic(hashReturnedError)


@@ -11,7 +11,7 @@ import (
func testChunk(t *testing.T, chunk int) {
data := make([]byte, chunk)
for i := 0; i < chunk; i++ {
for i := range chunk {
data[i] = 'A'
}
for _, test := range []struct {


@@ -92,6 +92,9 @@ const (
maxFileNameLength = 255
)
type exportAPIFormat string
type exportExtension string // dotless
var (
// Description of how to auth for this app
dropboxConfig = &oauthutil.Config{
@@ -132,6 +135,16 @@ var (
DefaultTimeoutAsync: 10 * time.Second,
DefaultBatchSizeAsync: 100,
}
exportKnownAPIFormats = map[exportAPIFormat]exportExtension{
"markdown": "md",
"html": "html",
}
// Populated based on exportKnownAPIFormats
exportKnownExtensions = map[exportExtension]exportAPIFormat{}
paperExtension = ".paper"
paperTemplateExtension = ".papert"
)
// Gets an oauth config with the right scopes
@@ -247,23 +260,61 @@ folders.`,
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
}, {
Name: "export_formats",
Help: `Comma separated list of preferred formats for exporting files
Certain Dropbox files can only be accessed by exporting them to another format.
These include Dropbox Paper documents.
For each such file, rclone will choose the first format on this list that Dropbox
considers valid. If none is valid, it will choose Dropbox's default format.
Known formats include: "html", "md" (markdown)`,
Default: fs.CommaSepList{"html", "md"},
Advanced: true,
}, {
Name: "skip_exports",
Help: "Skip exportable files in all listings.\n\nIf given, exportable files practically become invisible to rclone.",
Default: false,
Advanced: true,
}, {
Name: "show_all_exports",
Default: false,
Help: `Show all exportable files in listings.
Adding this flag will allow all exportable files to be server side copied.
Note that rclone doesn't add extensions to the exportable file names in this mode.
Do **not** use this flag when trying to download exportable files - rclone
will fail to download them.
`,
Advanced: true,
},
}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
for apiFormat, ext := range exportKnownAPIFormats {
exportKnownExtensions[ext] = apiFormat
}
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
ExportFormats fs.CommaSepList `config:"export_formats"`
SkipExports bool `config:"skip_exports"`
ShowAllExports bool `config:"show_all_exports"`
}
// Fs represents a remote dropbox server
@@ -283,8 +334,18 @@ type Fs struct {
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
batcher *batcher.Batcher[*files.UploadSessionFinishArg, *files.FileMetadata]
exportExts []exportExtension
}
type exportType int
const (
notExport exportType = iota // a regular file
exportHide // should be hidden
exportListOnly // listable, but can't export
exportExportable // can export
)
// Object describes a dropbox object
//
// Dropbox Objects always have full metadata
@@ -296,6 +357,9 @@ type Object struct {
bytes int64 // size of the object
modTime time.Time // time it was last modified
hash string // content_hash of the object
exportType exportType
exportAPIFormat exportAPIFormat
}
// Name of the remote (as passed into NewFs)
@@ -436,6 +500,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
HeaderGenerator: f.headerGenerator,
}
for _, e := range opt.ExportFormats {
ext := exportExtension(e)
if exportKnownExtensions[ext] == "" {
return nil, fmt.Errorf("dropbox: unknown export format '%s'", e)
}
f.exportExts = append(f.exportExts, ext)
}
// unauthorized config for endpoints that fail with auth
ucfg := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
@@ -588,38 +660,126 @@ func (f *Fs) setRoot(root string) {
}
}
type getMetadataResult struct {
entry files.IsMetadata
notFound bool
err error
}
// getMetadata gets the metadata for a file or directory
func (f *Fs) getMetadata(ctx context.Context, objPath string) (entry files.IsMetadata, notFound bool, err error) {
err = f.pacer.Call(func() (bool, error) {
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{
func (f *Fs) getMetadata(ctx context.Context, objPath string) (res getMetadataResult) {
res.err = f.pacer.Call(func() (bool, error) {
res.entry, res.err = f.srv.GetMetadata(&files.GetMetadataArg{
Path: f.opt.Enc.FromStandardPath(objPath),
})
return shouldRetry(ctx, err)
return shouldRetry(ctx, res.err)
})
if err != nil {
switch e := err.(type) {
if res.err != nil {
switch e := res.err.(type) {
case files.GetMetadataAPIError:
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound {
notFound = true
err = nil
res.notFound = true
res.err = nil
}
}
}
return
}
// getFileMetadata gets the metadata for a file
func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (fileInfo *files.FileMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, filePath)
if err != nil {
return nil, err
// Get metadata such that the result would be exported with the given extension
// Return a channel that will eventually receive the metadata
func (f *Fs) getMetadataForExt(ctx context.Context, filePath string, wantExportExtension exportExtension) chan getMetadataResult {
ch := make(chan getMetadataResult, 1)
wantDownloadable := (wantExportExtension == "")
go func() {
defer close(ch)
res := f.getMetadata(ctx, filePath)
info, ok := res.entry.(*files.FileMetadata)
if !ok { // Can't check anything about file, just return what we have
ch <- res
return
}
// Return notFound if downloadability or extension doesn't match
if wantDownloadable != info.IsDownloadable {
ch <- getMetadataResult{notFound: true}
return
}
if !info.IsDownloadable {
_, ext := f.chooseExportFormat(info)
if ext != wantExportExtension {
ch <- getMetadataResult{notFound: true}
return
}
}
// Return our real result or error
ch <- res
}()
return ch
}
// For a given rclone-path, figure out what the Dropbox-path may be, in order of preference.
// Multiple paths might be plausible, due to export path munging.
func (f *Fs) possibleMetadatas(ctx context.Context, filePath string) (ret []<-chan getMetadataResult) {
ret = []<-chan getMetadataResult{}
// Prefer an exact match
ret = append(ret, f.getMetadataForExt(ctx, filePath, ""))
// Check if we're plausibly an export path, otherwise we're done
if f.opt.SkipExports || f.opt.ShowAllExports {
return
}
if notFound {
dotted := path.Ext(filePath)
if dotted == "" {
return
}
ext := exportExtension(dotted[1:])
if exportKnownExtensions[ext] == "" {
return
}
// We might be an export path! Try all possibilities
base := strings.TrimSuffix(filePath, dotted)
// `foo.papert.md` will only come from `foo.papert`. Never check something like `foo.papert.paper`
if strings.HasSuffix(base, paperTemplateExtension) {
ret = append(ret, f.getMetadataForExt(ctx, base, ext))
return
}
// Otherwise, try both `foo.md` coming from `foo`, or from `foo.paper`
ret = append(ret, f.getMetadataForExt(ctx, base, ext))
ret = append(ret, f.getMetadataForExt(ctx, base+paperExtension, ext))
return
}
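To make the probing order concrete (file names hypothetical): a lookup of report.md first checks report.md itself for an exact, downloadable match, then report and report.paper as possible export sources for the md extension, while template.papert.md only falls back to template.papert. Each candidate is fetched in its own goroutine via getMetadataForExt, and getFileMetadata below consumes the results in this preference order, so the extra lookups cost little when the exact match wins.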
// getFileMetadata gets the metadata for a file
func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (*files.FileMetadata, error) {
var res getMetadataResult
// Try all possible metadatas
possibleMetadatas := f.possibleMetadatas(ctx, filePath)
for _, ch := range possibleMetadatas {
res = <-ch
if res.err != nil {
return nil, res.err
}
if !res.notFound {
break
}
}
if res.notFound {
return nil, fs.ErrorObjectNotFound
}
fileInfo, ok := entry.(*files.FileMetadata)
fileInfo, ok := res.entry.(*files.FileMetadata)
if !ok {
if _, ok = entry.(*files.FolderMetadata); ok {
if _, ok = res.entry.(*files.FolderMetadata); ok {
return nil, fs.ErrorIsDir
}
return nil, fs.ErrorNotAFile
@@ -628,15 +788,15 @@ func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (fileInfo *fi
}
// getDirMetadata gets the metadata for a directory
func (f *Fs) getDirMetadata(ctx context.Context, dirPath string) (dirInfo *files.FolderMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, dirPath)
if err != nil {
return nil, err
func (f *Fs) getDirMetadata(ctx context.Context, dirPath string) (*files.FolderMetadata, error) {
res := f.getMetadata(ctx, dirPath)
if res.err != nil {
return nil, res.err
}
if notFound {
if res.notFound {
return nil, fs.ErrorDirNotFound
}
dirInfo, ok := entry.(*files.FolderMetadata)
dirInfo, ok := res.entry.(*files.FolderMetadata)
if !ok {
return nil, fs.ErrorIsFile
}
@@ -836,16 +996,15 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var res *files.ListFolderResult
for {
if !started {
arg := files.ListFolderArg{
Path: f.opt.Enc.FromStandardPath(root),
Recursive: false,
Limit: 1000,
}
arg := files.NewListFolderArg(f.opt.Enc.FromStandardPath(root))
arg.Recursive = false
arg.Limit = 1000
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
res, err = f.srv.ListFolder(arg)
return shouldRetry(ctx, err)
})
if err != nil {
@@ -898,7 +1057,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if err != nil {
return nil, err
}
entries = append(entries, o)
if o.(*Object).exportType.listable() {
entries = append(entries, o)
}
}
}
if !res.HasMore {
@@ -984,16 +1145,14 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
}
// check directory empty
arg := files.ListFolderArg{
Path: encRoot,
Recursive: false,
}
arg := files.NewListFolderArg(encRoot)
arg.Recursive = false
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
var res *files.ListFolderResult
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
res, err = f.srv.ListFolder(arg)
return shouldRetry(ctx, err)
})
if err != nil {
@@ -1174,6 +1333,16 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return shouldRetry(ctx, err)
})
if err != nil && createArg.Settings.Expires != nil && strings.Contains(err.Error(), sharing.SharedLinkSettingsErrorNotAuthorized) {
// Some plans can't create links with expiry
fs.Debugf(absPath, "can't create link with expiry, trying without")
createArg.Settings.Expires = nil
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
return shouldRetry(ctx, err)
})
}
if err != nil && strings.Contains(err.Error(),
sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
fs.Debugf(absPath, "has a public link already, attempting to retrieve it")
@@ -1277,9 +1446,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
}
usage = &fs.Usage{
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(used)), // bytes in use
Free: fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(total), // quota of bytes that can be used
Used: fs.NewUsageValue(used), // bytes in use
Free: fs.NewUsageValue(total - used), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
@@ -1338,16 +1507,14 @@ func (f *Fs) changeNotifyCursor(ctx context.Context) (cursor string, err error)
var startCursor *files.ListFolderGetLatestCursorResult
err = f.pacer.Call(func() (bool, error) {
arg := files.ListFolderArg{
Path: f.opt.Enc.FromStandardPath(f.slashRoot),
Recursive: true,
}
arg := files.NewListFolderArg(f.opt.Enc.FromStandardPath(f.slashRoot))
arg.Recursive = true
if arg.Path == "/" {
arg.Path = ""
}
startCursor, err = f.srv.ListFolderGetLatestCursor(&arg)
startCursor, err = f.srv.ListFolderGetLatestCursor(arg)
return shouldRetry(ctx, err)
})
@@ -1451,8 +1618,50 @@ func (f *Fs) Shutdown(ctx context.Context) error {
return nil
}
func (f *Fs) chooseExportFormat(info *files.FileMetadata) (exportAPIFormat, exportExtension) {
// Find API export formats Dropbox supports for this file
// Sometimes Dropbox lists a format in ExportAs but not ExportOptions, so check both
ei := info.ExportInfo
dropboxFormatStrings := append([]string{ei.ExportAs}, ei.ExportOptions...)
// Find which extensions these correspond to
exportExtensions := map[exportExtension]exportAPIFormat{}
var dropboxPreferredAPIFormat exportAPIFormat
var dropboxPreferredExtension exportExtension
for _, format := range dropboxFormatStrings {
apiFormat := exportAPIFormat(format)
// Only consider formats we know about
if ext, ok := exportKnownAPIFormats[apiFormat]; ok {
if dropboxPreferredAPIFormat == "" {
dropboxPreferredAPIFormat = apiFormat
dropboxPreferredExtension = ext
}
exportExtensions[ext] = apiFormat
}
}
// See if the user picked a valid extension
for _, ext := range f.exportExts {
if apiFormat, ok := exportExtensions[ext]; ok {
return apiFormat, ext
}
}
// If no matches, prefer the first valid format Dropbox lists
return dropboxPreferredAPIFormat, dropboxPreferredExtension
}
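For example, if Dropbox reports ExportAs "markdown" with ExportOptions ["markdown", "html"] and the user configured export_formats = "html,md", chooseExportFormat returns the html format: user preference order decides among the formats Dropbox actually offers. If none of the user's choices are offered, Dropbox's first known format wins (here markdown, giving a .md extension). The values are illustrative; only "markdown" and "html" are currently mapped in exportKnownAPIFormats.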
// ------------------------------------------------------------
func (et exportType) listable() bool {
return et != exportHide
}
// something we should _try_ to export
func (et exportType) exportable() bool {
return et == exportExportable || et == exportListOnly
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
@@ -1496,6 +1705,32 @@ func (o *Object) Size() int64 {
return o.bytes
}
func (o *Object) setMetadataForExport(info *files.FileMetadata) {
o.bytes = -1
o.hash = ""
if o.fs.opt.SkipExports {
o.exportType = exportHide
return
}
if o.fs.opt.ShowAllExports {
o.exportType = exportListOnly
return
}
var exportExt exportExtension
o.exportAPIFormat, exportExt = o.fs.chooseExportFormat(info)
if o.exportAPIFormat == "" {
o.exportType = exportHide
} else {
o.exportType = exportExportable
// get rid of any paper extension, if present
o.remote = strings.TrimSuffix(o.remote, paperExtension)
// add the export extension
o.remote += "." + string(exportExt)
}
}
// setMetadataFromEntry sets the fs data from a files.FileMetadata
//
// This isn't a complete set of metadata and has an inaccurate date
@@ -1504,6 +1739,10 @@ func (o *Object) setMetadataFromEntry(info *files.FileMetadata) error {
o.bytes = int64(info.Size)
o.modTime = info.ClientModified
o.hash = info.ContentHash
if !info.IsDownloadable {
o.setMetadataForExport(info)
}
return nil
}
@@ -1567,6 +1806,27 @@ func (o *Object) Storable() bool {
return true
}
func (o *Object) export(ctx context.Context) (in io.ReadCloser, err error) {
if o.exportType == exportListOnly || o.exportAPIFormat == "" {
fs.Debugf(o.remote, "No export format found")
return nil, fs.ErrorObjectNotFound
}
arg := files.ExportArg{Path: o.id, ExportFormat: string(o.exportAPIFormat)}
var exportResult *files.ExportResult
err = o.fs.pacer.Call(func() (bool, error) {
exportResult, in, err = o.fs.srv.Export(&arg)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
o.bytes = int64(exportResult.ExportMetadata.Size)
o.hash = exportResult.ExportMetadata.ExportHash
return
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
if o.fs.opt.SharedFiles {
@@ -1586,6 +1846,10 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return
}
if o.exportType.exportable() {
return o.export(ctx)
}
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{

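Putting the listing behaviour of this patch together (file name hypothetical): a Paper document stored as notes.paper is omitted from listings entirely when skip_exports is set, listed under its original name with size -1 and no hash when show_all_exports is set, and otherwise renamed in listings to notes.md or notes.html according to the first matching entry in export_formats, with Open on that renamed object exporting the content on the fly via the Export call.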

@@ -1,9 +1,16 @@
package dropbox
import (
"context"
"io"
"strings"
"testing"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestInternalCheckPathLength(t *testing.T) {
@@ -42,3 +49,54 @@ func TestInternalCheckPathLength(t *testing.T) {
assert.Equal(t, test.ok, err == nil, test.in)
}
}
func (f *Fs) importPaperForTest(t *testing.T) {
content := `# test doc
Lorem ipsum __dolor__ sit amet
[link](http://google.com)
`
arg := files.PaperCreateArg{
Path: f.slashRootSlash + "export.paper",
ImportFormat: &files.ImportFormat{Tagged: dropbox.Tagged{Tag: files.ImportFormatMarkdown}},
}
var err error
err = f.pacer.Call(func() (bool, error) {
reader := strings.NewReader(content)
_, err = f.srv.PaperCreate(&arg, reader)
return shouldRetry(context.Background(), err)
})
require.NoError(t, err)
}
func (f *Fs) InternalTestPaperExport(t *testing.T) {
ctx := context.Background()
f.importPaperForTest(t)
f.exportExts = []exportExtension{"html"}
obj, err := f.NewObject(ctx, "export.html")
require.NoError(t, err)
rc, err := obj.Open(ctx)
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
buf, err := io.ReadAll(rc)
require.NoError(t, err)
text := string(buf)
for _, excerpt := range []string{
"Lorem ipsum",
"<b>dolor</b>",
`href="http://google.com"`,
} {
require.Contains(t, text, excerpt)
}
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PaperExport", f.InternalTestPaperExport)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -216,11 +216,11 @@ var ItemFields = mustFields(Item{})
// fields returns the JSON fields in use by opt as a | separated
// string.
func fields(opt interface{}) (pipeTags string, err error) {
func fields(opt any) (pipeTags string, err error) {
var tags []string
def := reflect.ValueOf(opt)
defType := def.Type()
for i := 0; i < def.NumField(); i++ {
for i := range def.NumField() {
field := defType.Field(i)
tag, ok := field.Tag.Lookup("json")
if !ok {
@@ -239,7 +239,7 @@ func fields(opt interface{}) (pipeTags string, err error) {
// mustFields returns the JSON fields in use by opt as a | separated
// string. It panics on failure.
func mustFields(opt interface{}) string {
func mustFields(opt any) string {
tags, err := fields(opt)
if err != nil {
panic(err)
@@ -351,12 +351,12 @@ type SpaceInfo struct {
// DeleteResponse is returned from doDeleteFile
type DeleteResponse struct {
Status
Deleted []string `json:"deleted"`
Errors []interface{} `json:"errors"`
ID string `json:"fi_id"`
BackgroundTask int `json:"backgroundtask"`
UsSize string `json:"us_size"`
PaSize string `json:"pa_size"`
Deleted []string `json:"deleted"`
Errors []any `json:"errors"`
ID string `json:"fi_id"`
BackgroundTask int `json:"backgroundtask"`
UsSize string `json:"us_size"`
PaSize string `json:"pa_size"`
//SpaceInfo SpaceInfo `json:"spaceinfo"`
}


@@ -371,7 +371,7 @@ func (f *Fs) getToken(ctx context.Context) (token string, err error) {
}
// params for rpc
type params map[string]interface{}
type params map[string]any
// rpc calls the rpc.php method of the SME file fabric
//


@@ -0,0 +1,81 @@
// Package api defines types for interacting with the FileLu API.
package api
import "encoding/json"
// CreateFolderResponse represents the response for creating a folder.
type CreateFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
FldID interface{} `json:"fld_id"`
} `json:"result"`
}
// DeleteFolderResponse represents the response for deleting a folder.
type DeleteFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
}
// FolderListResponse represents the response for listing folders.
type FolderListResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
Files []struct {
Name string `json:"name"`
FldID json.Number `json:"fld_id"`
Path string `json:"path"`
FileCode string `json:"file_code"`
Size int64 `json:"size"`
} `json:"files"`
Folders []struct {
Name string `json:"name"`
FldID json.Number `json:"fld_id"`
Path string `json:"path"`
} `json:"folders"`
} `json:"result"`
}
// FileDirectLinkResponse represents the response for a direct link to a file.
type FileDirectLinkResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
URL string `json:"url"`
Size int64 `json:"size"`
} `json:"result"`
}
// FileInfoResponse represents the response for file information.
type FileInfoResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result []struct {
Size string `json:"size"`
Name string `json:"name"`
FileCode string `json:"filecode"`
Hash string `json:"hash"`
Status int `json:"status"`
} `json:"result"`
}
// DeleteFileResponse represents the response for deleting a file.
type DeleteFileResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
}
// AccountInfoResponse represents the response for account information.
type AccountInfoResponse struct {
Status int `json:"status"` // HTTP status code of the response.
Msg string `json:"msg"` // Message describing the response.
Result struct {
PremiumExpire string `json:"premium_expire"` // Expiration date of premium access.
Email string `json:"email"` // User's email address.
UType string `json:"utype"` // User type (e.g., premium or free).
Storage string `json:"storage"` // Total storage available to the user.
StorageUsed string `json:"storage_used"` // Amount of storage used.
} `json:"result"` // Nested result structure containing account details.
}

backend/filelu/filelu.go (new file, 366 lines)

@@ -0,0 +1,366 @@
// Package filelu provides an interface to the FileLu storage system.
package filelu
import (
"context"
"fmt"
"io"
"net/http"
"os"
"path"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
// Register the backend with Rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "filelu",
Description: "FileLu Cloud Storage",
NewFs: NewFs,
Options: []fs.Option{{
Name: "key",
Help: "Your FileLu Rclone key from My Account",
Required: true,
Sensitive: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base | // Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
encoder.EncodeSlash |
encoder.EncodeLtGt |
encoder.EncodeExclamation |
encoder.EncodeDoubleQuote |
encoder.EncodeSingleQuote |
encoder.EncodeBackQuote |
encoder.EncodeQuestion |
encoder.EncodeDollar |
encoder.EncodeColon |
encoder.EncodeAsterisk |
encoder.EncodePipe |
encoder.EncodeHash |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeCrLf |
encoder.EncodeDel |
encoder.EncodeCtl |
encoder.EncodeLeftSpace |
encoder.EncodeLeftPeriod |
encoder.EncodeLeftTilde |
encoder.EncodeLeftCrLfHtVt |
encoder.EncodeRightPeriod |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeSquareBracket |
encoder.EncodeSemicolon |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
},
}})
}
// Options defines the configuration for the FileLu backend
type Options struct {
Key string `config:"key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents the FileLu file system
type Fs struct {
name string
root string
opt Options
features *fs.Features
endpoint string
pacer *pacer.Pacer
srv *rest.Client
client *http.Client
targetFile string
}
// NewFs creates a new Fs object for FileLu
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, fmt.Errorf("failed to parse config: %w", err)
}
if opt.Key == "" {
return nil, fmt.Errorf("FileLu Rclone Key is required")
}
client := fshttp.NewClient(ctx)
if strings.TrimSpace(root) == "" {
root = ""
}
root = strings.Trim(root, "/")
filename := ""
f := &Fs{
name: name,
opt: *opt,
endpoint: "https://filelu.com/rclone",
client: client,
srv: rest.NewClient(client).SetRoot("https://filelu.com/rclone"),
pacer: pacer.New(),
targetFile: filename,
root: root,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
WriteMetadata: false,
SlowHash: true,
}).Fill(ctx, f)
rootContainer, rootDirectory := rootSplit(f.root)
if rootContainer != "" && rootDirectory != "" {
// Check to see if the (container,directory) is actually an existing file
oldRoot := f.root
newRoot, leaf := path.Split(oldRoot)
f.root = strings.Trim(newRoot, "/")
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.root = strings.Trim(oldRoot, "/")
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// Mkdir to create directory on remote server.
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
fullPath := path.Clean(f.root + "/" + dir)
_, err := f.createFolder(ctx, fullPath)
return err
}
// About provides usage statistics for the remote
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
accountInfo, err := f.getAccountInfo(ctx)
if err != nil {
return nil, err
}
totalStorage, err := parseStorageToBytes(accountInfo.Result.Storage)
if err != nil {
return nil, fmt.Errorf("failed to parse total storage: %w", err)
}
usedStorage, err := parseStorageToBytes(accountInfo.Result.StorageUsed)
if err != nil {
return nil, fmt.Errorf("failed to parse used storage: %w", err)
}
return &fs.Usage{
Total: fs.NewUsageValue(totalStorage), // Total bytes available
Used: fs.NewUsageValue(usedStorage), // Total bytes used
Free: fs.NewUsageValue(totalStorage - usedStorage),
}, nil
}
// Purge deletes the directory and all its contents
func (f *Fs) Purge(ctx context.Context, dir string) error {
fullPath := path.Join(f.root, dir)
if fullPath != "" {
fullPath = "/" + strings.Trim(fullPath, "/")
}
return f.deleteFolder(ctx, fullPath)
}
// List returns a list of files and folders for the given directory
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
// Compose full path for API call
fullPath := path.Join(f.root, dir)
fullPath = "/" + strings.Trim(fullPath, "/")
if fullPath == "/" {
fullPath = ""
}
var entries fs.DirEntries
result, err := f.getFolderList(ctx, fullPath)
if err != nil {
return nil, err
}
fldMap := map[string]bool{}
for _, folder := range result.Result.Folders {
fldMap[folder.FldID.String()] = true
if f.root == "" && dir == "" && strings.Contains(folder.Path, "/") {
continue
}
paths := strings.Split(folder.Path, fullPath+"/")
remote := paths[0]
if len(paths) > 1 {
remote = paths[1]
}
if strings.Contains(remote, "/") {
continue
}
pathsWithoutRoot := strings.Split(folder.Path, "/"+f.root+"/")
remotePathWithoutRoot := pathsWithoutRoot[0]
if len(pathsWithoutRoot) > 1 {
remotePathWithoutRoot = pathsWithoutRoot[1]
}
remotePathWithoutRoot = strings.TrimPrefix(remotePathWithoutRoot, "/")
entries = append(entries, fs.NewDir(remotePathWithoutRoot, time.Now()))
}
for _, file := range result.Result.Files {
if _, ok := fldMap[file.FldID.String()]; ok {
continue
}
remote := path.Join(dir, file.Name)
// trim leading slashes
remote = strings.TrimPrefix(remote, "/")
obj := &Object{
fs: f,
remote: remote,
size: file.Size,
modTime: time.Now(),
}
entries = append(entries, obj)
}
return entries, nil
}
// Put uploads a file directly to the destination folder in the FileLu storage system.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
err := f.uploadFile(ctx, in, src.Remote())
if err != nil {
return nil, err
}
newObject := &Object{
fs: f,
remote: src.Remote(),
size: src.Size(),
modTime: src.ModTime(ctx),
}
fs.Infof(f, "Put: Successfully uploaded new file %q", src.Remote())
return newObject, nil
}
// Move moves the file to the specified location
func (f *Fs) Move(ctx context.Context, src fs.Object, destinationPath string) (fs.Object, error) {
if strings.HasPrefix(destinationPath, "/") || strings.Contains(destinationPath, ":\\") {
dir := path.Dir(destinationPath)
if err := os.MkdirAll(dir, 0755); err != nil {
return nil, fmt.Errorf("failed to create destination directory: %w", err)
}
reader, err := src.Open(ctx)
if err != nil {
return nil, fmt.Errorf("failed to open source file: %w", err)
}
defer func() {
if err := reader.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
dest, err := os.Create(destinationPath)
if err != nil {
return nil, fmt.Errorf("failed to create destination file: %w", err)
}
defer func() {
if err := dest.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
if _, err := io.Copy(dest, reader); err != nil {
return nil, fmt.Errorf("failed to copy file content: %w", err)
}
if err := src.Remove(ctx); err != nil {
return nil, fmt.Errorf("failed to remove source file: %w", err)
}
return nil, nil
}
reader, err := src.Open(ctx)
if err != nil {
return nil, fmt.Errorf("failed to open source object: %w", err)
}
defer func() {
if err := reader.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
err = f.uploadFile(ctx, reader, destinationPath)
if err != nil {
return nil, fmt.Errorf("failed to upload file to destination: %w", err)
}
if err := src.Remove(ctx); err != nil {
return nil, fmt.Errorf("failed to delete source file: %w", err)
}
return &Object{
fs: f,
remote: destinationPath,
size: src.Size(),
modTime: src.ModTime(ctx),
}, nil
}
// Rmdir removes a directory
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
fullPath := path.Join(f.root, dir)
if fullPath != "" {
fullPath = "/" + strings.Trim(fullPath, "/")
}
// Step 1: Check if folder is empty
listResp, err := f.getFolderList(ctx, fullPath)
if err != nil {
return err
}
if len(listResp.Result.Files) > 0 || len(listResp.Result.Folders) > 0 {
return fmt.Errorf("Rmdir: directory %q is not empty", fullPath)
}
// Step 2: Delete the folder
return f.deleteFolder(ctx, fullPath)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)
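A quick note on how paths map onto the FileLu API in the code above (remote layout hypothetical): with the backend rooted at backup, List(ctx, "photos") requests folder/list with folder_path=/backup/photos, Purge(ctx, "photos") calls folder/delete on the same path, and files returned by the listing are surfaced relative to the root (photos/img.jpg rather than /backup/photos/img.jpg).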


@@ -0,0 +1,324 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"github.com/rclone/rclone/backend/filelu/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// createFolder creates a folder at the specified path.
func (f *Fs) createFolder(ctx context.Context, dirPath string) (*api.CreateFolderResponse, error) {
encodedDir := f.fromStandardPath(dirPath)
apiURL := fmt.Sprintf("%s/folder/create?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(encodedDir),
url.QueryEscape(f.opt.Key), // assuming f.opt.Key is the correct field
)
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
var resp *http.Response
result := api.CreateFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
var innerErr error
resp, innerErr = f.client.Do(req)
return fserrors.ShouldRetry(innerErr), innerErr
})
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
err = json.NewDecoder(resp.Body).Decode(&result)
if err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return nil, fmt.Errorf("error: %s", result.Msg)
}
fs.Infof(f, "Successfully created folder %q with ID %v", dirPath, result.Result.FldID)
return &result, nil
}
// getFolderList List both files and folders in a directory.
func (f *Fs) getFolderList(ctx context.Context, path string) (*api.FolderListResponse, error) {
encodedDir := f.fromStandardPath(path)
apiURL := fmt.Sprintf("%s/folder/list?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(encodedDir),
url.QueryEscape(f.opt.Key),
)
var body []byte
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to list directory: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err = io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("error reading response body: %w", err)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return nil, err
}
var response api.FolderListResponse
if err := json.NewDecoder(bytes.NewReader(body)).Decode(&response); err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if response.Status != 200 {
if strings.Contains(response.Msg, "Folder not found") {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("API error: %s", response.Msg)
}
for index := range response.Result.Folders {
response.Result.Folders[index].Path = f.toStandardPath(response.Result.Folders[index].Path)
}
for index := range response.Result.Files {
response.Result.Files[index].Name = f.toStandardPath(response.Result.Files[index].Name)
}
return &response, nil
}
// deleteFolder deletes a folder at the specified path.
func (f *Fs) deleteFolder(ctx context.Context, fullPath string) error {
fullPath = f.fromStandardPath(fullPath)
deleteURL := fmt.Sprintf("%s/folder/delete?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(fullPath),
url.QueryEscape(f.opt.Key),
)
delResp := api.DeleteFolderResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", deleteURL, nil)
if err != nil {
return false, err
}
resp, err := f.client.Do(req)
if err != nil {
return fserrors.ShouldRetry(err), err
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return false, err
}
if err := json.Unmarshal(body, &delResp); err != nil {
return false, fmt.Errorf("error decoding delete response: %w", err)
}
if delResp.Status != 200 {
return false, fmt.Errorf("delete error: %s", delResp.Msg)
}
return false, nil
})
if err != nil {
return err
}
fs.Infof(f, "Rmdir: successfully deleted %q", fullPath)
return nil
}
// getDirectLink of file from FileLu to download.
func (f *Fs) getDirectLink(ctx context.Context, filePath string) (string, int64, error) {
filePath = f.fromStandardPath(filePath)
apiURL := fmt.Sprintf("%s/file/direct_link?file_path=%s&key=%s",
f.endpoint,
url.QueryEscape(filePath),
url.QueryEscape(f.opt.Key),
)
result := api.FileDirectLinkResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to fetch direct link: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", 0, err
}
return result.Result.URL, result.Result.Size, nil
}
// deleteFile deletes a file based on filePath
func (f *Fs) deleteFile(ctx context.Context, filePath string) error {
filePath = f.fromStandardPath(filePath)
apiURL := fmt.Sprintf("%s/file/remove?file_path=%s&key=%s",
f.endpoint,
url.QueryEscape(filePath),
url.QueryEscape(f.opt.Key),
)
result := api.DeleteFileResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to delete file: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
return err
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo(ctx context.Context) (*api.AccountInfoResponse, error) {
opts := rest.Opts{
Method: "GET",
Path: "/account/info",
Parameters: url.Values{
"key": {f.opt.Key},
},
}
var result api.AccountInfoResponse
err := f.pacer.Call(func() (bool, error) {
_, callErr := f.srv.CallJSON(ctx, &opts, nil, &result)
return fserrors.ShouldRetry(callErr), callErr
})
if err != nil {
return nil, err
}
if result.Status != 200 {
return nil, fmt.Errorf("error: %s", result.Msg)
}
return &result, nil
}
// getFileInfo retrieves file information based on file code
func (f *Fs) getFileInfo(ctx context.Context, fileCode string) (*api.FileInfoResponse, error) {
u, _ := url.Parse(f.endpoint + "/file/info2")
q := u.Query()
q.Set("file_code", fileCode) // raw path — Go handles escaping properly here
q.Set("key", f.opt.Key)
u.RawQuery = q.Encode()
apiURL := f.endpoint + "/file/info2?" + u.RawQuery
var body []byte
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to fetch file info: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err = io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("error reading response body: %w", err)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return nil, err
}
result := api.FileInfoResponse{}
if err := json.NewDecoder(bytes.NewReader(body)).Decode(&result); err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 || len(result.Result) == 0 {
return nil, fs.ErrorObjectNotFound
}
return &result, nil
}

View File

@@ -0,0 +1,193 @@
package filelu
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"net/url"
"path"
"strings"
"github.com/rclone/rclone/fs"
)
// uploadFile uploads a file to FileLu
func (f *Fs) uploadFile(ctx context.Context, fileContent io.Reader, fileFullPath string) error {
directory := path.Dir(fileFullPath)
fileName := path.Base(fileFullPath)
if directory == "." {
directory = ""
}
destinationFolderPath := path.Join(f.root, directory)
if destinationFolderPath != "" {
destinationFolderPath = "/" + strings.Trim(destinationFolderPath, "/")
}
existingEntries, err := f.List(ctx, path.Dir(fileFullPath))
if err != nil {
if errors.Is(err, fs.ErrorDirNotFound) {
err = f.Mkdir(ctx, path.Dir(fileFullPath))
if err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
} else {
return fmt.Errorf("failed to list existing files: %w", err)
}
}
for _, entry := range existingEntries {
if entry.Remote() == fileFullPath {
_, ok := entry.(fs.Object)
if !ok {
continue
}
// A file with this name already exists, so delete it before uploading the replacement
filePath := "/" + strings.Trim(destinationFolderPath+"/"+fileName, "/")
err = f.deleteFile(ctx, filePath)
if err != nil {
return fmt.Errorf("failed to delete existing file: %w", err)
}
}
}
uploadURL, sessID, err := f.getUploadServer(ctx)
if err != nil {
return fmt.Errorf("failed to retrieve upload server: %w", err)
}
// Since the fileCode isn't used, just handle the error
if _, err := f.uploadFileWithDestination(ctx, uploadURL, sessID, fileName, fileContent, destinationFolderPath); err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
return nil
}
// getUploadServer gets the upload server URL with proper key authentication
func (f *Fs) getUploadServer(ctx context.Context) (string, string, error) {
apiURL := fmt.Sprintf("%s/upload/server?key=%s", f.endpoint, url.QueryEscape(f.opt.Key))
var result struct {
Status int `json:"status"`
SessID string `json:"sess_id"`
Result string `json:"result"`
Msg string `json:"msg"`
}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to get upload server: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", "", err
}
return result.Result, result.SessID, nil
}
// uploadFileWithDestination uploads a file directly to a specified folder using file content reader.
func (f *Fs) uploadFileWithDestination(ctx context.Context, uploadURL, sessID, fileName string, fileContent io.Reader, dirPath string) (string, error) {
destinationPath := f.fromStandardPath(dirPath)
encodedFileName := f.fromStandardPath(fileName)
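// Stream the multipart form through a pipe so the file content is never buffered in memory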
pr, pw := io.Pipe()
writer := multipart.NewWriter(pw)
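// isDeletionRequired is set once file content has been partially written, so a failed upload can be cleaned up on the server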
isDeletionRequired := false
go func() {
defer func() {
if err := pw.Close(); err != nil {
fs.Logf(nil, "Failed to close: %v", err)
}
}()
_ = writer.WriteField("sess_id", sessID)
_ = writer.WriteField("utype", "prem")
_ = writer.WriteField("fld_path", destinationPath)
part, err := writer.CreateFormFile("file_0", encodedFileName)
if err != nil {
pw.CloseWithError(fmt.Errorf("failed to create form file: %w", err))
return
}
if _, err := io.Copy(part, fileContent); err != nil {
isDeletionRequired = true
pw.CloseWithError(fmt.Errorf("failed to copy file content: %w", err))
return
}
if err := writer.Close(); err != nil {
pw.CloseWithError(fmt.Errorf("failed to close writer: %w", err))
}
}()
var fileCode string
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "POST", uploadURL, pr)
if err != nil {
return false, fmt.Errorf("failed to create upload request: %w", err)
}
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to send upload request: %w", err)
}
defer respBodyClose(resp.Body)
var result []struct {
FileCode string `json:"file_code"`
FileStatus string `json:"file_status"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("failed to parse upload response: %w", err)
}
if len(result) == 0 {
return false, errors.New("upload failed: empty response from upload server")
}
if result[0].FileStatus != "OK" {
return false, fmt.Errorf("upload failed with status: %s", result[0].FileStatus)
}
fileCode = result[0].FileCode
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil && isDeletionRequired {
// Attempt to delete the file if upload fails
_ = f.deleteFile(ctx, destinationPath+"/"+fileName)
}
return fileCode, err
}
// respBodyClose closes a response body and logs any error.
func respBodyClose(responseBody io.Closer) {
if cerr := responseBody.Close(); cerr != nil {
fs.Logf(nil, "Failed to close response body: %v", cerr)
}
}

View File

@@ -0,0 +1,112 @@
package filelu
import (
"context"
"errors"
"fmt"
"path"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
)
// errFileNotFound is returned when a file cannot be found on the remote
var errFileNotFound = errors.New("file not found")
// getFileCode retrieves the file code for a given file path
func (f *Fs) getFileCode(ctx context.Context, filePath string) (string, error) {
// Prepare parent directory
parentDir := path.Dir(filePath)
// Call List to get all the files
result, err := f.getFolderList(ctx, parentDir)
if err != nil {
return "", err
}
for _, file := range result.Result.Files {
filePathFromServer := parentDir + "/" + file.Name
if parentDir == "/" {
filePathFromServer = "/" + file.Name
}
if filePath == filePathFromServer {
return file.FileCode, nil
}
}
return "", errFileNotFound
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
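// fromStandardPath converts a standard rclone path to the encoding used by the FileLu API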
func (f *Fs) fromStandardPath(remote string) string {
return f.opt.Enc.FromStandardPath(remote)
}
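// toStandardPath converts a path from the FileLu API encoding back to the standard rclone form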
func (f *Fs) toStandardPath(remote string) string {
return f.opt.Enc.ToStandardPath(remote)
}
// Hashes returns an empty hash set, indicating no hash support
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet() // Properly creates an empty hash set
}
// Name returns the remote name
func (f *Fs) Name() string {
return f.name
}
// Root returns the root path
func (f *Fs) Root() string {
return f.root
}
// Precision returns the precision of the remote
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
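// String returns a description of the Fs for logs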
func (f *Fs) String() string {
return fmt.Sprintf("FileLu root '%s'", f.root)
}
// isFileCode checks if a string looks like a file code
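// A file code is exactly 12 lowercase letters or digits, for example "abc123def456".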
func isFileCode(s string) bool {
if len(s) != 12 {
return false
}
for _, c := range s {
if !((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) {
return false
}
}
return true
}
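// shouldRetry reports whether a low level error is retryable, delegating to fserrors.ShouldRetry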
func shouldRetry(err error) bool {
return fserrors.ShouldRetry(err)
}
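// shouldRetryHTTP reports whether an HTTP status code should be retried (429 or any 5xx)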
func shouldRetryHTTP(code int) bool {
return code == 429 || code >= 500
}
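// rootSplit splits an absolute path into its first component and the rest,
// e.g. rootSplit("bucket/a/b") returns ("bucket", "a/b").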
func rootSplit(absPath string) (bucket, bucketPath string) {
// No bucket
if absPath == "" {
return "", ""
}
slash := strings.IndexRune(absPath, '/')
// Bucket but no path
if slash < 0 {
return absPath, ""
}
return absPath[:slash], absPath[slash+1:]
}

View File

@@ -0,0 +1,259 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
)
// Object describes a FileLu object
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
}
// NewObject creates a new Object for the given remote path
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
filePath := path.Join(f.root, remote)
filePath = "/" + strings.Trim(filePath, "/")
// Get File code
fileCode, err := f.getFileCode(ctx, filePath)
if err != nil {
return nil, fs.ErrorObjectNotFound
}
// Get File info
fileInfos, err := f.getFileInfo(ctx, fileCode)
if err != nil {
return nil, fmt.Errorf("failed to get file info: %w", err)
}
fileInfo := fileInfos.Result[0]
size, _ := strconv.ParseInt(fileInfo.Size, 10, 64)
returnedRemote := remote
return &Object{
fs: f,
remote: returnedRemote,
size: size,
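// modification times are not supported by this backend (see Precision), so use the current time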
modTime: time.Now(),
}, nil
}
// Open opens the object for reading
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
filePath := path.Join(o.fs.root, o.remote)
// Get direct link
directLink, size, err := o.fs.getDirectLink(ctx, filePath)
if err != nil {
return nil, fmt.Errorf("failed to get direct link: %w", err)
}
o.size = size
// Offset and Count for range download
var offset int64
var count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
var reader io.ReadCloser
err = o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", directLink, nil)
if err != nil {
return false, fmt.Errorf("failed to create download request: %w", err)
}
resp, err := o.fs.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to download file: %w", err)
}
if resp.StatusCode != http.StatusOK {
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
return false, fmt.Errorf("failed to download file: HTTP %d", resp.StatusCode)
}
// Read the whole body and apply the offset and count client-side
currentContents, err := io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("failed to read response body: %w", err)
}
if offset > 0 {
if offset > int64(len(currentContents)) {
return false, fmt.Errorf("offset %d exceeds file size %d", offset, len(currentContents))
}
currentContents = currentContents[offset:]
}
if count > 0 && count < int64(len(currentContents)) {
currentContents = currentContents[:count]
}
reader = io.NopCloser(bytes.NewReader(currentContents))
return false, nil
})
if err != nil {
return nil, err
}
return reader, nil
}
// Update updates the object with new data
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if src.Size() <= 0 {
return fs.ErrorCantUploadEmptyFiles
}
err := o.fs.uploadFile(ctx, in, o.remote)
if err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
o.size = src.Size()
return nil
}
// Remove deletes the object from FileLu
func (o *Object) Remove(ctx context.Context) error {
fullPath := "/" + strings.Trim(path.Join(o.fs.root, o.remote), "/")
err := o.fs.deleteFile(ctx, fullPath)
if err != nil {
return err
}
fs.Infof(o.fs, "Successfully deleted file: %s", fullPath)
return nil
}
// Hash returns the MD5 hash of an object
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
var fileCode string
if isFileCode(o.fs.root) {
fileCode = o.fs.root
} else {
matches := regexp.MustCompile(`\((.*?)\)`).FindAllStringSubmatch(o.remote, -1)
for _, match := range matches {
if len(match) > 1 && len(match[1]) == 12 {
fileCode = match[1]
break
}
}
}
if fileCode == "" {
return "", fmt.Errorf("no valid file code found in the remote path")
}
apiURL := fmt.Sprintf("%s/file/info?file_code=%s&key=%s",
o.fs.endpoint, url.QueryEscape(fileCode), url.QueryEscape(o.fs.opt.Key))
var result struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result []struct {
Hash string `json:"hash"`
} `json:"result"`
}
err := o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, err
}
resp, err := o.fs.client.Do(req)
if err != nil {
return shouldRetry(err), err
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, err
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", err
}
if result.Status != 200 || len(result.Result) == 0 {
return "", fmt.Errorf("error: unable to fetch hash: %s", result.Msg)
}
return result.Result[0].Hash, nil
}
// String returns a string representation of the object
func (o *Object) String() string {
return o.remote
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the object
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Storable indicates whether the object is storable
func (o *Object) Storable() bool {
return true
}

View File

@@ -0,0 +1,16 @@
package filelu_test
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests for the FileLu backend
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFileLu:",
NilObject: nil,
SkipInvalidUTF8: true,
})
}

backend/filelu/utils.go Normal file
View File

@@ -0,0 +1,15 @@
package filelu
import (
"fmt"
)
// parseStorageToBytes converts a storage value given in gigabytes to bytes
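// For example, parseStorageToBytes("10") returns 10 * 1024 * 1024 * 1024 bytes.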
func parseStorageToBytes(storage string) (int64, error) {
var gb float64
_, err := fmt.Sscanf(storage, "%f", &gb)
if err != nil {
return 0, fmt.Errorf("failed to parse storage: %w", err)
}
return int64(gb * 1024 * 1024 * 1024), nil
}

View File

@@ -10,6 +10,7 @@ import (
"net/http"
"net/url"
"path"
"slices"
"strings"
"time"
@@ -169,11 +170,9 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
for _, e := range retryErrorCodes {
if apiErr.HttpCode == e {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
if slices.Contains(retryErrorCodes, apiErr.HttpCode) {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}

View File

@@ -9,6 +9,7 @@ import (
"io"
"net"
"net/textproto"
"net/url"
"path"
"runtime"
"strings"
@@ -162,6 +163,16 @@ Enabled by default. Use 0 to disable.`,
Help: "Disable TLS 1.3 (workaround for FTP servers with buggy TLS)",
Default: false,
Advanced: true,
}, {
Name: "allow_insecure_tls_ciphers",
Help: `Allow insecure TLS ciphers
Setting this flag will allow the usage of the following TLS ciphers in addition to the secure defaults:
- TLS_RSA_WITH_AES_128_GCM_SHA256
`,
Default: false,
Advanced: true,
}, {
Name: "shut_timeout",
Help: "Maximum time to wait for data connection closing status.",
@@ -185,6 +196,14 @@ Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "http_proxy",
Default: "",
Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
`,
Advanced: true,
}, {
@@ -227,28 +246,30 @@ a write only folder.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
AllowInsecureTLSCiphers bool `config:"allow_insecure_tls_ciphers"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
HTTPProxy string `config:"http_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@@ -266,6 +287,7 @@ type Fs struct {
pool []*ftp.ServerConn
drain *time.Timer // used to drain the pool when we stop using the connections
tokens *pacer.TokenDispenser
proxyURL *url.URL // address of HTTP proxy read from environment
pacer *fs.Pacer // pacer for FTP connections
fGetTime bool // true if the ftp library accepts GetTime
fSetTime bool // true if the ftp library accepts SetTime
@@ -396,6 +418,14 @@ func (f *Fs) tlsConfig() *tls.Config {
if f.opt.DisableTLS13 {
tlsConfig.MaxVersion = tls.VersionTLS12
}
if f.opt.AllowInsecureTLSCiphers {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
tlsConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
}
}
return tlsConfig
}
@@ -413,11 +443,26 @@ func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
dial := func(network, address string) (conn net.Conn, err error) {
fs.Debugf(f, "dial(%q,%q)", network, address)
defer func() {
fs.Debugf(f, "> dial: conn=%T, err=%v", conn, err)
if err != nil {
fs.Debugf(f, "> dial: conn=%v, err=%v", conn, err)
} else {
fs.Debugf(f, "> dial: conn=%s->%s, err=%v", conn.LocalAddr(), conn.RemoteAddr(), err)
}
}()
baseDialer := fshttp.NewDialer(ctx)
if f.opt.SocksProxy != "" {
conn, err = proxy.SOCKS5Dial(network, address, f.opt.SocksProxy, baseDialer)
} else if f.proxyURL != nil {
// We need to make the onward connection to f.opt.Host. However the FTP
// library sets the host to the proxy IP after using EPSV or PASV so we need
// to correct that here.
var dialPort string
_, dialPort, err = net.SplitHostPort(address)
if err != nil {
return nil, err
}
dialAddress := net.JoinHostPort(f.opt.Host, dialPort)
conn, err = proxy.HTTPConnectDial(network, dialAddress, f.proxyURL, baseDialer)
} else {
conn, err = baseDialer.Dial(network, address)
}
@@ -631,6 +676,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
// get proxy URL if set
if opt.HTTPProxy != "" {
proxyURL, err := url.Parse(opt.HTTPProxy)
if err != nil {
return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
}
f.proxyURL = proxyURL
}
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })

View File

@@ -17,7 +17,7 @@ import (
"github.com/stretchr/testify/require"
)
type settings map[string]interface{}
type settings map[string]any
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash
@@ -52,7 +52,7 @@ func (f *Fs) testUploadTimeout(t *testing.T) {
ci.Timeout = saveTimeout
}()
ci.LowLevelRetries = 1
ci.Timeout = idleTimeout
ci.Timeout = fs.Duration(idleTimeout)
upload := func(concurrency int, shutTimeout time.Duration) (obj fs.Object, err error) {
fixFs := deriveFs(ctx, t, f, settings{

View File

@@ -194,33 +194,9 @@ type DeleteResponse struct {
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
// DirectUploadURL returns the direct upload URL for Gofile
func DirectUploadURL() string {
return "https://upload.gofile.io/uploadfile"
}
// UploadResponse is returned by POST /contents/uploadfile

View File

@@ -8,13 +8,11 @@ import (
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/rclone/rclone/backend/gofile/api"
@@ -25,7 +23,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
@@ -37,10 +35,8 @@ const (
maxSleep = 20 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
rootURL = "https://api.gofile.io"
serversExpiry = 60 * time.Second // check for new upload servers this often
serversActive = 2 // choose this many closest upload servers to use
rateLimitSleep = 5 * time.Second // penalise a goroutine by this long for making a rate limit error
maxDepth = 4 // in ListR recursive list this deep (maximum is 16)
rateLimitSleep = 5 * time.Second // penalise a goroutine by this long for making a rate limit error
maxDepth = 4 // in ListR recursive list this deep (maximum is 16)
)
/*
@@ -128,16 +124,13 @@ type Options struct {
// Fs represents a remote gofile
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
serversMu *sync.Mutex // protect the servers info below
servers []api.Server // upload servers we can use
serversChecked time.Time // time the servers were refreshed
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
}
// Object describes a gofile object
@@ -311,12 +304,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
client := fshttp.NewClient(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
serversMu: new(sync.Mutex),
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: false,
@@ -435,98 +427,6 @@ func (f *Fs) readRootFolderID(ctx context.Context, m configmap.Mapper) (err erro
return nil
}
// Find the top n servers measured by response time
func (f *Fs) bestServers(ctx context.Context, servers []api.Server, n int) (newServers []api.Server) {
ctx, cancel := context.WithDeadline(ctx, time.Now().Add(10*time.Second))
defer cancel()
if n > len(servers) {
n = len(servers)
}
results := make(chan int, len(servers))
// Test how long the servers take to respond
for i := range servers {
i := i // for closure
go func() {
opts := rest.Opts{
Method: "GET",
RootURL: servers[i].Root(),
}
var result api.UploadServerStatus
start := time.Now()
_, err := f.srv.CallJSON(ctx, &opts, nil, &result)
ping := time.Since(start)
err = result.Err(err)
if err != nil {
results <- -1 // send a -ve number on error
return
}
fs.Debugf(nil, "Upload server %v responded in %v", &servers[i], ping)
results <- i
}()
}
// Wait for n servers to respond
newServers = make([]api.Server, 0, n)
for range servers {
i := <-results
if i >= 0 {
newServers = append(newServers, servers[i])
}
if len(newServers) >= n {
break
}
}
return newServers
}
// Clear all the upload servers - call on an error
func (f *Fs) clearServers() {
f.serversMu.Lock()
defer f.serversMu.Unlock()
fs.Debugf(f, "Clearing upload servers")
f.servers = nil
}
// Gets an upload server
func (f *Fs) getServer(ctx context.Context) (server *api.Server, err error) {
f.serversMu.Lock()
defer f.serversMu.Unlock()
if len(f.servers) == 0 || time.Since(f.serversChecked) >= serversExpiry {
opts := rest.Opts{
Method: "GET",
Path: "/servers",
}
var result api.ServersResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err)
})
if err = result.Err(err); err != nil {
if len(f.servers) == 0 {
return nil, fmt.Errorf("failed to read upload servers: %w", err)
}
fs.Errorf(f, "failed to read new upload servers: %v", err)
} else {
// Find the top servers measured by response time
f.servers = f.bestServers(ctx, result.Data.Servers, serversActive)
f.serversChecked = time.Now()
}
}
if len(f.servers) == 0 {
return nil, errors.New("no upload servers found")
}
// Pick a server at random since we've already found the top ones
i := rand.Intn(len(f.servers))
return &f.servers[i], nil
}
// rootSlash returns root with a slash on if it is empty, otherwise empty string
func (f *Fs) rootSlash() string {
if f.root == "" {
@@ -734,7 +634,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
// implementation of ListR
func (f *Fs) listR(ctx context.Context, dir string, list *walk.ListRHelper) (err error) {
func (f *Fs) listR(ctx context.Context, dir string, list *list.Helper) (err error) {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
@@ -820,7 +720,7 @@ func (f *Fs) listR(ctx context.Context, dir string, list *walk.ListRHelper) (err
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
err = f.listR(ctx, dir, list)
if err != nil {
return err
@@ -1526,13 +1426,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// Find an upload server
server, err := o.fs.getServer(ctx)
if err != nil {
return err
}
fs.Debugf(o, "Using upload server %v", server)
// If the file exists, delete it after a successful upload
if o.id != "" {
id := o.id
@@ -1561,7 +1454,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
},
MultipartContentName: "file",
MultipartFileName: o.fs.opt.Enc.FromStandardName(leaf),
RootURL: server.URL(),
RootURL: api.DirectUploadURL(),
Options: options,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1569,10 +1462,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return shouldRetry(ctx, resp, err)
})
if err = result.Err(err); err != nil {
if isAPIErr(err, "error-freespace") {
fs.Errorf(o, "Upload server out of space - need to retry upload")
}
o.fs.clearServers()
return fmt.Errorf("failed to upload file: %w", err)
}
return o.setMetaData(&result.Data)

View File

@@ -35,7 +35,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -483,6 +483,9 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
if f.opt.DirectoryMarkers && strings.HasSuffix(bucketPath, "//") {
bucketPath = bucketPath[:len(bucketPath)-1]
}
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
@@ -712,7 +715,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
}
remote = remote[len(prefix):]
if addBucket {
@@ -845,7 +848,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -959,7 +962,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""

View File

@@ -4,6 +4,7 @@ package googlephotos
import (
"path"
"slices"
"strings"
"sync"
@@ -119,7 +120,7 @@ func (as *albums) _del(album *api.Album) {
dirs := as.path[dir]
for i, dir := range dirs {
if dir == leaf {
dirs = append(dirs[:i], dirs[i+1:]...)
dirs = slices.Delete(dirs, i, i+1)
break
}
}

View File

@@ -43,6 +43,7 @@ var (
errAlbumDelete = errors.New("google photos API does not implement deleting albums")
errRemove = errors.New("google photos API only implements removing files from albums")
errOwnAlbums = errors.New("google photos API only allows uploading to albums rclone created")
errReadOnly = errors.New("can't upload files in read only mode")
)
const (
@@ -52,19 +53,31 @@ const (
listChunks = 100 // chunk size to read directory listings
albumChunks = 50 // chunk size to read album listings
minSleep = 10 * time.Millisecond
scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly"
scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary"
scopeAccess = 2 // position of access scope in list
scopeAppendOnly = "https://www.googleapis.com/auth/photoslibrary.appendonly"
scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata"
scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata"
)
var (
// scopes needed for read write access
scopesReadWrite = []string{
"openid",
"profile",
scopeAppendOnly,
scopeReadOnly,
scopeReadWrite,
}
// scopes needed for read only access
scopesReadOnly = []string{
"openid",
"profile",
scopeReadOnly,
}
// Description of how to auth for this app
oauthConfig = &oauthutil.Config{
Scopes: []string{
"openid",
"profile",
scopeReadWrite, // this must be at position scopeAccess
},
Scopes: scopesReadWrite,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
@@ -100,20 +113,26 @@ func init() {
case "":
// Fill in the scopes
if opt.ReadOnly {
oauthConfig.Scopes[scopeAccess] = scopeReadOnly
oauthConfig.Scopes = scopesReadOnly
} else {
oauthConfig.Scopes[scopeAccess] = scopeReadWrite
oauthConfig.Scopes = scopesReadWrite
}
return oauthutil.ConfigOut("warning", &oauthutil.Options{
return oauthutil.ConfigOut("warning1", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
case "warning":
case "warning1":
// Warn the user as required by google photos integration
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
return fs.ConfigConfirm("warning2", true, "config_warning", `Warning
IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
case "warning2":
// Warn the user that rclone can no longer download photos it didn't upload from google photos
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
IMPORTANT: Due to Google policy changes rclone can now only download photos it uploaded.`)
case "warning_done":
return nil, nil
}
@@ -167,7 +186,7 @@ listings and won't be transferred.`,
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
However if you ue the gphotosdl proxy tnen you can download original,
However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
@@ -333,7 +352,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
baseClient := fshttp.NewClient(ctx)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(ctx, name, m, oauthConfig, baseClient)
if err != nil {
return nil, fmt.Errorf("failed to configure Box: %w", err)
return nil, fmt.Errorf("failed to configure google photos: %w", err)
}
root = strings.Trim(path.Clean(root), "/")
@@ -388,7 +407,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
Method: "GET",
RootURL: "https://accounts.google.com/.well-known/openid-configuration",
}
var openIDconfig map[string]interface{}
var openIDconfig map[string]any
err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(ctx, resp, err)
@@ -448,7 +467,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
"token_type_hint": []string{"access_token"},
},
}
var res interface{}
var res any
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(ctx, resp, err)
@@ -1120,6 +1139,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
if !album.IsWriteable {
if o.fs.opt.ReadOnly {
return errReadOnly
}
return errOwnAlbums
}

View File

@@ -24,7 +24,7 @@ import (
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "drop":
return nil, f.db.Stop(true)

View File

@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/kv"
)
@@ -182,6 +183,9 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
// Enable ListP always
f.features.ListP = f.ListP
cache.PinUntilFinalized(f.Fs, f)
return f, err
}
@@ -237,10 +241,39 @@ func (f *Fs) wrapEntries(baseEntries fs.DirEntries) (hashEntries fs.DirEntries,
// List the objects and directories in dir into entries.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if entries, err = f.Fs.List(ctx, dir); err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.wrapEntries(entries)
if err != nil {
return err
}
return callback(entries)
}
return f.wrapEntries(entries)
listP := f.Fs.Features().ListP
if listP == nil {
entries, err := f.Fs.List(ctx, dir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
}
// ListR lists the objects and directories recursively into out.

View File

@@ -6,6 +6,7 @@ import (
"encoding/gob"
"errors"
"fmt"
"maps"
"strings"
"time"
@@ -195,9 +196,7 @@ func (op *kvPut) Do(ctx context.Context, b kv.Bucket) (err error) {
r.Fp = op.fp
}
for hashType, hashVal := range op.hashes {
r.Hashes[hashType] = hashVal
}
maps.Copy(r.Hashes, op.hashes)
if data, err = r.encode(op.key); err != nil {
return fmt.Errorf("marshal failed: %w", err)
}

View File

@@ -371,9 +371,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(int64(info.Capacity)),
Used: fs.NewUsageValue(int64(info.Used)),
Free: fs.NewUsageValue(int64(info.Remaining)),
Total: fs.NewUsageValue(info.Capacity),
Used: fs.NewUsageValue(info.Used),
Free: fs.NewUsageValue(info.Remaining),
}, nil
}

View File

@@ -52,10 +52,7 @@ func writeByBlock(p []byte, writer io.Writer, blockSize uint32, bytesInBlock *ui
total := len(p)
nullBytes := make([]byte, blockSize)
for len(p) > 0 {
toWrite := int(blockSize - *bytesInBlock)
if toWrite > len(p) {
toWrite = len(p)
}
toWrite := min(int(blockSize-*bytesInBlock), len(p))
c, err := writer.Write(p[:toWrite])
*bytesInBlock += uint32(c)
*onlyNullBytesInBlock = *onlyNullBytesInBlock && bytes.Equal(nullBytes[:toWrite], p[:toWrite])
@@ -276,7 +273,7 @@ func (h *hidriveHash) Sum(b []byte) []byte {
}
checksum := zeroSum
for i := 0; i < len(h.levels); i++ {
for i := range h.levels {
level := h.levels[i]
if i < len(h.levels)-1 {
// Aggregate non-empty non-final levels.

View File

@@ -216,7 +216,7 @@ func TestLevelWrite(t *testing.T) {
func TestLevelIsFull(t *testing.T) {
content := [hidrivehash.Size]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19}
l := hidrivehash.NewLevel()
for i := 0; i < 256; i++ {
for range 256 {
assert.False(t, l.(internal.LevelHash).IsFull())
written, err := l.Write(content[:])
assert.Equal(t, len(content), written)

View File

@@ -180,7 +180,6 @@ func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Op
}
addHeaders(req, opt)
res, err := noRedir.Do(req)
if err != nil {
fs.Debugf(nil, "Assuming path is a file as HEAD request could not be sent: %v", err)
return createFileResult()
@@ -249,6 +248,14 @@ func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err
f.httpClient = client
f.endpoint = u
f.endpointURL = u.String()
if isFile {
// Correct root if definitely pointing to a file
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
return isFile, nil
}
@@ -505,7 +512,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
entries = append(entries, entry)
entriesMu.Unlock()
}
for i := 0; i < checkers; i++ {
for range checkers {
wg.Add(1)
go func() {
defer wg.Done()
@@ -740,7 +747,7 @@ It doesn't return anything.
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "set":
newOpt := f.opt

View File

@@ -76,7 +76,7 @@ func (c *Client) DriveService() (*DriveService, error) {
// This function is the main entry point for making requests to the iCloud
// API. If the initial request returns a 401 (Unauthorized), it will try to
// reauthenticate and retry the request.
func (c *Client) Request(ctx context.Context, opts rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
func (c *Client) Request(ctx context.Context, opts rest.Opts, request any, response any) (resp *http.Response, err error) {
resp, err = c.Session.Request(ctx, opts, request, response)
if err != nil && resp != nil {
// try to reauth
@@ -100,7 +100,7 @@ func (c *Client) Request(ctx context.Context, opts rest.Opts, request interface{
// This function is useful when you have a session that is already
// authenticated, but you need to make a request without triggering
// a re-authentication.
func (c *Client) RequestNoReAuth(ctx context.Context, opts rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
func (c *Client) RequestNoReAuth(ctx context.Context, opts rest.Opts, request any, response any) (resp *http.Response, err error) {
// Make the request without re-authenticating
resp, err = c.Session.Request(ctx, opts, request, response)
return resp, err
@@ -161,6 +161,6 @@ func newRequestError(Status string, Text string) *RequestError {
}
// newRequestErrorf makes a new error from sprintf parameters.
func newRequestErrorf(Status string, Text string, Parameters ...interface{}) *RequestError {
func newRequestErrorf(Status string, Text string, Parameters ...any) *RequestError {
return newRequestError(strings.ToLower(Status), fmt.Sprintf(Text, Parameters...))
}

View File

@@ -252,18 +252,14 @@ func (d *DriveService) DownloadFile(ctx context.Context, url string, opt []fs.Op
}
resp, err := d.icloud.srv.Call(ctx, opts)
if err != nil {
// icloud has some weird http codes
if resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
// icloud has some weird http codes
if err != nil && resp != nil && resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
return resp, err
}
return d.icloud.srv.Call(ctx, opts)
return resp, err
}
// MoveItemToTrashByItemID moves an item to the trash based on the item ID.
@@ -476,7 +472,7 @@ func (d *DriveService) MoveItemByDriveID(ctx context.Context, id, etag, dstID st
// CopyDocByItemID copies a document by its item ID.
func (d *DriveService) CopyDocByItemID(ctx context.Context, itemID string) (*DriveItemRaw, *http.Response, error) {
// putting name in info doesnt work. extension does work so assume this is a bug in the endpoint
// putting name in info doesn't work. extension does work so assume this is a bug in the endpoint
values := map[string]any{
"info_to_update": map[string]any{},
}
@@ -631,7 +627,7 @@ func NewUpdateFileInfo() UpdateFileInfo {
FileFlags: FileFlags{
IsExecutable: true,
IsHidden: false,
IsWritable: false,
IsWritable: true,
},
}
}
@@ -733,8 +729,8 @@ type DocumentUpdateResponse struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
OperationID interface{} `json:"operation_id"`
Document *Document `json:"document"`
OperationID any `json:"operation_id"`
Document *Document `json:"document"`
} `json:"results"`
}
@@ -765,9 +761,9 @@ type Document struct {
IsWritable bool `json:"is_writable"`
IsHidden bool `json:"is_hidden"`
} `json:"file_flags"`
LastOpenedTime int64 `json:"lastOpenedTime"`
RestorePath interface{} `json:"restorePath"`
HasChainedParent bool `json:"hasChainedParent"`
LastOpenedTime int64 `json:"lastOpenedTime"`
RestorePath any `json:"restorePath"`
HasChainedParent bool `json:"hasChainedParent"`
}
// DriveID returns the drive ID of the Document.

Some files were not shown because too many files have changed in this diff.