mirror of https://github.com/rclone/rclone.git synced 2025-12-31 15:43:53 +00:00

666 Commits

Author SHA1 Message Date
Nick Craig-Wood
7d97ba0a0f pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than chunk size, then deadlock
because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-04-22 17:25:34 +01:00
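A minimal sketch of such a reservation scheme, using a condition variable; the names and design here are illustrative assumptions, not rclone's actual pool code:

    package main

    import (
        "fmt"
        "sync"
    )

    // Reserver lets a multipart transfer claim all the memory for one
    // chunk as a single unit, so many transfers cannot each grab a few
    // buffers and deadlock waiting for the rest.
    type Reserver struct {
        mu   sync.Mutex
        cond *sync.Cond
        free int64 // bytes still available
    }

    func NewReserver(maxMemory int64) *Reserver {
        r := &Reserver{free: maxMemory}
        r.cond = sync.NewCond(&r.mu)
        return r
    }

    // Reserve blocks until n bytes can be claimed all at once.
    func (r *Reserver) Reserve(n int64) {
        r.mu.Lock()
        defer r.mu.Unlock()
        for r.free < n {
            r.cond.Wait()
        }
        r.free -= n
    }

    // Release returns n bytes and wakes waiting reservers.
    func (r *Reserver) Release(n int64) {
        r.mu.Lock()
        r.free += n
        r.mu.Unlock()
        r.cond.Broadcast()
    }

    func main() {
        r := NewReserver(64 << 20) // e.g. --max-memory 64M
        chunkSize := int64(16 << 20)
        r.Reserve(chunkSize) // claim a whole chunk before buffering
        fmt.Println("chunk reserved")
        r.Release(chunkSize)
    }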
Nick Craig-Wood
e0c99d6203 onedrive: fix metadata ordering in permissions
Before this change, due to a quirk in Graph, User permissions could be
lost when applying permissions.

Fixes #8465
2025-04-11 10:38:51 +01:00
Nick Craig-Wood
7af1a930b7 Add Ben Alex to contributors 2025-04-11 10:38:51 +01:00
Nick Craig-Wood
6e46ee4ffa Add simwai to contributors 2025-04-11 10:38:51 +01:00
Ben Alex
4f1fc1a84e iclouddrive: fix so created files are writable
At present any created file (e.g. through the touch command, copy,
mount etc) is read-only in iCloud.

This has been reported by users at
https://forum.rclone.org/t/icloud-and-file-editing-permissions/50659.
2025-04-10 11:38:38 +01:00
simwai
c10b6c5e8e cmd/authorize: show required arguments in help text 2025-04-09 16:30:38 +01:00
yuval-cloudinary
52ff407116 cloudinary: var naming convention - #8416 2025-04-09 15:03:59 +01:00
yuval-cloudinary
078d202f39 cloudinary: automatically add/remove known media files extensions #8416 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
3e105f7e58 Add Markus Gerstel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
02ca72e30c Add Enduriel to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
e567c52457 Add huanghaojun to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
10501d0398 Add simonmcnair to contributors 2025-04-09 15:03:59 +01:00
Nick Craig-Wood
972ed42661 Add Samantha Bowen to contributors 2025-04-09 15:03:59 +01:00
Markus Gerstel
48802b0a3b s3: documentation regression - fixes #8438
We lost a previous documentation fix (#7077) detailing how to restore
single objects from AWS S3 Glacier.

Also make it clearer that rclone provides restore functionality natively.

Co-authored-by: danielkrajnik <dan94kra@gmail.com>
2025-04-09 14:18:18 +01:00
Enduriel
a9c7c493cf hash: add SHA512 support for file hashes 2025-04-09 14:16:22 +01:00
huanghaojun
49f6ed5f5e vfs: fix inefficient directory caching when directory reads are slow
Before this change, when querying directories with large datasets, if
the query duration exceeded the directory cache expiration time, the
cache became invalid by the time results were retrieved. This meant
every execution of `_readDir` triggered `_readDirFromEntries`,
resulting in prolonged processing times.

After this change we update the directory time with the time at the
end of the query.
2025-04-09 11:58:09 +01:00
simonmcnair
a5d03e0ada docs: update fuse version in docker docs 2025-04-09 11:54:06 +01:00
Samantha Bowen
199f61cefa fs/config: Read configuration passwords from stdin even when terminated with EOF - fixes #8480 2025-04-09 11:41:10 +01:00
Dan McArdle
fa78c6443e cmd/gitannex: Reject unknown layout modes in INITREMOTE
This is a "fail fast" improvement. Now, we will reject invalid layout
modes at setup time, rather than deferring failure until the user
attempts a transfer.
2025-04-09 11:27:44 +01:00
Dan McArdle
52e2e4b84c cmd/gitannex: Add configparse.go and refactor
This is a behavior-preserving refactor. I'm mostly just moving the code
that defines and parses configs (e.g. "rcloneremotename") into a new
source file. This lets us focus more on implementing the text protocol
in gitannex.go.
2025-04-09 11:27:44 +01:00
Dan McArdle
1c933372fe cmd/gitannex: Permit remotes with options
It looks like commit 2a1e28f5f5 did not
fix the errors in the integration tests that I hoped it would. Upon
further inspection, I noticed that I forgot that remotes can have
options just like backends.

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Specifically, I believe that TestGitAnnexFstestBackendCases/HandlesInit
should no longer fail on the Azure backend with "INITREMOTE-FAILURE
remote does not exist: TestAzureBlob,directory_markers:".

Issue #7984
2025-04-09 11:27:44 +01:00
Nick Craig-Wood
f5dfe3f5a6 serve ftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
5702b7578c serve sftp: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
703788b40e serve restic: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
aef9c2117e serve s3: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
2a42d95385 serve dlna: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
e37775bb41 serve webdav: add serve rc interface - fixes #4505 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
780f4040ea serve http: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
0b7be6ffb9 serve nfs: add serve rc interface 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
4d9a165e56 serve: Add rc control for serve commands #4505
This adds the framework for serving. The individual servers will be
added in separate commits.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
21e5fa192a configstruct: add SetAny to parse config from the rc
Now that we have unified the config, we can make a much more
convenient rc interface which mirrors the command line exactly, rather
than using the structure of the internal Go structs.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cf571ad661 rc: In options/info make FieldName contain a "." if it should be nested
Before this change the output would have "FieldName": "ListenAddr"
where the value actually needs to be set in a sub object "HTTP".

After this fix it outputs "FieldName": "HTTP.ListenAddr" to indicate
"ListenAddr" needs to be set in the object "HTTP".
2025-04-09 11:12:07 +01:00
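A nested options struct like the following (field names hypothetical) illustrates why the dotted form is needed:

    // Hypothetical shape of a serve command's option structs.
    type HTTPConfig struct {
        ListenAddr string
    }

    type Options struct {
        HTTP HTTPConfig
    }

    // "FieldName": "HTTP.ListenAddr" tells an rc client to set
    // opts.HTTP.ListenAddr rather than a top-level ListenAddr field.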
Nick Craig-Wood
b1456835d8 serve restic: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
b930c4b437 serve s3: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
cebd588092 serve http: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
3c981e6c2c serve webdav: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
6054c4e49d auth proxy: convert options to new style 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
028316ba5d auth proxy: add VFS options parameter for use for default VFS
This is for use from the RC API.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
df457f5802 serve: make the servers self registering
This is so that they can import cmd/serve without causing an import
loop.

The active servers can now be configured by commenting lines out in
cmd/all/all.go like all the other commands.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
084e35c49d lib/http: fix race between Serve() and Shutdown()
This was discovered by the race detector.
2025-04-09 11:12:07 +01:00
Nick Craig-Wood
90ea4a73ad lib/http: add Addr() method to return the first configured server address 2025-04-09 11:12:07 +01:00
Nick Craig-Wood
efe8ac8f35 Add Danny Garside to contributors 2025-04-09 11:12:06 +01:00
Danny Garside
894ef3b375 docs: fix minor typo in box docs 2025-04-08 20:51:22 +01:00
Nick Craig-Wood
385465bfa9 sync: implement --list-cutoff to allow on disk sorting for reduced memory use
Before this change, rclone had to load an entire directory into RAM in
order to sort it so it could be synced.

With directories with millions of entries, this used too much memory.

This fixes the problem by using an on disk sort when there are more
than --list-cutoff entries in a directory.

Fixes #7974
2025-04-08 18:02:24 +01:00
Nick Craig-Wood
0148bd4668 march: Implement callback based syncing
This changes the syncing method to take callbacks for directory
listings rather than being passed the entire directory listing at
once.

This will enable out of memory syncing.
2025-04-08 18:02:24 +01:00
Nick Craig-Wood
0f7ecf6f06 list: add ListDirSortedFn for callback oriented directory listing
This will be used for the out of memory sync
2025-04-08 15:14:09 +01:00
Nick Craig-Wood
08e81f8420 list: Implement Sorter to sort directory entries
Later this will be extended to do out of memory sorts
2025-04-08 15:14:09 +01:00
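A minimal in-memory version of the idea, assuming a simplified entry type (the real Sorter is later extended to sort on disk past --list-cutoff):

    package main

    import (
        "fmt"
        "slices"
        "strings"
    )

    // Entry is a stand-in for a directory entry.
    type Entry struct{ Remote string }

    // sortEntries orders entries by remote path, as syncing requires.
    func sortEntries(entries []Entry) {
        slices.SortFunc(entries, func(a, b Entry) int {
            return strings.Compare(a.Remote, b.Remote)
        })
    }

    func main() {
        entries := []Entry{{"b.txt"}, {"a/c.txt"}, {"a.txt"}}
        sortEntries(entries)
        fmt.Println(entries)
    }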
Nick Craig-Wood
0ac2d2f50f cache: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
42fcb0a6fc hasher: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
490dd14bc5 compress: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
943ea0acae chunker: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
d64a97f973 union: mark ListP as not supported yet 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
5d8f1d4b88 crypt: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
b1d774c2e3 combine: implement ListP interface 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
fad579c4a2 s3: Implement paged listing interface ListP 2025-04-08 15:14:09 +01:00
Nick Craig-Wood
37120ef7bd list: add WithListP helper to implement List for ListP backends 2025-04-08 15:14:09 +01:00
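A sketch of the helper's shape with simplified types (not rclone's exact fs signatures): a whole-directory List can be built from any paged ListP by accumulating the pages.

    // Entry is a stand-in for a directory entry.
    type Entry struct{ Remote string }

    // lister is a stand-in for a backend with a paged listing method.
    type lister interface {
        ListP(dir string, callback func(page []Entry) error) error
    }

    // listAll implements a whole-directory List on top of ListP.
    func listAll(f lister, dir string) ([]Entry, error) {
        var all []Entry
        err := f.ListP(dir, func(page []Entry) error {
            all = append(all, page...)
            return nil
        })
        return all, err
    }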
Nick Craig-Wood
cba653d502 walk: move NewListRHelper into list.Helper to avoid circular dependency
It turns out that the list helpers were at the wrong level and needed
to be pushed down into the fs/list for future work.
2025-04-08 15:14:00 +01:00
Nick Craig-Wood
2a90de9502 fs: define ListP interface for paged listing #4788 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
bff229713a accounting: Add listed stat for number of directory entries listed 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
117f583ebe walk: factor Listing helpers into their own file and add tests 2025-04-08 15:12:53 +01:00
Nick Craig-Wood
205667143c serve nfs: make metadata files have special file handles
Metadata files have the file handle of their source file with
0x00000001 suffixed in big endian so we can look them up directly from
their file handles.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
fe84cbdc9d serve nfs: change the format of --nfs-cache-type symlink file handles
This is a backwards incompatible change which will invalidate the
current handles.

This change adds a 4 byte big endian length prefix to the handles so
we can in future suffix extra info on the handles. This needed to be 4
bytes as Linux does not like file handles which aren't multiples of 4
bytes long.
2025-04-07 13:41:29 +01:00
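A sketch of the framing described in these two commits, with helper names assumed for illustration: a 4-byte big-endian length prefix, an optional 0x00000001 metadata suffix, and padding to a multiple of 4 bytes.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // makeHandle frames an underlying file handle for NFS: length
    // prefix, optional metadata marker, then pad to a 4-byte multiple.
    func makeHandle(fh []byte, metadata bool) []byte {
        h := make([]byte, 4, 8+len(fh))
        binary.BigEndian.PutUint32(h, uint32(len(fh)))
        h = append(h, fh...)
        if metadata {
            h = binary.BigEndian.AppendUint32(h, 0x00000001)
        }
        for len(h)%4 != 0 { // Linux dislikes other handle lengths
            h = append(h, 0)
        }
        return h
    }

    func main() {
        fmt.Printf("% x\n", makeHandle([]byte{0xde, 0xad, 0xbe}, true))
    }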
Nick Craig-Wood
533c6438f3 vfs: add --vfs-metadata-extension to expose metadata sidecar files
This adds --vfs-metadata-extension which can be used to expose sidecar
files with file metadata in. These files don't exist in the listings
until they are accessed.
2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b587b094c9 docs: Add rcloneui.com as Silver Sponsor 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
525798e1a5 Add Klaas Freitag to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
ea63052d36 Add eccoisle to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
b5a99c5011 Add Fernando Fernández to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
56b7015675 Add alingse to contributors 2025-04-07 13:41:29 +01:00
Nick Craig-Wood
4ff970ebab Add Jörn Friedrich Dreyer to contributors 2025-04-07 13:41:29 +01:00
eccoisle
dccb5144c3 docs: replace option --auto-filename-header with --header-filename 2025-04-06 14:28:34 +02:00
dependabot[bot]
33b087171a build: update github.com/golang-jwt/jwt/v5 from 5.2.1 to 5.2.2 to fix CVE-2025-30204
Bumps [github.com/golang-jwt/jwt/v5](https://github.com/golang-jwt/jwt) from 5.2.1 to 5.2.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v5.2.1...v5.2.2)

See: https://github.com/golang-jwt/jwt/security/advisories/GHSA-mh63-6h87-95cp
See: https://www.cve.org/CVERecord?id=CVE-2025-30204

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-06 11:46:13 +01:00
Fernando Fernández
58d9ae1c60 docs/googlephotos: fix typos 2025-04-06 10:49:02 +02:00
dependabot[bot]
20302ab6b9 build: bump github.com/golang-jwt/jwt/v4 from 4.5.1 to 4.5.2
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.1 to 4.5.2.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.1...v4.5.2)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-04 17:26:44 +02:00
alingse
6fb0de62a4 operations: fix call fmt.Errorf with wrong err 2025-04-04 16:21:45 +02:00
Jörn Friedrich Dreyer
839eef0db2 webdav: retry propfind on 425 status
This retries propfind on a 425 status.

In ownCloud Infinite Scale, files might be in that state if
postprocessing is still ongoing. All metadata is available anyway.

Allow item status 425 "too early" for items when changing metadata.

Fixes the upload behavior with ownCloud Infinite Scale.

Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
Co-authored-by: Klaas Freitag <kraft@freisturz.de>
2025-03-26 12:51:04 +00:00
Nick Craig-Wood
267eebe5c9 Add --max-connections to control maximum backend concurrency 2025-03-25 15:49:27 +00:00
Nick Craig-Wood
755d72a591 rc: fix debug/* commands not being available over unix sockets
This was caused by an incorrect handler URL which was passing the
debug/* commands to the debug/pprof handler by accident. This only
happened when using unix sockets.
2025-03-25 15:30:49 +00:00
Dan McArdle
4d38424e6c cmd/gitannex: Prevent tests from hanging when assertion fails
This fixes another way that the gitannex tests can hang.

The issue is that our test harness explicitly called `wg.Done()` at the
end of each test case, but when assertions checked with [require] fail,
they halt test execution and prevent `wg.Done()` from happening.

A second issue is that we were incorrectly calling [require] functions
in the goroutine that runs the gitannex server. I found that [require]
calls [testing.T.FailNow] under the hood, which says "FailNow must be
called from the goroutine running the test or benchmark function, not
from other goroutines created during the test." [1]

This commit fixes both issues by replacing the explicit synchronization
with a `chan error`. This enables us to run the gitannex server in a
goroutine, interact with the server in the test's goroutine, and then at
then end use [require] on the test-associated goroutine to ensure the
server's error/nil value matches expectations.

[1]: https://pkg.go.dev/testing#T.FailNow
2025-03-18 12:38:04 +00:00
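The shape of the fix, sketched with a hypothetical runServer entry point: the server goroutine only reports its error over a channel, and [require] runs solely on the test goroutine.

    import (
        "testing"

        "github.com/stretchr/testify/require"
    )

    // runServer stands in for the gitannex server loop.
    func runServer() error { return nil }

    func TestServer(t *testing.T) {
        errc := make(chan error, 1)
        go func() {
            errc <- runServer() // no require calls on this goroutine
        }()
        // ... drive the protocol from the test goroutine here ...
        require.NoError(t, <-errc) // asserted on the test goroutine
    }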
Dan McArdle
53624222c9 cmd/gitannex: Add explicit timeout for mock stdout reads in tests
It seems like (*testState).readLine() hangs indefinitely when it's
waiting for a line that will never be written [1].

This commit adds an explicit 30-second timeout when reading from the
internal mock stdout. Given that we integrate with fstest, this timeout
needs to be sufficiently long that it accommodates slow-but-successful
operations on real remotes.

[1]: https://github.com/rclone/rclone/pull/8423#issuecomment-2701601290
2025-03-18 12:38:04 +00:00
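One straightforward way to implement such a guard, sketched with a channel and a timer (not necessarily the harness's exact mechanism):

    import (
        "errors"
        "time"
    )

    // readLineTimeout waits up to 30s for a line from the mock stdout.
    func readLineTimeout(lines <-chan string) (string, error) {
        select {
        case line := <-lines:
            return line, nil
        case <-time.After(30 * time.Second):
            return "", errors.New("timed out waiting for output line")
        }
    }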
nielash
44e83d77d7 http: correct root if definitely pointing to a file - fixes #8428
This was formalized in
c69eb84573
But it appears that we forgot to update `http`, and the `FsRoot` test didn't
catch it because we don't currently have an http integration test.
2025-03-17 18:05:23 +00:00
Nick Craig-Wood
19aa366d88 pool: add --max-buffer-memory to limit total buffer memory usage 2025-03-17 18:01:15 +00:00
Nick Craig-Wood
3fb4164d87 filter: Add --hash-filter to deterministically select a subset of files
Fixes #8400
2025-03-17 17:25:59 +00:00
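The underlying idea can be sketched by hashing the file name and keeping bucket N of K; the hash choice here (FNV-1a) is an assumption for illustration, not necessarily what rclone uses.

    package main

    import (
        "fmt"
        "hash/fnv"
    )

    // inSubset reports whether name falls in bucket n of k, so the
    // same files are selected deterministically on every run.
    func inSubset(name string, n, k uint32) bool {
        h := fnv.New32a()
        h.Write([]byte(name))
        return h.Sum32()%k == n
    }

    func main() {
        // e.g. --hash-filter 1/4 keeps roughly a quarter of the files
        fmt.Println(inSubset("dir/file.txt", 1, 4))
    }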
dependabot[bot]
4e2b78f65d build: update golang.org/x/net to 0.36.0. to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This updates golang.org/x/net to fix the problem.

See: https://pkg.go.dev/vuln/GO-2025-3487
See: https://www.cve.org/CVERecord?id=CVE-2025-22869
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-17 17:25:12 +00:00
Nick Craig-Wood
e47f59e1f9 rc: add short parameter to core/stats to not return transferring and checking 2025-03-17 13:44:37 +00:00
Nick Craig-Wood
63c4fef27a fs: fix corruption of SizeSuffix with "B" suffix in config (eg --min-size)
Before this change, the config system round tripped fs.SizeSuffix
values through strings like this, corrupting them in the process.

    "2B" -> 2 -> "2" -> 2048

This caused `--min-size 2B` to be interpreted as `--min-size 2k`.

This fix makes sure SizeSuffix values have a "B" suffix when turned
into a string where necessary, so it becomes

    "2B" -> 2 -> "2B" -> 2

In rclone v2 we should probably declare unsuffixed SizeSuffix values
are in bytes not kBytes (done for rsync compatibility) but this would
be a backwards incompatible change which we don't want for v1.

Fixes #8437
Fixes #8212
Fixes #5169
2025-03-13 09:56:20 +00:00
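A simplified demonstration of the round trip and the fix (the parser is reduced to integers; rclone's real SizeSuffix also handles k, M, G and so on):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parse reads a size: a "B" suffix means bytes, unsuffixed means
    // kBytes (kept for rsync compatibility).
    func parse(s string) int64 {
        if b, ok := strings.CutSuffix(s, "B"); ok {
            n, _ := strconv.ParseInt(b, 10, 64)
            return n
        }
        n, _ := strconv.ParseInt(s, 10, 64)
        return n * 1024
    }

    func main() {
        bad := strconv.FormatInt(parse("2B"), 10)        // "2"
        good := strconv.FormatInt(parse("2B"), 10) + "B" // the fix
        fmt.Println(parse(bad), parse(good))             // 2048 2
    }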
Nick Craig-Wood
a7a7c1d592 filters: show --min-size and --max-size in --dump filters 2025-03-12 12:32:21 +00:00
Nick Craig-Wood
6a7e68aaf2 build: check docs for edits of autogenerated sections
This adds a lint step which checks the top commit for edits to
autogenerated doc sections.
2025-03-10 22:07:19 +00:00
Nick Craig-Wood
6e7a3795f1 Add jack to contributors 2025-03-10 22:07:19 +00:00
jack
177337686a docs: fix incorrect mentions of vfs-cache-min-free-size 2025-03-09 01:23:42 +01:00
Nick Craig-Wood
ccef29bbff fs/object: fix memory object out of bounds Seek 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
64b3d1d539 serve nfs: fix unlikely crash 2025-03-06 11:31:52 +00:00
Nick Craig-Wood
aab6643cea docs: update minimum OS requirements for go1.24 2025-03-05 17:20:10 +00:00
Dan McArdle
2a1e28f5f5 cmd/gitannex: Tweak parsing of "rcloneremotename" config
The "rcloneremotename" (aka "target") config parameter is now permitted
to contain (1) remote names that are defined by environment variables,
but not in an rclone config file, and (2) backend strings such as
":memory:".

This should fix some of the failing integration tests. For context:
https://github.com/rclone/rclone/pull/7987#issuecomment-2688580667

Issue #7984
2025-03-04 16:40:32 +00:00
Dan McArdle
db9205b298 cmd/gitannex: Drop var rebindings now that we have go1.23 2025-03-04 16:40:32 +00:00
Zachary Vorhies
964c6204dd docs: add note for using rclone cat for slicing out a byte range from a file 2025-03-04 16:31:56 +00:00
Jonathan Giannuzzi
65f7eb0fba rcserver: improve content-type check
Some libraries use `application/json; charset=utf-8` as their `Content-Type`, which is valid.
However, we were not decoding the JSON body in that case, resulting in issues communicating with the rcserver.
2025-03-04 16:28:34 +00:00
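A sketch of the improved check using mime.ParseMediaType, which accepts parameters such as charset rather than comparing the raw header string:

    import (
        "mime"
        "net/http"
    )

    // hasJSONBody reports whether a request body should be decoded as
    // JSON, tolerating parameters like "; charset=utf-8".
    func hasJSONBody(r *http.Request) bool {
        mediaType, _, err := mime.ParseMediaType(r.Header.Get("Content-Type"))
        return err == nil && mediaType == "application/json"
    }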
Nick Craig-Wood
401cf81034 build: modernize Go usage
This commit modernizes Go usage. This was done with:

go run golang.org/x/tools/gopls/internal/analysis/modernize/cmd/modernize@latest -fix -test ./...

Then files needed to be `go fmt`ed and a few comments needed to be
restored.

The modernizations include replacing

- if/else conditional assignment by a call to the built-in min or max functions added in go1.21
- sort.Slice(x, func(i, j int) bool) { return s[i] < s[j] } by a call to slices.Sort(s), added in go1.21
- interface{} by the 'any' type added in go1.18
- append([]T(nil), s...) by slices.Clone(s) or slices.Concat(s), added in go1.21
- loop around an m[k]=v map update by a call to one of the Collect, Copy, Clone, or Insert functions from the maps package, added in go1.21
- []byte(fmt.Sprintf...) by fmt.Appendf(nil, ...), added in go1.19
- append(s[:i], s[i+1]...) by slices.Delete(s, i, i+1), added in go1.21
- a 3-clause for i := 0; i < n; i++ {} loop by for i := range n {}, added in go1.22
2025-02-28 11:31:14 +00:00
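Two of the rewrites from the list above, shown before and after (runnable with go1.22+):

    package main

    import "fmt"

    func main() {
        a, b := 3, 7

        // before: if a > b { x = a } else { x = b }
        x := max(a, b) // built-in max, go1.21

        // before: for i := 0; i < 3; i++ { ... }
        for i := range 3 { // range over an int, go1.22
            fmt.Println(i, x)
        }
    }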
Nick Craig-Wood
431386085f build: update all dependencies and fix deprecations 2025-02-26 18:00:58 +00:00
Nick Craig-Wood
bf150a5b7d build: update golang.org/x/crypto to v0.35.0 to fix CVE-2025-22869
SSH servers which implement file transfer protocols are vulnerable to
a denial of service attack from clients which complete the key
exchange slowly, or not at all, causing pending content to be read
into memory, but never transmitted.

This affects users of `rclone serve sftp`.

See: https://pkg.go.dev/vuln/GO-2025-3487
2025-02-26 18:00:58 +00:00
Nick Craig-Wood
ddecfe6e77 build: make go1.23 the minimum go version
This is necessary now that golang.org/x/crypto is only allowing the
last two versions of Go.

See: https://go.googlesource.com/crypto/+/89ff08d67c4d79f9ac619aaf1f7388888798651f
2025-02-26 18:00:58 +00:00
Dan McArdle
68e40dc141 cmd/gitannex: Add to integration tests
This commit registers gitannex's unit tests with the integration tester
by updating the config.yaml file.

Since we have not yet updated the e2e tests to use the fstest framework,
this commit also adds a case to the e2e tests' skipE2eTestIfNecessary()
function.

Issue #7984
2025-02-26 17:18:02 +00:00
Dan McArdle
325f400a88 cmd/gitannex: Simplify verbose failures in tests 2025-02-26 17:18:02 +00:00
Dan McArdle
be33e281b3 cmd/gitannex: Port unit tests to fstest
This enables the unit tests to run on any given backend, via the
`-remote` flag, e.g. `go test -v ./cmd/gitannex/... -remote dropbox:`.

We should also port the gitannex e2e tests at some point.

Issue: #7984
2025-02-26 17:18:02 +00:00
Nick Craig-Wood
0010090d05 vfs: fix integration test failures
In this commit

ceef78ce44 vfs: fix directory cache serving stale data

We added a new test which caused lots of integration test failures.

This fixes the problem by disabling the test unless the feature flag
DirModTimeUpdatesOnWrite is present on the remote.
2025-02-26 12:21:35 +00:00
Nick Craig-Wood
b7f26937f1 azureblob: fix errors not being retried when doing single part copy
Sometimes the Azure blob servers reply with 503 ServerBusy errors and
these should be retried.

Before this change, when testing to see if a single part server side
copy was done, ServerBusy errors were returned to the user rather than
being retried.

Wrapping the call in the pacer fixes the problem and ensures it is
retried properly using the --low-level-retries mechanism.
2025-02-25 10:22:49 +00:00
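The pacer's role can be sketched generically like this (rclone's real lib/pacer additionally rate-limits and backs off between attempts):

    // callWithRetries keeps calling fn while it asks for a retry, up
    // to the limit, mirroring the --low-level-retries mechanism.
    func callWithRetries(retries int, fn func() (retry bool, err error)) error {
        var err error
        for range retries {
            var retry bool
            retry, err = fn()
            if !retry {
                return err
            }
        }
        return err
    }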
Nick Craig-Wood
5037d7368d azureblob: handle retry error codes more carefully 2025-02-24 10:58:26 +00:00
Nick Craig-Wood
0ccf65017f touch: make touch obey --transfers
Before this change, when executed on a directory, rclone would only
touch files sequentially.

This change makes rclone touch --transfers files at once.

Fixes #8402
2025-02-21 15:53:47 +00:00
Nick Craig-Wood
85d467e16a Add luzpaz to contributors 2025-02-21 15:53:44 +00:00
Nick Craig-Wood
cf4b55d965 Add Dave Vasilevsky to contributors 2025-02-21 15:53:10 +00:00
luzpaz
e0d477804b docs: fix various typos
Found via `codespell -q 3 -S "./docs/static,./fs/rc/params_test.go" -L aadd,afile,alledges,bbefore,bu,buda,copys,couldn,crashers,crypted,ddelete,deriver,failre,goup,hashin,hel,inbraces,keep-alives,ket,medias,ment,mis,nd,nin,notin,ois,ot,parth,re-use,re-using,responser,rin,sav,splited,streamin,synching,te,twoo,ue,unknwon,wasn`
2025-02-19 20:30:44 +00:00
Dave Vasilevsky
4fc9583feb dropbox: Retry link without expiry
Dropbox only allows public links with expiry for certain account types.
Rather than erroring for other accounts, retry without expiry.
2025-02-17 20:39:14 +00:00
Dave Vasilevsky
904c9b2e24 Dropbox: Support Dropbox Paper
These files must be "exported" to be useful. The export process
is controlled by the --dropbox-export-formats flag and the ancillary flags
--dropbox-skip-exports and --dropbox-show-all-exports, modeled on the
Google drive equivalents.
2025-02-17 18:20:37 +00:00
emyarod
cdfd748241 chore: update contributor email 2025-02-16 07:52:20 +00:00
Nick Craig-Wood
661027f2cf docs: correct stable release workflow 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
7ecd1638eb Add Lorenz Brun to contributors 2025-02-15 20:08:51 +00:00
Nick Craig-Wood
06b92ddeb3 Add Michael Kebe to contributors 2025-02-15 20:08:51 +00:00
Lorenz Brun
ceef78ce44 vfs: fix directory cache serving stale data
The VFS directory cache layer didn't update directory entry properties
if they were reused after cache invalidation.

Update them unconditionally, as newDir sets them to the same value and
setting a pointer is cheaper in both LoC and CPU cycles than a
branch.

Also add a test exercising this behavior.

Fixes #6335
2025-02-15 15:22:16 +00:00
Anagh Kumar Baranwal
6560ea9bdc build: fix docker plugin build - fixes #8394
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-15 14:23:43 +00:00
Michael Kebe
cda82f3d30 docs: improved sftp limitations
Added a link to `--sftp-path-override` for a better solution with working hash calculation.
2025-02-15 11:11:26 +01:00
Nick Craig-Wood
7da2d8b507 Changelog updates from Version v1.69.1 2025-02-14 17:15:50 +00:00
Nick Craig-Wood
fb7919928c docs: add FileLu as sponsors and tidy sponsor logos 2025-02-14 17:15:50 +00:00
Anagh Kumar Baranwal
5d670fc54a accounting: fix percentDiff calculation -- fixes #8345
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-14 14:58:12 +00:00
Nick Craig-Wood
b5e72e2fc3 vfs: fix the cache failing to upload symlinks when --links was specified
Before this change, if --vfs-cache-mode writes or above was set and
--links was in use, when a symlink was saved then the VFS failed to
upload it. This meant when the VFS was restarted the link wasn't there
any more.

This was caused by the local backend, which we use to manage the VFS
cache, picking up the global --links flag.

This patch makes sure that the internal instantiations of the local
backend in the VFS cache don't ever use the --links flag or the
--local-links flag even if specified on the command line.

Fixes #8367
2025-02-13 13:30:52 +00:00
Nick Craig-Wood
8997993a30 Add jbagwell-akamai to contributors 2025-02-13 13:30:52 +00:00
Nick Craig-Wood
b721f363e5 Add ll3006 to contributors 2025-02-13 13:30:40 +00:00
Zachary Vorhies
d93dad22fe doc: add note on concurrency of rclone purge 2025-02-13 11:41:37 +00:00
jbagwell-akamai
e27bf8b738 s3: add latest Linode Object Storage endpoints
Added missing Linode Object Storage endpoints AMS, MAA, CGK, LON, LAX, MAD, MEL, MIA, OSA, GRU, SIN
2025-02-13 09:36:22 +00:00
Janne Hellsten
539e96cc1f cmd: fix crash if rclone is invoked without any arguments - Fixes #8378 2025-02-12 21:31:05 +00:00
Anagh Kumar Baranwal
5086aad0b2 build: disable docker builds on PRs & add missing dockerfile changes
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-12 21:29:01 +00:00
ll3006
c1b414e2cf sync: copy dir modtimes even when copyEmptySrcDirs is false - fixes #8317
Before this change, after a sync, only file modtimes were updated when not
using --copy-empty-src-dirs. This ensures modtimes are updated to match the
source folder, regardless of copyEmptySrcDirs. The flag --no-update-dir-modtime
(which previously did nothing) will disable this.
2025-02-12 21:27:15 +00:00
ll3006
2ff8aa1c20 sync: add tests to check dir modtimes are kept when syncing
This adds tests to check dir modtimes are updated from source
when syncing even if they've changed in the destination.
This should work both with and without --copy-empty-src-dirs.
2025-02-12 21:27:15 +00:00
nielash
6d2a72367a fix golangci-lint errors 2025-02-12 21:24:55 +00:00
nielash
9df751d4ec bisync: fix false positive on integration tests
5f70918e2c
introduced a new INFO log when making a directory, which differs depending on
whether the backend supports setting directory metadata. This caused false
positives on the bisync createemptysrcdirs test.

This fixes it by ignoring that log line.
2025-02-12 21:24:55 +00:00
Nick Craig-Wood
e175c863aa s3: split the GCS quirks into -s3-use-x-id and -s3-sign-accept-encoding #8373
Before this we applied both these quirks if provider == "GCS".

Splitting them like this makes them applicable for other providers
such as ActiveScale.
2025-02-12 21:08:16 +00:00
Nick Craig-Wood
64cd8ae0f0 Add Joel K Biju to contributors 2025-02-12 21:08:11 +00:00
Anagh Kumar Baranwal
46b498b86a stats: fix the speed not getting updated after a pause in the processing
This shifts the behavior of the average loop to be a persistent loop
that gets resumed/paused when transfers & checks are started/completed.

Previously, the averageLoop was stopped on completion of
transfers & checks but failed to start again due to the protection of
the sync.Once.

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 12:52:55 +00:00
Joel K Biju
b76cd74087 opendrive: added --opendrive-access flag to handle permissions 2025-02-11 12:43:10 +00:00
nielash
3b49fd24d4 bisync: fix listings missing concurrent modifications - fixes #8359
Before this change, there was a bug affecting listing files when:

- a given bisync run had changes in the 2to1 direction
AND
- the run had NO changes in the 1to2 direction
AND
- at least one of the changed files changed AGAIN during the run
(specifically, after the initial march and before the transfers.)

In this situation, the listings on one side would still retain the prior version
of the changed file, potentially causing conflicts or errors.

This change fixes the issue by making sure that if we're updating the listings
on one side, we must also update the other. (We previously tried to skip it for
efficiency, but this failed to account for the possibility that a changed file
could change again during the run.)
2025-02-11 11:21:02 +00:00
Anagh Kumar Baranwal
c0515a51a5 Added parallel docker builds and caching for go build in the container
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-02-11 10:17:50 +00:00
Jonathan Giannuzzi
dc9c87279b smb: improve connection pooling efficiency
* Lower pacer minSleep to establish new connections faster
* Use Echo requests to check whether connections are working (required an upgrade of go-smb2)
* Only remount shares when needed
* Use context for connection establishment
* When returning a connection to the pool, only check the ones that encountered errors
* Close connections in parallel
2025-02-04 12:35:19 +00:00
Nick Craig-Wood
057fdb3a9d lib/oauthutil: fix redirect URL mismatch errors - fixes #8351
In this commit we introduced support for client credentials flow:

65012beea4 lib/oauthutil: add support for OAuth client credential flow

This involved re-organising the oauth credentials.

Unfortunately a small error was made which used a fixed redirect URL
rather than the one configured for the backend.

This caused the box backend oauth flow not to work properly with
redirect_uri_mismatch errors.

These backends were using the wrong redirect URL and will likely be
affected, though it is possible the backends have workarounds.

- box
- drive
- googlecloudstorage
- googlephotos
- hidrive
- pikpak
- premiumizeme
- sharefile
- yandex
2025-02-03 12:15:54 +00:00
Nick Craig-Wood
3daf62cf3d b2: fix "fatal error: concurrent map writes" - fixes #8355
This was caused by the embryonic metadata support. Since this isn't
actually visible externally, this patch removes it for the time being.
2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ef495fa76 Add Alexander Minbaev to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
722c567504 Add Zachary Vorhies to contributors 2025-02-03 11:33:21 +00:00
Nick Craig-Wood
0ebe1c0f81 Add Jess to contributors 2025-02-03 11:33:21 +00:00
Alexander Minbaev
2dc06b2548 s3: add IBM IAM signer - fixes #7617 2025-02-03 11:29:31 +00:00
Zachary Vorhies
b52aabd8fe serve nfs: update docs to note Windows is not supported - fixes #8352 2025-02-01 12:20:09 +00:00
Jess
6494ac037f cmd/config(update remote): introduce --no-output option
Fixes #8190
2025-01-24 17:22:58 +00:00
jkpe
5c3a1bbf30 s3: add DigitalOcean regions SFO2, LON1, TOR1, BLR1 2025-01-24 10:41:48 +00:00
Nick Craig-Wood
c837664653 sync: fix cpu spinning when empty directory finding with leading slashes
Before this change the logic which makes sure we create all
directories could get confused with directories which started with
slashes and get into an infinite loop consuming 100% of the CPU.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
77429b154e s3: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
39b8f17ebb azureblob: fix handling of objects with // in #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
81ecfb0f64 fstest: add integration tests objects with // on bucket based backends #5858 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
656e789c5b fs/list: tweak directory listing assertions after allowing // names 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
fe19184084 lib/bucket: fix tidying of // in object keys #5858
Before this change, bucket.Join would tidy up object keys by removing
repeated / in them. This means we can't access objects with // in them
which is valid for object keys (but not for file system paths).

This could have consequences for users who are relying on rclone to
fix improper paths for them.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
b4990cd858 lib/bucket: add IsAllSlashes function 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
8e955c6b13 azureblob: remove uncommitted blocks on InvalidBlobOrBlock error
When doing a multipart upload or copy, if an InvalidBlobOrBlock error
is received, it can mean that there are uncommitted blocks from a
previous failed attempt with a different length of ID.

This patch makes rclone attempt to clear the uncommitted blocks and
retry if it receives this error.
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
3a5ddfcd3c azureblob: implement multipart server side copy
This implements multipart server side copy to improve copying from one
azure region to another by orders of magnitude (from 30s for a 100M
file to 10s for a 10G file with --azureblob-upload-concurrency 500).

- Add `--azureblob-copy-cutoff` to control the cutoff from single to multipart copy
- Add `--azureblob-copy-concurrency` to control the copy concurrency
- Add ServerSideAcrossConfigs flag as this now works properly
- Implement multipart copy using put block list API
- Shortcut multipart copy for same storage account
- Override with `--azureblob-use-copy-blob`

Fixes #8249
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
ac3f7a87c3 azureblob: speed up server side copies for small files #8249
This speeds up server side copies for small files which need to check
the copy status, by polling the copy status endpoint at an
exponentially increasing interval.
2025-01-22 11:56:05 +00:00
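A sketch of the ramp up, with the starting interval and cap as assumed values:

    import "time"

    // waitForCopy polls done, starting quickly and backing off, so
    // short copies finish fast and long copies poll cheaply.
    func waitForCopy(done func() bool) {
        wait := 100 * time.Millisecond // assumed starting interval
        for !done() {
            time.Sleep(wait)
            if wait < 10*time.Second { // assumed cap
                wait *= 2
            }
        }
    }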
Nick Craig-Wood
4e9b63e141 azureblob: cleanup uncommitted blocks on upload errors
Before this change, if a multipart upload was aborted, then rclone
would leave uncommitted blocks lying around. Azure has a limit of
100,000 uncommitted blocks per storage account, so when you then try
to upload other stuff into that account, or simply the same file
again, you can run into this limit. This causes errors like the
following:

BlockCountExceedsLimit: The uncommitted block count cannot exceed the
maximum limit of 100,000 blocks.

This change removes the uncommitted blocks if a multipart upload is
aborted or fails.

If there was an existing destination file, it takes care not to
overwrite it by recommitting already committed blocks.

This means that the scheme for allocating block IDs had to change to
make them different for each block and each upload.

Fixes #5583
2025-01-22 11:56:05 +00:00
Nick Craig-Wood
7fd7fe3c82 azureblob: factor readMetaData into readMetaDataAlways returning blob properties 2025-01-22 11:56:05 +00:00
Nick Craig-Wood
9dff45563d Add b-wimmer to contributors 2025-01-22 11:56:05 +00:00
b-wimmer
83cf8fb821 azurefiles: add --azurefiles-use-az and --azurefiles-disable-instance-discovery
Adds additional authentication options from azureblob to azurefiles as well

See rclone#8078
2025-01-22 11:11:18 +00:00
Nick Craig-Wood
32e79a5c5c onedrive: mark German (de) region as deprecated
See: https://learn.microsoft.com/en-us/previous-versions/azure/germany/
2025-01-22 11:00:37 +00:00
Nick Craig-Wood
fc44a8114e Add Trevor Starick to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
657172ef77 Add hiddenmarten to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
71eb4199c3 Add Corentin Barreau to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
ac3c21368d Add Bruno Fernandes to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
db71b2bd5f Add Moises Lima to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
8cfe42d09f Add izouxv to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
e673a28a72 Add Robin Schneider to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
59889ce46b Add Tim White to contributors 2025-01-22 11:00:37 +00:00
Nick Craig-Wood
62e8a01e7e Add Christoph Berger to contributors 2025-01-22 11:00:37 +00:00
Trevor Starick
87eaf37629 azureblob: add support for x-ms-tags header 2025-01-17 19:37:56 +00:00
hiddenmarten
7c7606a6cf rc: disable the metrics server when running rclone rc
Fixes #8248
2025-01-17 17:46:22 +00:00
Corentin Barreau
dbb21165d4 internetarchive: add --internetarchive-metadata="key=value" for setting item metadata
Added the ability to include an item's metadata on uploads via the
Internet Archive backend using the `--internetarchive-metadata="key=value"`
argument. This is hidden from the configurator as it should only
really be used on the command line.

Before this change, metadata had to be manually added after uploads.
With this new feature, users can specify metadata directly during the
upload process.
2025-01-17 16:00:34 +00:00
Dan McArdle
375953cba3 lib/batcher: Deprecate unused option: batch_commit_timeout 2025-01-17 15:56:09 +00:00
Bruno Fernandes
af5385b344 s3: Added new storage class to magalu provider 2025-01-17 15:54:34 +00:00
Moises Lima
347be176af http servers: add --user-from-header to use for authentication
Retrieve the username from a specified HTTP header if no
other authentication methods are configured
(ideal for proxied setups)
2025-01-17 15:53:23 +00:00
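A sketch of the idea as net/http middleware; the header name here is an example, the flag takes whichever header the proxy sets:

    import "net/http"

    // userFromHeader trusts an upstream proxy to supply the username.
    func userFromHeader(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            user := r.Header.Get("X-Forwarded-User") // example header
            if user == "" {
                http.Error(w, "unauthorized", http.StatusUnauthorized)
                return
            }
            // ... attach the user to the request context here ...
            next.ServeHTTP(w, r)
        })
    }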
Pat Patterson
bf5a4774c6 b2: add SkipDestructive handling to backend commands - fixes #8194 2025-01-17 15:47:01 +00:00
izouxv
0275d3edf2 vfs: close the change notify channel on Shutdown 2025-01-17 15:38:09 +00:00
Robin Schneider
be53ae98f8 Docker image: Add label org.opencontainers.image.source for release notes in Renovate dependency updates 2025-01-17 15:29:36 +00:00
Tim White
0d9fe51632 docs: add OneDrive Impersonate instructions - fixes #5610 2025-01-17 14:30:51 +00:00
Christoph Berger
03bd795221 docs: explain the stringArray flag parameter descriptor 2025-01-17 09:50:22 +01:00
Nick Craig-Wood
5a4026ccb4 iclouddrive: add notes on ADP and Missing PCS cookies - fixes #8310 2025-01-16 10:14:52 +00:00
Dimitri Papadopoulos
b1d4de69c2 docs: fix typos found by codespell in docs and code comments 2025-01-16 10:39:01 +01:00
Nick Craig-Wood
5316acd046 fs: fix confusing "didn't find section in config file" error
This change decorates the error with the section name not found which
will hopefully save user confusion.

Fixes #8170
2025-01-15 16:32:59 +00:00
Nick Craig-Wood
2c72842c10 vfs: fix race detected by race detector
This race would only happen when --dir-cache-time was very small.

This was noticed in the VFS tests when --dir-cache-time was 100 mS so
is unlikely to affect normal users.
2025-01-14 20:46:27 +00:00
Nick Craig-Wood
4a81f12c26 Add Jonathan Giannuzzi to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
aabda1cda2 Add Spencer McCullough to contributors 2025-01-14 20:46:27 +00:00
Nick Craig-Wood
572fe20f8e Add Matt Ickstadt to contributors 2025-01-14 20:46:27 +00:00
Jonathan Giannuzzi
2fd4c45b34 smb: add support for kerberos authentication
Fixes #7800
2025-01-14 19:24:31 +00:00
Spencer McCullough
ec5489e23f drive: added backend moveid command 2025-01-14 19:21:13 +00:00
Matt Ickstadt
6898375a2d docs: fix reference to serves3 setting disable_multipart_uploads which was renamed 2025-01-14 18:51:19 +01:00
Matt Ickstadt
d413443a6a docs: fix link to Rclone Serve S3 2025-01-14 18:51:19 +01:00
Nick Craig-Wood
5039747f26 serve s3: fix list objects encoding-type
Before this change rclone would always use encoding-type url even if
the client hadn't asked for it.

This confused some clients.

This fixes the problem by leaving the URL encoding to the gofakes3
library which has also been fixed.

Fixes #7836
2025-01-14 16:08:18 +00:00
Nick Craig-Wood
11ba4ac539 build: update gopkg.in/yaml.v2 to v3 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
b4ed7fb7d7 build: update all dependencies 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
719473565e bisync: fix go vet problems with go1.24 2025-01-14 15:25:10 +00:00
Nick Craig-Wood
bd7278d7e9 build: update to go1.24rc1 and make go1.22 the minimum required version 2025-01-14 12:13:14 +00:00
Nick Craig-Wood
45ba81c726 version: add --deps flag to show dependencies and other build info 2025-01-14 12:08:49 +00:00
Nick Craig-Wood
530658e0cc doc: make man page well formed for whatis - fixes #7430 2025-01-13 18:35:27 +00:00
Nick Craig-Wood
b742705d0c Start v1.70.0-DEV development 2025-01-12 16:31:12 +00:00
Nick Craig-Wood
cd3b08d8cf Version v1.69.0 2025-01-12 15:09:13 +00:00
Nick Craig-Wood
009660a489 test_all: disable docker plugin tests
These are not completing on the integration test server. This needs
investigating, but we need the integration tests to run properly.
2025-01-12 14:02:57 +00:00
albertony
4b6c7c6d84 docs: fix typo 2025-01-12 13:49:47 +01:00
Nick Craig-Wood
a7db375f5d accounting: fix race stopping/starting the stats counter
This was picked up by the race detector in the CI.
2025-01-11 20:25:34 +00:00
Nick Craig-Wood
101dcfe157 docs: add github.com/icholy/gomajor to RELEASE for updating major versions 2025-01-11 20:25:34 +00:00
Francesco Frassinelli
aec87b74d3 ftp: fix ls commands returning empty on "Microsoft FTP Service" servers
The problem was in the upstream library jlaffaye/ftp and this updates it.

Fixes #8224
2025-01-11 20:02:16 +00:00
Nick Craig-Wood
91c8f92ccb s3: add docs on data integrity
See: https://forum.rclone.org/t/help-me-figure-out-how-to-verify-backup-accuracy-and-completeness-on-s3/37632/5
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
965bf19065 webdav: make --webdav-auth-redirect to fix 401 unauthorized on redirect
Before this change, if the server returned a 302 redirect message when
opening a file rclone would do the redirect but drop the
Authorization: header. This is a sensible thing to do for security
reasons but breaks some setups.

This patch adds the --webdav-auth-redirect flag which makes it
preserve the auth just for this kind of request.

See: https://forum.rclone.org/t/webdav-401-unauthorized-when-server-redirects-to-another-domain/39292
2025-01-11 18:39:15 +00:00
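The mechanism can be sketched with net/http's CheckRedirect hook, copying the header from the original request (Go's client otherwise drops Authorization when the host changes):

    import "net/http"

    // newAuthRedirectClient re-applies Authorization on redirects.
    func newAuthRedirectClient() *http.Client {
        return &http.Client{
            CheckRedirect: func(req *http.Request, via []*http.Request) error {
                if auth := via[0].Header.Get("Authorization"); auth != "" {
                    req.Header.Set("Authorization", auth)
                }
                return nil
            },
        }
    }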
Nick Craig-Wood
15ef3b90fa rest: make auth preserving redirects an option 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
f6efaf2a63 box: fix panic when decoding corrupted PEM from JWT file
See: https://forum.rclone.org/t/box-jwt-config-erroring-panic/40685/
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
0e7c495395 size: make output compatible with -P
Before this change the output of `rclone size -P` would get corrupted
by the progress printing.

This is fixed by using operations.SyncPrintf instead of fmt.Printf.

Fixes #7912
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
ff0ded8f11 vfs: add remote name to vfs cache log messages - fixes #7952 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
110bf468a4 dropbox: fix return status when full to be fatal error
This will stop the sync, but won't stop a mount.

Fixes #7334
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
d4e86f4d8b rc: add relative to vfs/queue-set-expiry 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
6091a0362b vfs: fix open files disappearing from directory listings
In this commit

aaadb48d48 vfs: keep virtual directory status accurate and reduce deadlock potential

We reworked the virtual directory detection to use an atomic bool so
that we could run part of the cache forgetting only with a read lock.

Unfortunately this had a bug which meant that directories with virtual
items could be forgotten.

This commit changes the boolean into a count of virtual entries which
should be more accurate.

Fixes #8082
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
33d2747829 docker serve: parse all remaining mount and VFS options
Before this change, this code implemented an ad-hoc parser for a
subset of vfs and mount options.

After the config re-organization it can use the same parsing code as
the rest of rclone which simplifies the code and exposes all the VFS
and mount options.
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
c9e5f45d73 smb: fix panic if stat fails
Before this fix the smb backend could panic if a stat call failed.

This fix makes it return an error instead.

It should have the side effect that we do one less stat call on upload
too.

Fixes #8106
2025-01-11 18:39:15 +00:00
Nick Craig-Wood
2f66537514 googlephotos: fix nil pointer crash on upload - fixes #8233 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
a491312c7d iclouddrive: tweak docs 2025-01-11 18:39:15 +00:00
Nick Craig-Wood
45b7690867 serve dlna: sort the directory entries by directories first then alphabetically by name
Some media boxes don't sort the items returned from the DLNA server,
so sort them here, directories first then alphabetically by name.

See: https://forum.rclone.org/t/serve-dlna-files-order-directories-first/47790
2025-01-11 17:11:40 +00:00
Nick Craig-Wood
30ef1ddb23 serve nfs: fix missing inode numbers which was messing up ls -laR
In 6ba3e24853

    serve nfs: fix incorrect user id and group id exported to NFS #7973

We updated the stat function to output uid and gid. However this set
the inode numbers of everything to -1. This causes a problem with
doing `ls -laR` giving "not listing already-listed directory" as it
uses inode numbers to see if it has listed a directory or not.

This patch reads the inode number from the vfs.Node and sets it in the
Stat output.
2025-01-09 18:55:18 +00:00
Nick Craig-Wood
424d8e3123 serve nfs: implement --nfs-cache-type symlink
`--nfs-cache-type symlink` is similar to `--nfs-cache-type disk` in
that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance.
2025-01-09 18:55:18 +00:00
Nick Craig-Wood
04dfa6d923 azureblob,oracleobjectstorage,s3: quit multipart uploads if the context is cancelled
Before this change the multipart uploads would continue retrying even
if the context was cancelled.
2025-01-09 18:55:18 +00:00
Oleg Kunitsyn
fdff1a54ee http: fix incorrect URLs with initial slash
* http: trim initial slash when building the URL
* Add a test for an http object with a leading slash

Fixes #8261
2025-01-09 17:40:00 +00:00
Eng Zer Jun
42240f4b5d build: update github.com/shirou/gopsutil to v4
v4 is the latest version with bug fixes and enhancements. While there
are 4 breaking changes in v4, they do not affect us because we do not
use the impacted functions.

Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2025-01-09 17:32:09 +00:00
albertony
7692ef289f Replace Windows-specific NewLazyDLL with NewLazySystemDLL
This will only search the Windows System directory for the DLL if name is a
base name (like "advapi32.dll"), which prevents DLL preloading attacks.

To get access to NewLazySystemDLL imports of syscall needs to be swapped with
golang.org/x/sys/windows.
2025-01-08 17:35:00 +01:00
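The pattern, sketched for one DLL (the proc name is just an example):

    //go:build windows

    import "golang.org/x/sys/windows"

    // NewLazySystemDLL resolves a base name like this only from the
    // Windows System directory, preventing DLL preloading attacks.
    var advapi32 = windows.NewLazySystemDLL("advapi32.dll")

    // Example proc lookup; any export works the same way.
    var procRegOpenKeyExW = advapi32.NewProc("RegOpenKeyExW")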
Nick Craig-Wood
bfb7b88371 lib/oauthutil: don't require token to exist for client credentials flow
Before this change when setting up client credentials flow manually,
rclone would fail with this error message on first run despite the
fact that no existing token is needed.

    empty token found - please run "rclone config reconnect remote:"

This fixes the problem by ignoring token loading problems for client
credentials flow.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
5f70918e2c fs/operations: make log messages consistent for mkdir/rmdir at INFO level
Before this change, creating a new directory would write a DEBUG log
but removing it would write an INFO log.

This change makes both write an INFO log for consistency.
2025-01-08 12:38:24 +00:00
Nick Craig-Wood
abf11271fe Add Francesco Frassinelli to contributors 2025-01-08 12:38:24 +00:00
Francesco Frassinelli
a36e89bb61 smb: Add support for Kerberos authentication.
This updates go-smb2 to a version which supports kerberos.

Fixes #7600
2025-01-08 11:25:23 +00:00
Francesco Frassinelli
35614acf59 docs: smb: link to CloudSoda/go-smb2 fork 2025-01-08 11:18:55 +00:00
yuval-cloudinary
7e4b8e33f5 cloudinary: add cloudinary backend - fixes #7989 2025-01-06 10:54:03 +00:00
yuval-cloudinary
5151a663f0 operations: fix eventual consistency in TestParseSumFile test 2025-01-06 10:54:03 +00:00
Nick Craig-Wood
b85a1b684b Add TAKEI Yuya to contributors 2025-01-06 10:34:03 +00:00
Nick Craig-Wood
8fa8f146fa docs: Remove Backblaze as a Platinum sponsor 2025-01-06 10:33:57 +00:00
Nick Craig-Wood
6cad0a013e docs: add RcloneView as silver sponsor 2025-01-06 10:33:57 +00:00
TAKEI Yuya
aa743cbc60 serve docker: fix incorrect GID assignment 2025-01-05 21:42:58 +01:00
Nick Craig-Wood
a389a2979b serve s3: fix Last-Modified timestamp
This had two problems

1. It was using a single digit for day of month
2. It is supposed to be in UTC

Fixes #8277
2024-12-30 14:16:53 +00:00
Nick Craig-Wood
d6f0d1d349 Add ToM to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
4ed6960d95 Add Henry Lee to contributors 2024-12-30 14:16:53 +00:00
Nick Craig-Wood
731af0c0ab Add Louis Laureys to contributors 2024-12-30 14:16:53 +00:00
ToM
5499fd3b59 docs: filtering: mention feeding --files-from from standard input 2024-12-28 16:44:44 +01:00
ToM
e0e697ca11 docs: filtering: fix --include-from copypaste error 2024-12-27 11:36:41 +00:00
Henry Lee
05f000b076 s3: rename glacier storage class to flexible retrieval 2024-12-27 11:32:43 +00:00
Louis Laureys
a34c839514 b2: add daysFromStartingToCancelingUnfinishedLargeFiles to backend lifecycle command
See: https://www.backblaze.com/blog/effortlessly-managing-unfinished-large-file-uploads-with-b2-cloud-storage/
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
2024-12-22 10:16:31 +00:00
Nick Craig-Wood
6a217c7dc1 build: update golang.org/x/net to v0.33.0 to fix CVE-2024-45338
An attacker can craft an input to the Parse functions that would be
processed non-linearly with respect to its length, resulting in
extremely slow parsing. This could cause a denial of service.

This only affects users running rclone servers exposed to untrusted
networks.

See: https://pkg.go.dev/vuln/GO-2024-3333
See: https://github.com/advisories/GHSA-w32m-9786-jp63
2024-12-21 18:43:26 +00:00
Nick Craig-Wood
e1748a3183 azurefiles: fix missing x-ms-file-request-intent header
According to the SDK docs

> FileRequestIntent is required when using TokenCredential for
> authentication. Acceptable value is backup.

This sets the correct option in the SDK. It does it for all types of
authentication but the SDK seems clever enough not to supply it when
it isn't needed.

This fixes the error

> MissingRequiredHeader An HTTP header that's mandatory for this
> request is not specified. x-ms-file-request-intent

Fixes #8241
2024-12-19 17:01:34 +00:00
Nick Craig-Wood
bc08e05a00 Add Thomas ten Cate to contributors 2024-12-19 17:01:34 +00:00
Thomas ten Cate
9218b69afe docs: Document --url and --unix-socket on the rc page
This page started talking about what commands you can send, without
explaining how to actually send them.

Fixes #8252.
2024-12-19 15:50:43 +00:00
Nick Craig-Wood
0ce2e12d9f docs: link to the outstanding vfs symlinks issue 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
7224b76801 Add Yxxx to contributors 2024-12-16 11:01:03 +00:00
Nick Craig-Wood
d2398ccb59 Add hayden.pan to contributors 2024-12-16 11:01:03 +00:00
Yxxx
0988fd9e9f docs: update pcloud doc to avoid puzzling token error when use remote rclone authorize 2024-12-16 10:29:24 +00:00
wiserain
51cde23e82 pikpak: add option to use original file links - fixes #8246 2024-12-16 01:17:58 +09:00
hayden.pan
caac95ff54 rc/job: use mutex for adding listeners thread safety
Fixes an issue where, in extreme cases when the job is executing finish(), a listener added by calling OnFinish() would never be executed.

This change should not cause compatibility issues, as consumers should not make assumptions about whether listeners will be run in a new goroutine.
2024-12-15 13:05:29 +00:00
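The shape of such a fix, sketched with a single mutex guarding both the finished flag and the listener list (rclone's rc job type has more machinery than this):

    import "sync"

    // Job sketches an rc job with thread safe finish listeners.
    type Job struct {
        mu        sync.Mutex
        finished  bool
        listeners []func()
    }

    // OnFinish runs fn when the job finishes; if it already has,
    // fn runs immediately instead of being lost.
    func (j *Job) OnFinish(fn func()) {
        j.mu.Lock()
        defer j.mu.Unlock()
        if j.finished {
            go fn()
            return
        }
        j.listeners = append(j.listeners, fn)
    }

    func (j *Job) finish() {
        j.mu.Lock()
        j.finished = true
        ls := j.listeners
        j.listeners = nil
        j.mu.Unlock()
        for _, fn := range ls {
            go fn()
        }
    }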
albertony
19f4580aca docs: mention in serve tls options when value is path to file - fixes #8232 2024-12-14 11:48:38 +00:00
Nick Craig-Wood
27f448d14d build: update all dependencies 2024-12-13 16:07:45 +00:00
Nick Craig-Wood
500698c5be accounting: fix debug printing when debug wasn't set 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
91af6da068 Add Filipe Azevedo to contributors 2024-12-13 15:34:44 +00:00
Nick Craig-Wood
b8835fe7b4 fs: make --links flag global and add new --local-links and --vfs-links flag
Before this change the --links flag when using the VFS override the
--links flag for the local backend which meant the local backend
needed explicit config to use links.

This fixes the problem by making the --links flag global and adding a
new --local-links flag and --vfs-links flags to control the features
individually if required.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
48d9e88e8f vfs: add docs for -l/--links flag 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
4e7ee9310e nfsmount,serve nfs: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
d629102fa6 mount2: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
db1ed69693 mount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
06657c49a0 cmount: introduce symlink support #2975 2024-12-13 12:43:20 +00:00
Filipe Azevedo
f1d2f2b2c8 vfstest: make VFS test suite support symlinks 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
a5abe4b8b3 vfs: add symlink support to VFS
This is somewhat limited in that it only resolves symlinks when files
are opened. This will work fine for the intended use in rclone mount,
but is inadequate for the other servers probably.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
c0339327be vfs: add ELOOP error 2024-12-13 12:43:20 +00:00
Filipe Azevedo
353bc3130e vfs: Add link permissions 2024-12-13 12:43:20 +00:00
Filipe Azevedo
126f00882b vfs: Add VFS --links command line switch
This will be used to enable links support for the various mount engines
in a follow up commit.
2024-12-13 12:43:20 +00:00
Nick Craig-Wood
44c3f5e1e8 vfs: add vfs.WriteFile to match os.WriteFile 2024-12-13 12:43:20 +00:00
Filipe Azevedo
c47c94e485 fs: Move link suffix to fs 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
1f328fbcfd cmount: fix problems noticed by linter 2024-12-13 12:43:20 +00:00
Filipe Azevedo
7f1240516e mount2: Fix missing . and .. entries 2024-12-13 12:43:20 +00:00
Nick Craig-Wood
f9946b37f9 sftp: fix nil check when using auth proxy
An incorrect nil check was spotted while reviewing the code for
CVE-2024-45337.

The nil check failing has never happened as far as we know. The
consequences would be a nil pointer exception.
2024-12-13 12:36:15 +00:00
Nick Craig-Wood
96fe25cf0a Add Martin Hassack to contributors 2024-12-13 12:36:15 +00:00
dependabot[bot]
a176d4cbda serve sftp: resolve CVE-2024-45337
This commit resolves CVE-2024-45337 which is a potential auth
bypass for `rclone serve sftp`.

https://nvd.nist.gov/vuln/detail/CVE-2024-45337

However after review of the code, rclone is **not** affected as it
handles the authentication correctly. Rclone already uses the
Extensions field of the Permissions return value from the various
authentication callbacks to record data associated with the
authentication attempt as suggested in the vulnerability report.

This commit includes the recommended update to golang.org/x/crypto
anyway so that this is visible in the changelog.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.29.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.29.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-13 12:28:08 +00:00
Tony Metzidis
e704e33045 googlecloudstorage: typo fix in docs 2024-12-13 11:49:21 +00:00
Martin Hassack
2f3e90f671 onedrive: add support for OAuth client credential flow - fixes #6197
This adds support for the client credential flow oauth method which
requires some special handling in onedrive:

- Special scopes are required
- The tenant is required
- The tenant needs to be used in the oauth URLs

This also:

- refactors the oauth config creation so it isn't duplicated
- defaults the drive_id to the previous one in the config
- updates the documentation
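
As a sketch, a config along these lines should exercise the new flow
(the option names `tenant` and `client_credentials` are assumptions
based on this change):

    rclone config create od onedrive \
        client_id=XXX client_secret=XXX \
        tenant=YOUR_TENANT_ID client_credentials=true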

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Martin Hassack
65012beea4 lib/oauthutil: add support for OAuth client credential flow
This commit reorganises the oauth code to use our own config struct
which has all the info for the normal oauth method and also the client
credentials flow method.

It updates all backends which use lib/oauthutil to use the new config
struct which shouldn't change any functionality.

It also adds code for dealing with the client credential flow config
which doesn't require the use of a browser and doesn't have or need a
refresh token.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
704217b698 lib/oauthutil: return error messages from the oauth process better 2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6ade1055d5 bin/test_backend_sizes.py fix compile flags and s3 reporting
This now compiles rclone with CGO_ENABLED=0 which is closer to the
release compile.

It also removes pikpak if testing s3 as the two depend on each
other.
2024-12-13 11:34:11 +00:00
Nick Craig-Wood
6a983d601c test makefiles: add --flat flag for making directories with many entries 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
eaafae95fa Add divinity76 to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
5ca1436c24 Add Ilias Ozgur Can Leonard to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
c46e93cc42 Add remygrandin to contributors 2024-12-11 18:21:42 +00:00
Nick Craig-Wood
66943d3d79 Add Michael R. Davis to contributors 2024-12-11 18:21:42 +00:00
divinity76
a78bc093de cmd/mountlib: better snap mount error message
Mounting will always fail when rclone is installed from the snap package manager.
But the error message generated when trying to mount from a snap install was not
very good. Improve the error message.

Fixes #8208
2024-12-06 08:14:09 +00:00
Ilias Ozgur Can Leonard
2446c4928d vfs: with --vfs-used-is-size value is calculated and then thrown away - fixes #8220 2024-12-04 22:57:41 +00:00
albertony
e11e679e90 serve sftp: fix loading of authorized keys file with comment on last line - fixes #8227 2024-12-04 13:42:10 +01:00
Manoj Ghosh
ba8e538173 oracleobjectstorage: make specifying compartmentid optional 2024-12-03 17:54:00 +00:00
Georg Welzel
40111ba5e1 pcloud: fix failing large file uploads - fixes #8147
This changes the OpenWriterAt implementation to make client/fd
handling atomic.

This PR stabilizes the situation for bigger files and multi-threaded
uploads. The root cause boils down to the old "fun" property of
pcloud's fileops API: sessions are bound to TCP connections. This
forces us to use an http client with only a single connection
underneath.

With large files, we reuse the same connection for each chunk. If that
connection is interrupted (e.g. because we are talking over the
internet), all chunks will fail. The probability of that happening
increases with larger files.

As the point of the whole multi-threaded feature was to speed up large
files in the first place, this change pulls the client creation (and
hence connection handling) into each chunk. This should stabilize the
situation, as each chunk (and retry) gets its own connection.
2024-12-03 17:52:44 +00:00
remygrandin
ab58ae5b03 docs: add docker volume plugin troubleshooting steps
This expands the docker volume plugin troubleshooting steps to include a state cleanup command and a reminder that uninstalling/reinstalling does not clean up those cache files.


Co-authored-by: albertony <12441419+albertony@users.noreply.github.com>
2024-11-26 20:56:10 +01:00
Michael R. Davis
ca8860177e docs: fix missing state parameter in /auth link in instructions 2024-11-22 22:40:07 +00:00
Nick Craig-Wood
d65d1a44b3 build: fix build failure on ubuntu 2024-11-21 12:05:49 +00:00
Sam Harrison
c1763a3f95 docs: upgrade fontawesome to v6
Also update the Filescom icon.
2024-11-21 11:06:38 +00:00
Nick Craig-Wood
964fcd5f59 s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3
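
For reference, a multitenant bucket is addressed like this (remote
name and tenant/bucket are hypothetical):

    rclone copy bigfile ceph:tenant:bucket/path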

However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the one returned from
`CreateMultipart` which fixes the problem.

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-21 11:04:49 +00:00
Nick Craig-Wood
c6281a1217 Add David Seifert to contributors 2024-11-21 11:04:49 +00:00
Nick Craig-Wood
ff3f8f0b33 Add vintagefuture to contributors 2024-11-21 11:04:49 +00:00
Anthony Metzidis
2d844a26c3 use better docs 2024-11-20 18:05:56 +00:00
Anthony Metzidis
1b68492c85 googlecloudstorage: update docs on service account access tokens 2024-11-20 18:05:56 +00:00
David Seifert
acd5a893e2 test_all: POSIX head/tail invocations
* head -number is not allowed by POSIX.1-2024:
  https://pubs.opengroup.org/onlinepubs/9799919799/utilities/head.html
  https://devmanual.gentoo.org/tools-reference/head-and-tail/index.html
2024-11-20 18:02:07 +00:00
vintagefuture
0214a59a8c icloud: Added note about app specific password not working 2024-11-20 17:43:42 +00:00
Nick Craig-Wood
6079cab090 s3: fix download of compressed files from Cloudflare R2 - fixes #8137
Before this change attempting to download a file with
`Content-Encoding: gzip` from Cloudflare R2 gave this error

    corrupted on transfer: sizes differ src 0 vs dst 999

This was caused by the SDK v2 overriding our attempt to set
`Accept-Encoding: gzip`.

This fixes the problem by disabling the middleware that does that
overriding.
2024-11-20 12:08:23 +00:00
Nick Craig-Wood
bf57087a6e s3: fix testing tiers which don't exist except on AWS 2024-11-20 12:08:23 +00:00
Nick Craig-Wood
d8bc542ffc Changelog updates from Version v1.68.2 2024-11-15 14:51:27 +00:00
Nick Craig-Wood
01ccf204f4 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.
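
A minimal Go sketch of the link-safe variants (illustrative only,
using golang.org/x/sys/unix, not the actual rclone code):

    import (
        "os"

        "golang.org/x/sys/unix"
    )

    // chownLink sets ownership on the symlink itself rather than its target.
    func chownLink(path string, uid, gid int) error {
        return os.Lchown(path, uid, gid)
    }

    // chmodLink sets permissions without following the symlink. On Linux
    // this fails with EOPNOTSUPP as symlink permissions cannot be set,
    // which is why rclone only emits a debug message there.
    func chmodLink(path string, mode os.FileMode) error {
        return unix.Fchmodat(unix.AT_FDCWD, path, uint32(mode), unix.AT_SYMLINK_NOFOLLOW)
    }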

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:20:18 +00:00
Nick Craig-Wood
84b64dcdf9 Revert "Merge commit from fork"
This reverts commit 1e2b354456.
2024-11-14 16:20:06 +00:00
Nick Craig-Wood
8cc1020a58 Add Dimitrios Slamaris to contributors 2024-11-14 16:15:49 +00:00
Nick Craig-Wood
1e2b354456 Merge commit from fork
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows which wasn't
implemented for atime, mtime and was incorrectly setting the target of
the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:13:57 +00:00
Nick Craig-Wood
f639cd9c78 onedrive: fix integration tests after precision change
We changed the precision of the onedrive personal backend in
c053429b9c from 1mS to 1S.

However the tests did not get updated. This changes the time tests to
use `fstest.AssertTimeEqualWithPrecision` which compares with
precision so hopefully won't break again.
2024-11-12 13:09:15 +00:00
Nick Craig-Wood
e50f995d87 operations: fix TestRemoveExisting on crypt backends by shortening the file name 2024-11-12 13:09:15 +00:00
Dimitrios Slamaris
abe884e744 bisync: fix output capture restoring the wrong output for logrus
Before this change, if rclone is used as a library and logrus is used
after a call to rc `sync/bisync`, logging does not work anymore and
leads to writing to a closed pipe.

This change restores the output correctly.

Fixes #8158
2024-11-12 11:42:54 +00:00
Nick Craig-Wood
173b2ac956 serve sftp: update github.com/pkg/sftp to v1.13.7 and fix deadlock in tests
Before this change, upgrading to v1.13.7 caused a deadlock in the tests.

This was caused by additional locking in the sftp package exposing a
bad choice by the rclone code.

See https://github.com/pkg/sftp/issues/603 and thanks to @puellanivis
for the fix suggestion.
2024-11-11 18:15:00 +00:00
Nick Craig-Wood
1317fdb9b8 build: fix comments after golangci-lint upgrade 2024-11-11 18:03:36 +00:00
Nick Craig-Wood
1072173d58 build: update all dependencies 2024-11-11 18:03:34 +00:00
dependabot[bot]
df19c6f7bf build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-11 18:01:03 +00:00
Nick Craig-Wood
ee72554fb9 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-08 19:34:09 +00:00
Nick Craig-Wood
abb4f77568 yandex: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ca2b27422f sugarsync: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
740f6b318c putio: fix server side copying over existing object
This was causing a conflict error. This was fixed by checking for the
existing object and deleting it after the file was server side copied.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
f307d929a8 onedrive: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
ceea6753ee dropbox: fix server side copying over existing object
This was causing a conflict error. This was fixed by renaming the
existing file first and if the copy was successful deleting it, or
renaming it back.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
2bafbf3c04 operations: add RemoveExisting to safely remove an existing file
This renames the file first and if the operation is successful then it
deletes the renamed file.
2024-11-08 18:17:55 +00:00
Nick Craig-Wood
3e14ba54b8 gofile: fix server side copying over existing object
This was creating a duplicate.
2024-11-08 14:01:51 +00:00
Nick Craig-Wood
2f7a30cf61 test_all: try to fix mailru rate limits in integration tests
The Mailru backend integration tests have been failing due to new rate
limits on the backend.

This patch

- Removes Mailru from the chunker tests
- Adds the flag so we only run one Mailru test at once
2024-11-08 10:02:44 +00:00
Nick Craig-Wood
0ad925278d Add shenpengfeng to contributors 2024-11-08 10:02:44 +00:00
Nick Craig-Wood
e3053350f3 Add Dimitar Ivanov to contributors 2024-11-08 10:02:44 +00:00
shenpengfeng
b9207e5727 docs: fix function name in comment 2024-10-29 09:26:37 +01:00
Dimitar Ivanov
40159e7a16 sftp: allow inline ssh public certificate for sftp
Currently rclone allows us to specify the path to a public ssh
certificate file.

That works great for cases where we can specify key path, like local
envs.

If users are using rclone with [volsync](https://github.com/backube/volsync/tree/main/docs/usage/rclone)
there currently is a limitation that users can specify only the rclone config file.
With this change users can pass the public certificate in the same fashion
as they can with `key_file`.
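
A hypothetical example (the inline option name `pubkey` is an
assumption here, by analogy with the existing `key_file`/`key_pem`):

    rclone lsd mysftp: --sftp-pubkey "$(cat ~/.ssh/id_rsa-cert.pub)"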
2024-10-25 10:40:57 +01:00
Nick Craig-Wood
16baa24964 serve s3: fix excess locking which was making serve s3 single threaded
The fix for this was in the upstream library to narrow the locking
window.

See: https://forum.rclone.org/t/can-rclone-serve-s3-handle-more-than-one-client/48329/
2024-10-25 10:36:50 +01:00
Nick Craig-Wood
72f06bcc4b lib/oauthutil: allow the browser opening function to be overridden 2024-10-24 17:56:50 +01:00
Nick Craig-Wood
c527dd8c9c Add Moises Lima to contributors 2024-10-24 17:56:50 +01:00
Moises Lima
29fd894189 lib/http: disable automatic authentication skipping for unix sockets
Disabling the authentication for unix sockets makes it impossible to
use `rclone serve` behind a proxy that communicates with rclone
via a unix socket.

Re-enabling the authentication should not have any effect on most
users of unix sockets as they do not set authentication up with a unix
socket anyway.
2024-10-24 12:39:28 +01:00
Nick Craig-Wood
175aa07cdd onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-10-23 13:00:32 +01:00
Kaloyan Raev
75257fc9cd s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
2024-10-22 21:15:04 +01:00
Nick Craig-Wood
53ff3b3b32 s3: add Selectel as a provider 2024-10-22 19:54:33 +01:00
Nick Craig-Wood
8b4b59412d fs: fix Don't know how to set key "chunkSize" on upload errors in tests
Before this, testing any backend which implemented the OpenChunkWriter
gave this error:

    ERROR : writer-at-subdir/writer-at-file: Don't know how to set key "chunkSize" on upload

This was due to the ChunkOption incorrectly rendering into HTTP
headers which weren't understood by the backend.
2024-10-22 19:54:33 +01:00
Nick Craig-Wood
264c9fb2c0 drive: implement rclone backend rescue to rescue orphaned files
Fixes #4166
2024-10-21 10:15:01 +01:00
Nick Craig-Wood
1b10cd3732 Add tgfisher to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
d97492cbc3 Add Diego Monti to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
82a510e793 Add Randy Bush to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
9f2c590e13 Add Alexandre Hamez to contributors 2024-10-21 10:15:01 +01:00
Nick Craig-Wood
11a90917ec Add Simon Bos to contributors 2024-10-21 10:15:01 +01:00
tgfisher
8ca7b2af07 docs: mention that inline comments are not supported in a filter-file 2024-10-21 09:10:09 +02:00
Diego Monti
a19ddffe92 s3: add Wasabi eu-south-1 region
Ref. https://docs.wasabi.com/docs/what-are-the-service-urls-for-wasabi-s-different-storage-regions
2024-10-14 14:05:33 +02:00
Randy Bush
3e2c0f8c04 docs: fix forward refs in step 9 of using your own client id 2024-10-14 13:25:25 +02:00
Alexandre Hamez
589458d1fe docs: fix Scaleway Glacier website URL 2024-10-12 12:13:47 +01:00
Simon Bos
69897b97fb dlna: fix loggingResponseWriter disregarding log level 2024-10-08 15:27:05 +01:00
albertony
4db09331c6 build: remove required property on boolean inputs
Since boolean inputs are now properly treated as booleans, and GitHub Web GUI shows
them as checkboxes, setting required does nothing.
2024-10-03 16:31:36 +01:00
albertony
fcd3b88332 build: use inputs context in github workflow
Currently input options are retrieved from the event payload, via github.event.inputs,
and that still works, but boolean values are represented as strings there while in the
dedicated inputs context the boolean types are preserved, which means conditional
expressions can be simplified.
2024-10-03 16:31:36 +01:00
Nick Craig-Wood
1ca3f12672 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
e7a0fd0f70 docs: update overview to show pcloud can set modtime
See 258092f9c6 and #7896
2024-10-03 14:31:56 +01:00
Nick Craig-Wood
c23c59544d Add André Tran to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
9dec3de990 Add Matthias Gatto to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
5caa695c79 Add lostb1t to contributors 2024-10-03 14:31:56 +01:00
Nick Craig-Wood
8400809900 Add Noam Ross to contributors 2024-10-03 14:31:11 +01:00
Nick Craig-Wood
e49516d5f4 Add Benjamin Legrand to contributors 2024-10-03 14:31:11 +01:00
Matthias Gatto
9614fc60f2 s3: add Outscale provider
Signed-off-by: matthias.gatto <matthias.gatto@outscale.com>
Co-authored-by: André Tran <andre.tran@outscale.com>
2024-10-02 10:26:41 +01:00
lostb1t
51db76fd47 Add ICloud Drive backend 2024-10-02 10:19:11 +01:00
Noam Ross
17e7ccfad5 drive: add support for markdown format 2024-09-30 17:22:32 +01:00
Benjamin Legrand
8a6fc8535d accounting: fix global error accounting
fs.CountError is called when an error is encountered. The method was
calling GlobalStats().Error(err) which incremented the error at the
global stats level. This led to calls to core/stats with a group= filter
returning an error count of 0 even if errors actually occurred.

This change requires the context to be provided when calling
fs.CountError. Doing so, we can retrieve the correct StatsInfo to
increment the errors from.
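
The call pattern change, as a sketch:

    // before: always incremented the global stats
    err = fs.CountError(err)

    // after: the context identifies the correct stats group
    err = fs.CountError(ctx, err)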

Fixes #5865
2024-09-30 17:20:42 +01:00
Nick Craig-Wood
c053429b9c onedrive: fix time precision for OneDrive personal
This reduces the precision advertised by the backend from 1ms to 1s
for OneDrive personal accounts.

The precision was set to 1ms as part of:

1473de3f04 onedrive: add metadata support

which was released in v1.66.0.

However it appears not all OneDrive personal accounts support 1ms time
precision and that Microsoft may be migrating accounts away from this
to backends which only support 1s precision.

Fixes #8101
2024-09-30 11:34:06 +01:00
Nick Craig-Wood
18989fbf85 Add RcloneView as a sponsor 2024-09-30 11:34:06 +01:00
Nick Craig-Wood
a7451c6a77 Add Leandro Piccilli to contributors 2024-09-30 11:32:13 +01:00
nielash
5147d1101c cache: skip bisync tests
per ncw: "I don't care about cache as it is deprecated - we should probably stop
it running bisync tests"
https://github.com/rclone/rclone/pull/7795#issuecomment-2163295857
2024-09-29 18:37:52 -04:00
nielash
11ad2a1316 bisync: allow blank hashes on tests
Some backends support hashes but allow them to be blank. In other words, we
can't expect them to be reliably non-blank, and we shouldn't treat a blank hash
as an error.

Before this change, the bisync integration tests errored if a backend said it
supported hashes but in fact sometimes lacked them. After this change, such
errors are ignored.
2024-09-29 18:37:52 -04:00
nielash
3c7ad8d961 box: fix server-side copying a file over existing dst - fixes #3511
Before this change, server-side copying a src file over a dst that already exists
gave `Error "item_name_in_use" (409): Item with the same name already exists`.

This change fixes the error by copying to a temporary name first, then moving it
to the real name.

There might be a more graceful way to overwrite a file during a copy, but I
didn't see one in the API docs.
https://developer.box.com/reference/post-files-id-copy/
In the meantime, this workaround is better than a critical error.

This should (hopefully) fix 8 bisync integration tests.
2024-09-29 18:37:52 -04:00
nielash
a3e8fb584a sync: add tests for copying/moving a file over itself
This should catch issues like this, for example:
https://github.com/rclone/rclone/issues/3511#issuecomment-528332895
2024-09-29 18:37:52 -04:00
nielash
9b4b3033da fs/cache: fix parent not getting pinned when remote is a file
Before this change, when cache.GetFn was called on a file rather than a
directory, two cache entries would be added (the file + its parent) but only one
of them would get pinned if the caller then called Pin(f). This left the other
one exposed to expiration if the ci.FsCacheExpireDuration was reached. This was
problematic because both entries point to the same Fs, and if one entry expires
while the other is pinned, the Shutdown method gets erroneously called on an Fs
that is still in use.

An example of the problem showed up in the Hasher backend, which uses the
Shutdown method to stop the bolt db used to store hashes. If a command was run
on a Hasher file (ex. `rclone md5sum --download hasher:somelargefile.zip`) and
hashing the file took longer than the --fs-cache-expire-duration (5m by default), the
bolt db was stopped before the hashing operation completed, resulting in an
error.

This change fixes the issue by ensuring that:
1. only one entry is added to the cache (the file's parent, not the file).
2. future lookups correctly find the entry regardless of whether they are called
	with the parent name or one of its children.
3. fs.ErrorIsFile is returned when (and only when) fsString points to a file
	(preserving the fix from 8d5bc7f28b).

Note that f.Root() should always point to the parent dir as of c69eb84573
2024-09-28 13:49:56 +01:00
Leandro Piccilli
94997d25d2 gcs: add access token auth with --gcs-access-token 2024-09-27 17:37:07 +01:00
Nick Craig-Wood
19458e8459 accounting: write the current bwlimit to the log on SIGUSR2 2024-09-26 18:01:18 +01:00
Nick Craig-Wood
7d32da441e accounting: fix wrong message on SIGUSR2 to enable/disable bwlimit
This was caused by the message code only looking at one of the
bandwidth filters, not all of them.
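
For reference, the toggle is triggered like this:

    kill -SIGUSR2 $(pidof rclone)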

Fixes #8104
2024-09-26 17:53:58 +01:00
Nick Craig-Wood
22e13eea47 gphotos: implement --gphotos-proxy to allow download of full resolution media
This works in conjunction with the gphotosdl tool

https://github.com/rclone/gphotosdl
2024-09-26 12:57:28 +01:00
Nick Craig-Wood
de9b593f02 googlephotos: remove noisy debugging statements 2024-09-26 12:52:53 +01:00
Nick Craig-Wood
b2b4f8196c docs: add note to CONTRIBUTING that the overview needs editing in 2 places 2024-09-25 17:56:33 +01:00
Nick Craig-Wood
84cebb6872 test_all: add ignoretests parameter for skipping certain tests
Use like this for a `backend:` in `config.yaml`

   ignoretests:
     - "fs/operations"
     - "fs/sync"
2024-09-25 16:03:43 +01:00
Nick Craig-Wood
cb9f4f8461 build: replace "golang.org/x/exp/slices" with "slices" now go1.21 is required 2024-09-25 16:03:43 +01:00
Nick Craig-Wood
498d9cfa85 Changelog updates from Version v1.68.1 2024-09-24 17:26:49 +01:00
Dan McArdle
109e4ed0ed Makefile: Fail when doc recipes create dir named '$HOME'
This commit makes the `commanddocs` and `backenddocs` fail if they
accidentally create a directory named '$HOME'. This is basically a
regression test for issue #8092.

It also makes those recipes rmdir the '$HOME/.config/rclone/'
directories. This will only delete empty directories, so nothing of
value should ever be deleted.
2024-09-24 10:38:25 +01:00
Dan McArdle
353270263a Makefile: Prevent doc recipe from creating dir named '$HOME'
Prior to this commit, running `make doc` had the unwanted side effect of
creating a directory literally named `$HOME` in the source tree.

Fixed #8092
2024-09-24 10:38:25 +01:00
wiserain
f8d782c02d pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when 
attempting to unwrap object info from `fs.OverrideRemote`. 

This commit introduces a new function that can correctly unwrap 
object info from both regular objects and `fs.OverrideRemote` types, 
ensuring uploads with accurate cid/gcid calculations in all scenarios.
2024-09-21 10:22:31 +09:00
albertony
3dec664a19 bisync: change exit code from 2 to 7 for critically aborted run 2024-09-20 18:51:08 +02:00
albertony
a849fd59f0 cmd: change exit code from 1 to 2 for syntax and usage errors 2024-09-20 18:51:08 +02:00
nielash
462a1cf491 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-09-20 17:43:52 +01:00
Nick Craig-Wood
0b7b3cacdc azureblob: add --azureblob-use-az to force the use of the Azure CLI for auth
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.

Fixes #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
976103d50b azureblob: add --azureblob-disable-instance-discovery
If set this skips requesting Microsoft Entra instance metadata

See #8078
2024-09-20 16:16:09 +01:00
Nick Craig-Wood
192524c004 s3: add initial --s3-directory-bucket to support AWS Directory Buckets
This will ensure no Content-Md5 headers are sent and ensure ETags are not
interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects
whether single or multipart uploaded.

This also sets "no_check_bucket = true".

This is enough to make the integration tests pass, but there are some
limitations as noted in the docs.
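
A hypothetical example (directory bucket names carry the
`--azid--x-s3` suffix required by AWS):

    rclone lsf --s3-directory-bucket s3:mybucket--usw2-az1--x-s3/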

See: https://forum.rclone.org/t/support-s3-directory-bucket/47653/
2024-09-19 12:01:24 +01:00
Nick Craig-Wood
28667f58bf Add Lawrence Murray to contributors 2024-09-19 12:01:24 +01:00
Lawrence Murray
c669f4e218 backend/protondrive: improve performance of Proton Drive backend
This change removes redundant calls to the Proton Drive Bridge when
creating Objects. Specifically, the function List() would get a
directory listing, get a link for each file, construct a remote path
from that link, then get a link for that remote path again by calling
getObjectLink() unnecessarily. This change removes that unnecessary
call, and tidies up a couple of functions around this with unused
parameters.

Related to performance issues reported in #7322 and #7413
2024-09-18 18:15:24 +01:00
Nick Craig-Wood
1a9e6a527d ftp: implement --ftp-no-check-upload to allow upload to write only dirs
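For example, uploading into a write-only directory (remote name
hypothetical):

    rclone copy report.pdf ftp:incoming/ --ftp-no-check-upload
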
Fixes #8079
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
8c48cadd9c docs: document that fusermount3 may be needed when mounting/unmounting
See: https://forum.rclone.org/t/documentation-fusermount-vs-fusermount3/47816/
2024-09-18 12:57:01 +01:00
Nick Craig-Wood
76e1ba8c46 Add rishi.sridhar to contributors 2024-09-18 12:57:01 +01:00
Nick Craig-Wood
232e4cd18f Add quiescens to contributors 2024-09-18 12:57:00 +01:00
buengese
88141928f2 docs/zoho: update options 2024-09-17 20:40:42 +01:00
buengese
a2a0388036 zoho: make upload cutoff configurable 2024-09-17 20:40:42 +01:00
buengese
48543d38e8 zoho: add support for private spaces 2024-09-17 20:40:42 +01:00
buengese
eceb390152 zoho: try to handle rate limits a bit better 2024-09-17 20:40:42 +01:00
buengese
f4deffdc96 zoho: print clear error message when missing oauth scope 2024-09-17 20:40:42 +01:00
buengese
c172742cef zoho: switch to large file upload API for larger files, fix missing URL encoding of filenames for the upload API 2024-09-17 20:40:42 +01:00
buengese
7daed30754 zoho: use download server to accelerate downloads
Co-authored-by: rishi.sridhar <rishi.sridhar@zohocorp.com>
2024-09-17 20:40:42 +01:00
quiescens
b1b4c7f27b opendrive: add about support to backend 2024-09-17 17:20:42 +01:00
wiserain
ed84553dc1 pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation 
of existing login methods and requirement for additional verifications. 

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing 
rest.Client and incorporate the necessary headers including 
x-captcha-token for each request.

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.

Fixes #7950 #8005
2024-09-18 01:09:21 +09:00
Nick Craig-Wood
c94edbb76b webdav: nextcloud: implement backoff and retry for 423 LOCKED errors
When uploading chunked files to nextcloud, it gives a 423 error while
it is merging files.

This waits for an exponentially increasing amount of time for it to
clear.

If after we have received a 423 error we receive a 404 error then we
assume all is good as this is what appears to happen in practice.
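
A sketch of the retry loop (illustrative, not the actual
implementation; uploadChunk is a hypothetical helper):

    // waitForMerge retries while nextcloud responds 423 LOCKED.
    func waitForMerge(ctx context.Context) error {
        wait := 100 * time.Millisecond
        for {
            resp, err := uploadChunk(ctx)
            if resp != nil && resp.StatusCode == 423 {
                time.Sleep(wait) // nextcloud is still merging chunks
                wait *= 2        // back off exponentially
                continue
            }
            if resp != nil && resp.StatusCode == 404 {
                return nil // a 404 after a 423 means the merge completed
            }
            return err
        }
    }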

Fixes #7109
2024-09-17 16:46:02 +01:00
Nick Craig-Wood
2dcb327bc0 s3: fix rclone ignoring static credentials when env_auth=true
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.

However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.

This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
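
So a config like this example now uses the static keys again even
though env_auth is true:

    [s3]
    type = s3
    env_auth = true
    access_key_id = AKIAXXXXXXXXXXXXXXXX
    secret_access_key = XXXX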

Fixes #8067
2024-09-17 16:07:56 +01:00
Nick Craig-Wood
874d66658e fs: fix setting stringArray config values from environment variables
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:

    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value

This was caused by the parser trying to parse the input string as a
JSON value.

When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally, however this turned out not to be true.

A defined representation was chosen - a comma separated string and
this was documented and tests were introduced in this patch.

This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

Would be interpreted as

    --exclude "a,b"

Whereas this new code will interpret it as

    --exclude "a" --exclude "b"

The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.

If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

(Note this needs to include the quotes, so in a unix shell that would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
3af757e26d rc: fix default value of --metrics-addr
Before this fix it was empty string, which isn't a good default for a
stringArray.
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
fef1b61585 fs: fix --dump filters not always appearing
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance whether --dump filters would
show the filters or not as it was depending on the "main" block having
being read first to set the Dump flags.

This initialises the options blocks in a defined order which is
alphabetically but with main first which fixes the problem.
2024-09-13 15:52:51 +01:00
Nick Craig-Wood
3fca7a60a5 docs: correct notes on docker manual build 2024-09-13 15:52:51 +01:00
Nick Craig-Wood
6b3f41fa0c Add ttionya to contributors 2024-09-13 15:52:51 +01:00
ttionya
3d0ee47aa2 build: fix docker release build - fixes #8062
This updates the action to use `docker/build-push-action` instead of `ilteoood/docker_buildx`
which fixes the build problem in testing.
2024-09-12 17:57:53 +01:00
Pawel Palucha
da70088b11 docs: add section for improving performance for s3 2024-09-12 11:29:35 +01:00
Nick Craig-Wood
1bc9b94cf2 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects which don't have a JSON
body and is irrelevant.
2024-09-10 16:19:38 +01:00
Nick Craig-Wood
15a026d3be Add Divyam to contributors 2024-09-10 16:19:38 +01:00
Divyam
ad122c6f6f serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-09 10:07:25 +01:00
Nick Craig-Wood
b155231cdd Start v1.69.0-DEV development 2024-09-08 17:22:19 +01:00
Nick Craig-Wood
49f69196c2 Version v1.68.0 2024-09-08 16:21:56 +01:00
Nick Craig-Wood
3f7651291b gofile: fix failed downloads on newly uploaded objects
The upload routine no longer returns a url to download the object.

This fixes the problem by fetching it if necessary when we attempt to
Open the object.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
796013dd06 gofile: fix Move a file
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).

However we know the parent ID as it is in the directory cache, so use
that instead.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
e0da406ca7 test_all: mark linkbox fs/sync test TestSyncOverlapWithFilter as ignore
This gives the error

> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4

This is because linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
2024-09-08 14:58:14 +01:00
albertony
9a02c04028 jottacloud: fix setting of metadata on server side move - fixes #7900 2024-09-07 16:00:03 +01:00
albertony
918185273f docs: group the different options affecting lsjson output 2024-09-07 09:41:08 +01:00
Nick Craig-Wood
3f2074901a fichier: fix server side move - fixes #7856
The server side move had a combination of bugs
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
2024-09-06 18:20:10 +01:00
Nick Craig-Wood
648afc7df4 fichier: Fix detection of Flood Detected error 2024-09-06 18:20:10 +01:00
Nick Craig-Wood
16e0245a8e rc: add vfs/queue-set-expiry to adjust expiry of items in the VFS queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
59acb9dfa9 rc: add vfs/queue to show the status of the upload queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
bfec159504 vfs: keep a record of the file size in the writeback queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
842396c8a0 build: fix gocritic change missed in merge
The original problem was introduced here

bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here

f1a84d171e build: fix build after update
2024-09-06 17:33:35 +01:00
Nick Craig-Wood
5f9a201b45 Add Oleg Kunitsyn to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
22583d0a5f Add fsantagostinobietti to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
f1466a429c Add Mathieu Moreau to contributors 2024-09-06 17:33:35 +01:00
Florian Klink
e3b09211b8 lib/sd-activation: wrap coreos/go-systemd
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.

Stub it on our side, we can still remove this once it gets merged.
2024-09-06 17:21:56 +01:00
Florian Klink
156feff9f2 sftp: support listening on passed FDs 2024-09-06 17:21:56 +01:00
Florian Klink
b29a22095f http: fix addr CLI arg help text
This was missing the fact that rclone also supports listening on Unix Domain
Sockets.
2024-09-06 17:21:56 +01:00
Florian Klink
861c01caf5 http: support listening on passed FDs
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed by
`--{{ .Prefix }}addr`).

This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:

```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
2024-09-06 17:21:56 +01:00
Nick Craig-Wood
f1a84d171e build: fix build after update
This adds an import missed in

bcdfad3c83 build: update logging statements to make json log work #6038
2024-09-06 17:18:46 +01:00
albertony
bcdfad3c83 build: update logging statements to make json log work - fixes #6038
This changes log statements from log to fs package, which is required for --use-json-log
to properly make log output in JSON format. The recently added custom linting rule,
handled by ruleguard via gocritic via golangci-lint, warns about these and suggests
the alternative. Fixing was therefore basically running "golangci-lint run --fix",
although some manual fixup of mainly imports are necessary following that.
2024-09-06 17:04:18 +01:00
albertony
88b0757288 build: update custom linting rule for log to suggest new non-format functions 2024-09-06 17:04:18 +01:00
albertony
33d6c3f92f fs: add non-format variants of log functions to avoid non-constant format string warnings 2024-09-06 17:04:18 +01:00
albertony
752809309d fs: add log Printf, Fatalf and Panicf 2024-09-06 17:04:18 +01:00
albertony
4a54cc134f fs: refactor base log method name for improved consistency 2024-09-06 17:04:18 +01:00
albertony
dfc2c98bbf fs: refactor log statements to use common helper 2024-09-06 17:04:18 +01:00
albertony
604d6bcb9c build: enable custom linting rules with ruleguard via gocritic 2024-09-06 17:04:18 +01:00
Oleg Kunitsyn
d15704ef9f rcserver: implement prometheus metrics on a dedicated port - fixes #7940 2024-09-06 15:00:36 +01:00
fsantagostinobietti
26bc9826e5 swift: add total/free space info in about command.
With the enhancement in version v2.0.3 of the ncw/swift library, we can now get total and free space info from remotes that support this feature (e.g. Blomp storage).
2024-09-06 12:46:51 +01:00
Mathieu Moreau
2a28b0eaf0 docs: filtering: added Byte unit for min/max-size parameters. 2024-09-06 12:28:29 +01:00
Nick Craig-Wood
2d1c2b1f76 config encryption: set, remove and check to manage config file encryption #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
ffb2e2a6de config: use --password-command to set config file password if supplied
Before this change, rclone ignored the --password-command on the
rclone config setting except when decrypting an existing config file.

This change allows for offloading the password storage/generation into an
external hardware key or other protected password storage.
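
For example (the `pass` invocation is illustrative; any command that
prints the password works):

    rclone config encryption set --password-command "pass show rclone/config"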

Fixes #7859
2024-09-06 10:34:29 +01:00
Nick Craig-Wood
c9c283533c config: factor --password-command code into its own function #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
71799d7efd Add yuval-cloudinary to contributors 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
8f4fdf6cc8 Add nipil to contributors 2024-09-06 10:34:29 +01:00
yuval-cloudinary
91b11f9eac documentation: add cheatsheet for configuration encryption 2024-09-05 17:39:25 +01:00
nipil
b49927fbd0 docs: more secure two-step signature and hash validation 2024-09-05 16:54:26 +01:00
Nick Craig-Wood
1a8b7662e7 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
6ba3e24853 serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
802a938bd1 zoho: fix inefficiencies uploading with new API to avoid throttling
Before this fix, rclone queried the uploaded object to find its size
and modtime after upload as the API did not return these items.

Zoho have subsequently modified the API to return these items so
rclone uses them to avoid an API call.

This should help with rclone being throttled by Zoho.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/20
2024-09-04 10:45:47 +01:00
Nick Craig-Wood
9deb3e8adf Add crystalstall to contributors 2024-09-04 10:45:12 +01:00
crystalstall
296281a6eb docs: fix some function names in comments
Signed-off-by: crystalstall <crystalruby@qq.com>
2024-09-02 18:20:08 +02:00
albertony
711478554e lib/file: use builtin MkdirAll with go1.22 instead of our own custom version for windows
Starting with go1.22 the standard os.MkdirAll has improved its handling of volume names,
and as part of that it now stops recursing into the parent directory if it is a volume name
(see: cd589c8a73).
This is similar to our main change and the reason for creating a custom version. When
building with go1.22 or newer we can therefore stop using our custom version, with the
advantage that we automatically get current and future relevant improvements from golang.
To support building with go1.21 the existing custom version is still kept, and therefore
also our wrapper function file.MkdirAll - but it now just calls os.MkdirAll with go1.22
or newer on Windows.
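
A sketch of the gating with build constraints (illustrative, not the
actual file layout):

    //go:build windows && go1.22

    package file

    import "os"

    // MkdirAll defers to the standard library, which handles volume
    // names correctly from go1.22 onwards.
    func MkdirAll(path string, perm os.FileMode) error {
        return os.MkdirAll(path, perm)
    }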

See #5401, #6420 and acf1e2df84 for details about the
creation of our custom version of MkdirAll.
2024-09-02 18:16:38 +02:00
albertony
906aef91fa docs: document that paths using volume guids are supported 2024-09-02 18:01:47 +02:00
Nick Craig-Wood
6b58cd0870 s3: fix accounting for multipart transfers after migration to SDKv2 #4989 2024-08-31 20:11:53 +01:00
Sebastian Bünger
af9f8ced80 yandex: implement custom user agent to help with upload speeds 2024-08-29 18:25:08 +01:00
Georg Welzel
c63f1865f3 operations: copy: generate stable partial suffix 2024-08-28 08:45:38 +02:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.
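
For example:

    // before: a hand-rolled helper was needed
    func minInt(a, b int) int {
        if a < b {
            return a
        }
        return b
    }

    // after: the predeclared builtins accept one or more ordered operands
    smallest := min(a, b)
    largest := max(a, b, c)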

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-08-17 13:09:44 +02:00
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong, it seems `application/x-subrip` is the official
extension coming from the official mime database, at least (and still
works with the Samsung TV I tested with). Also add that one to `fs/
mimetype.go`.
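
A sketch of the registration (the standard library mime package
supports this; exact placement in fs/mimetype.go may differ):

    import "mime"

    func init() {
        // map .srt to the registered subtitle type instead of text/srt
        if err := mime.AddExtensionType(".srt", "application/x-subrip"); err != nil {
            panic(err)
        }
    }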

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element to subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
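
As a sketch (the node type is hypothetical):

    // before: one entry per basename, so video.sub clobbered video.idx
    subtitlesByName := map[string]node{}

    // after: keep every subtitle sharing a basename together
    subtitlesByName := map[string][]node{}
    subtitlesByName[base] = append(subtitlesByName[base], n)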
2024-08-15 22:01:52 +01:00
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
It seems pretty common for subtitles to be put in a
subdirectory called "Subs", rather than in the same directory as the
media file itself.

This covers that usecase, by checking the returned listing for a
directory called "Subs" to exist.

If it does, its child nodes are added to the list before being
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
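
For example:

    rclone copy --local-no-clone /src /dst
    # functionally equivalent:
    rclone copy --disable Copy /src /dst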
2024-08-15 15:36:38 +01:00
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
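
The mechanism in miniature: a hedged sketch of clone-then-fall-back using golang.org/x/sys/unix on darwin (an illustration, not rclone's actual code):

```go
//go:build darwin

// Hedged sketch: try an APFS clone, fall back to a plain byte copy.
package main

import (
	"fmt"
	"io"
	"os"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an instant APFS clone first and falls back to a plain
// byte copy when cloning is unsupported (e.g. non-APFS volumes).
func cloneOrCopy(src, dst string) error {
	if err := unix.Clonefile(src, dst, 0); err == nil {
		return nil // cloned: instant and storage-deduplicated
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(cloneOrCopy("a.txt", "b.txt"))
}
```
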
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
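
A self-contained sketch of the mechanism behind --unix-socket: route an HTTP client over a unix domain socket via a custom DialContext (the socket path and endpoint here are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"strings"
)

// newUnixClient returns an HTTP client that sends every request over the
// given unix domain socket instead of TCP.
func newUnixClient(socketPath string) *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}
}

func main() {
	client := newUnixClient("/tmp/my.socket")
	// The URL host only fills the Host header; routing goes via the socket.
	resp, err := client.Post("http://localhost/core/stats", "application/json", strings.NewReader("{}"))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```
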
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead of the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
2024-08-14 21:13:09 +01:00
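
For intuition, a toy sketch of the parallel variant's core idea: keep a bounded number of chunk reads in flight (fetchChunk and the names here are illustrative, not rclone's API):

```go
package main

import (
	"fmt"
	"sync"
)

// fetchChunk stands in for a ranged read of one chunk from the remote.
func fetchChunk(n int) string { return fmt.Sprintf("chunk-%d", n) }

// readParallel fetches chunks concurrently, with `streams` limiting how
// many are in flight - the idea behind --vfs-read-chunk-streams.
func readParallel(chunks, streams int) []string {
	out := make([]string, chunks)
	sem := make(chan struct{}, streams) // bound the in-flight chunk reads
	var wg sync.WaitGroup
	for i := 0; i < chunks; i++ {
		wg.Add(1)
		sem <- struct{}{}
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }()
			out[i] = fetchChunk(i)
		}(i)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(readParallel(8, 4))
}
```
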
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
2024-08-14 18:25:40 +01:00
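
The shape of the fix, shown with fmt.Printf for brevity (the real changes were to fs.Logf and friends):

```go
package main

import "fmt"

func main() {
	msg := "upload failed: 100%! retrying"
	// Before: non-constant format string, flagged by govet, and the '%'
	// inside msg is misinterpreted as a formatting directive.
	fmt.Printf(msg + "\n")
	// After: constant format string, with the value passed as an argument.
	fmt.Printf("%s\n", msg)
}
```
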
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed, so nolint lines were added.

This should probably be done in a neater way, perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency for the SDKv1 and this commit removes it
from go.mod also.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded,
whereas the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
2024-08-07 12:13:13 +01:00
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: fix correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short lifetimes
for added security, and they get renewed automatically. This means
that client certificate can expire in the middle of long running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
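
A hedged sketch of the reload using the standard library's GetClientCertificate hook; the file paths and bookkeeping are illustrative:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"sync"
	"time"
)

// certReloader re-parses the key pair from disk shortly before the leaf
// certificate expires, so long-running commands keep a valid cert.
type certReloader struct {
	mu       sync.Mutex
	cert     *tls.Certificate
	expiry   time.Time
	certFile string
	keyFile  string
}

func (r *certReloader) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cert == nil || time.Until(r.expiry) < 30*time.Second {
		cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
		if err != nil {
			return nil, err
		}
		leaf, err := x509.ParseCertificate(cert.Certificate[0])
		if err != nil {
			return nil, err
		}
		r.cert, r.expiry = &cert, leaf.NotAfter
	}
	return r.cert, nil
}

func main() {
	r := &certReloader{certFile: "client.crt", keyFile: "client.key"}
	_ = &tls.Config{GetClientCertificate: r.get} // plugged into the HTTP client
}
```
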
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to an immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated; however, the replacement gives a go
vet warning, so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
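
The replacement pattern, as a generic example rather than the exact lib/mmap code:

```go
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	data := [4]byte{1, 2, 3, 4}
	p := unsafe.Pointer(&data[0])

	// Building a slice via reflect.SliceHeader trips the deprecation
	// lint. The modern replacement is unsafe.Slice, which turns a
	// pointer and a length into a []byte directly.
	b := unsafe.Slice((*byte)(p), len(data))
	fmt.Println(b) // [1 2 3 4]
}
```
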
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request for a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
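After this change a timetable like the following can be written with semicolons instead of spaces, which is handy where unquoted spaces are awkward (values illustrative):

    rclone copy src: dst: --bwlimit "08:00,512k;12:00,10M;13:00,512k;23:00,off"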
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, the gcid required for uploads was always calculated locally,
which was resource-intensive and time-consuming. By first checking for a
cached gcid on the server, we can potentially avoid the local calculation
entirely. This significantly improves upload speed, especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
Fixes #7824

Statements like `rclone copy <somewhere> .` would spontaneously miss files
if `.` expanded to a path containing a fullwidth replacement character.
This was due to the incorrect order in which relative paths and decoding
were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The VFS used the hardcoded OS encoding when creating a temp file, but
decoded it with the encoding for the local filesystem (--local-encoding)
when copying it to the remote. This caused failures when filenames
contained special characters. The hardcoded OS encoding is now used
uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- move in use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bool's properly and this isn't
likely to be fixed by the Go team who regard sscanf as a mistake.

This only uses sscan for integers and uses the correct routine for
everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
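
A sketch of that per-type dispatch: dedicated parsers for bools and durations, sscan kept only for integers (illustrative, not rclone's exact configstruct code):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseValue dispatches on the destination type: fmt.Sscanln mishandles
// bools, so those and durations get their dedicated parsers.
func parseValue(s string, into any) error {
	switch v := into.(type) {
	case *bool:
		b, err := strconv.ParseBool(s)
		*v = b
		return err
	case *time.Duration:
		d, err := time.ParseDuration(s)
		*v = d
		return err
	case *int:
		_, err := fmt.Sscanln(s, v)
		return err
	default:
		return fmt.Errorf("unsupported type %T", into)
	}
}

func main() {
	var b bool
	var d time.Duration
	fmt.Println(parseValue("true", &b), b)  // <nil> true
	fmt.Println(parseValue("1h30m", &d), d) // <nil> 1h30m0s
}
```
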
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a
24-byte time.Time data structure in long-lived data structures. So although
the golang sftp package uses time.Time as its external interface, we can
re-encode the value back to the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
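
The packing trick in miniature, using only the standard library:

```go
package main

import (
	"fmt"
	"time"
)

// pack stores an mtime as uint32 unix seconds (the SFTP wire format,
// 4 bytes) instead of a 24-byte time.Time.
func pack(t time.Time) uint32   { return uint32(t.Unix()) }
func unpack(s uint32) time.Time { return time.Unix(int64(s), 0) }

func main() {
	now := time.Now().Truncate(time.Second)
	fmt.Println(unpack(pack(now)).Equal(now)) // true (sub-second precision is lost)
}
```
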
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 11:14:26 +01:00
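
A sketch of the page-fetch decision the two settings control; the option names come from the commit, the logic here is illustrative:

```go
package main

import "fmt"

// shouldFetchNextPage decides whether to request another listing page:
// either always fetch until an empty page (fetch_until_empty_page), or
// re-fetch when the page is within thresholdPct of the limit
// (partial_page_fetch_threshold).
func shouldFetchNextPage(got, limit int, fetchUntilEmpty bool, thresholdPct int) bool {
	if got == 0 {
		return false // an empty page always terminates the listing
	}
	if fetchUntilEmpty {
		return true
	}
	if thresholdPct > 0 {
		return got*100 >= limit*thresholdPct
	}
	return got >= limit // spec behaviour: a short page means we are done
}

func main() {
	fmt.Println(shouldFetchNextPage(95, 100, false, 90)) // true: within 90% of the limit
	fmt.Println(shouldFetchNextPage(50, 100, false, 90)) // false
}
```
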
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
829 changed files with 91022 additions and 53542 deletions

View File

@@ -17,22 +17,21 @@ on:
manual:
description: Manual run (bypass default conditions)
type: boolean
required: true
default: true
jobs:
build:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +42,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -59,14 +58,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,20 +75,14 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.24.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
- job_name: go1.23
os: ubuntu-latest
go: '1.20'
quicktest: true
racequicktest: true
- job_name: go1.21
os: ubuntu-latest
go: '1.21'
go: '1.23'
quicktest: true
racequicktest: true
@@ -124,7 +117,8 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone
sudo apt-get update
sudo apt-get install -y fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone nfs-common
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@@ -217,7 +211,7 @@ jobs:
if: env.RCLONE_CONFIG_PASS != '' && matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
lint:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "lint"
runs-on: ubuntu-latest
@@ -232,12 +226,14 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.23.0-rc.1'
check-latest: true
cache: false
@@ -295,8 +291,12 @@ jobs:
- name: Scan for vulnerabilities
run: govulncheck ./...
- name: Scan edits of autogenerated files
run: bin/check_autogenerated_edits.py
if: github.event_name == 'pull_request'
android:
if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
@@ -311,7 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.24.0-rc.1'
- name: Set global environment variables
shell: bash

View File

@@ -1,77 +0,0 @@
name: Docker beta build
on:
push:
branches:
- master
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
# `secrets.GITHUB_TOKEN` is a secret that's automatically generated by
# GitHub Actions at the start of a workflow run to identify the job.
# This is used to authenticate against GitHub Container Registry.
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
push: true # push the image to ghcr
tags: |
ghcr.io/rclone/rclone:beta
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .

View File

@@ -0,0 +1,294 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_image.yml" -*-
name: Build & Push Docker Images
# Trigger the workflow on push or pull request
on:
push:
branches:
- '**'
tags:
- '**'
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build-image:
if: inputs.manual || (github.repository == 'rclone/rclone' && github.event_name != 'pull_request')
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
include:
- platform: linux/amd64
runs-on: ubuntu-24.04
- platform: linux/386
runs-on: ubuntu-24.04
- platform: linux/arm64
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v7
runs-on: ubuntu-24.04-arm
- platform: linux/arm/v6
runs-on: ubuntu-24.04-arm
name: Build Docker Image for ${{ matrix.platform }}
runs-on: ${{ matrix.runs-on }}
steps:
- name: Free Space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout Repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Set PLATFORM Variable
run: |
platform=${{ matrix.platform }}
echo "PLATFORM=${platform//\//-}" >> $GITHUB_ENV
- name: Set CACHE_NAME Variable
shell: python
run: |
import os, re
def slugify(input_string, max_length=63):
slug = input_string.lower()
slug = re.sub(r'[^a-z0-9 -]', ' ', slug)
slug = slug.strip()
slug = re.sub(r'\s+', '-', slug)
slug = re.sub(r'-+', '-', slug)
slug = slug[:max_length]
slug = re.sub(r'[-]+$', '', slug)
return slug
ref_name_slug = "cache"
if os.environ.get("GITHUB_REF_NAME") and os.environ['GITHUB_EVENT_NAME'] == "pull_request":
ref_name_slug += "-pr-" + slugify(os.environ['GITHUB_REF_NAME'])
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"CACHE_NAME={ref_name_slug}\n")
- name: Get ImageOS
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
with:
result-encoding: string
script: |
return process.env.ImageOS
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,manifest-descriptor # Important for digest annotation (used by Github packages)
with:
images: |
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Load Go Build Cache for Docker
id: go-cache
uses: actions/cache@v4
with:
key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
# Cache only the go builds, the module download is cached via the docker layer caching
path: |
go-build-cache
- name: Inject Go Build Cache into Docker
uses: reproducible-containers/buildkit-cache-dance@v3
with:
cache-map: |
{
"go-build-cache": "/root/.cache/go-build"
}
skip-extraction: ${{ steps.go-cache.outputs.cache-hit }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and Publish Image Digest
id: build
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
provenance: false
# don't specify 'tags' here (error "get can't push tagged ref by digest")
# tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
platforms: ${{ matrix.platform }}
outputs: |
type=image,name=ghcr.io/${{ env.REPO_NAME }},push-by-digest=true,name-canonical=true,push=true
cache-from: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}
cache-to: |
type=registry,ref=ghcr.io/${{ env.REPO_NAME }}:build-${{ env.CACHE_NAME }}-${{ env.PLATFORM }},image-manifest=true,mode=max,compression=zstd
- name: Export Image Digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
retention-days: 1
if-no-files-found: error
merge-image:
name: Merge & Push Final Docker Image
runs-on: ubuntu-24.04
needs:
- build-image
steps:
- name: Download Image Digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
- name: Set REPO_NAME Variable
run: |
echo "REPO_NAME=`echo ${{github.repository}} | tr '[:upper:]' '[:lower:]'`" >> ${GITHUB_ENV}
- name: Extract Metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v5
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
with:
images: |
${{ env.REPO_NAME }}
ghcr.io/${{ env.REPO_NAME }}
labels: |
org.opencontainers.image.url=https://github.com/rclone/rclone/pkgs/container/rclone
org.opencontainers.image.vendor=${{ github.repository_owner }}
org.opencontainers.image.authors=rclone <https://github.com/rclone>
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.revision=${{ github.sha }}
tags: |
type=sha
type=ref,event=pr
type=ref,event=branch
type=semver,pattern={{version}}
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=beta,enable={{is_default_branch}}
- name: Extract Tags
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
tags = [f"--tag '{tag}'" for tag in metadata["tags"]]
tags_string = " ".join(tags)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"TAGS={tags_string}\n")
- name: Extract Annotations
shell: python
run: |
import json, os
metadata_json = os.environ['DOCKER_METADATA_OUTPUT_JSON']
metadata = json.loads(metadata_json)
annotations = [f"--annotation '{annotation}'" for annotation in metadata["annotations"]]
annotations_string = " ".join(annotations)
with open(os.environ['GITHUB_ENV'], 'a') as env:
env.write(f"ANNOTATIONS={annotations_string}\n")
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
# either be the user whom created the Release or manually triggered
# the workflow_dispatch.
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create & Push Manifest List
working-directory: /tmp/digests
run: |
docker buildx imagetools create \
${{ env.TAGS }} \
${{ env.ANNOTATIONS }} \
$(printf 'ghcr.io/${{ env.REPO_NAME }}@sha256:%s ' *)
- name: Inspect and Run Multi-Platform Image
run: |
docker buildx imagetools inspect --raw ${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker buildx imagetools inspect --raw ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }}
docker run --rm ghcr.io/${{ env.REPO_NAME }}:${{ steps.meta.outputs.version }} version

View File

@@ -0,0 +1,49 @@
---
# Github Actions release for rclone
# -*- compile-command: "yamllint -f parsable build_publish_docker_plugin.yml" -*-
name: Release Build for Docker Plugin
on:
release:
types: [published]
workflow_dispatch:
inputs:
manual:
description: Manual run (bypass default conditions)
type: boolean
default: true
jobs:
build_docker_volume_plugin:
if: inputs.manual || github.repository == 'rclone/rclone'
name: Build docker plugin job
runs-on: ubuntu-latest
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

View File

@@ -1,77 +0,0 @@
name: Docker release build
on:
release:
types: [published]
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Get actual patch version
id: actual_patch_version
run: echo ::set-output name=ACTUAL_PATCH_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g')
- name: Get actual minor version
id: actual_minor_version
run: echo ::set-output name=ACTUAL_MINOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1,2)
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
needs: build
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Build and publish docker plugin
shell: bash
run: |
VER=${GITHUB_REF#refs/tags/}
PLUGIN_USER=rclone
docker login --username ${{ secrets.DOCKER_HUB_USER }} \
--password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
export PLUGIN_USER PLUGIN_ARCH
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
done
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

View File

@@ -13,6 +13,7 @@ linters:
- stylecheck
- unused
- misspell
- gocritic
#- prealloc
#- maligned
disable-all: true
@@ -98,3 +99,46 @@ linters-settings:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"

View File

@@ -209,7 +209,7 @@ altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
test_all -backends drive
### Full integration testing
@@ -490,7 +490,7 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -508,7 +508,7 @@ You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from genric config questions (eg `region` and `endpoint).
- Exclude your provider from generic config questions (eg `region` and `endpoint).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.

View File

@@ -1,19 +1,47 @@
FROM golang:alpine AS builder
COPY . /go/src/github.com/rclone/rclone/
ARG CGO_ENABLED=0
WORKDIR /go/src/github.com/rclone/rclone/
RUN apk add --no-cache make bash gawk git
RUN \
CGO_ENABLED=0 \
make
RUN ./rclone version
RUN echo "**** Set Go Environment Variables ****" && \
go env -w GOCACHE=/root/.cache/go-build
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
make \
bash \
gawk \
git
COPY go.mod .
COPY go.sum .
RUN echo "**** Download Go Dependencies ****" && \
go mod download -x
RUN echo "**** Verify Go Dependencies ****" && \
go mod verify
COPY . .
RUN --mount=type=cache,target=/root/.cache/go-build,sharing=locked \
echo "**** Build Binary ****" && \
make
RUN echo "**** Print Version Binary ****" && \
./rclone version
# Begin final image
FROM alpine:latest
RUN apk --no-cache add ca-certificates fuse3 tzdata && \
echo "user_allow_other" >> /etc/fuse.conf
RUN echo "**** Install Dependencies ****" && \
apk add --no-cache \
ca-certificates \
fuse3 \
tzdata && \
echo "Enable user_allow_other in fuse" && \
echo "user_allow_other" >> /etc/fuse.conf
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/

View File

@@ -21,6 +21,8 @@ Current active maintainers of rclone are:
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**

MANUAL.html (generated, 5881 lines) - diff suppressed because it is too large

MANUAL.md (generated, 6370 lines) - diff suppressed because it is too large

MANUAL.txt (generated, 6311 lines) - diff suppressed because it is too large

View File

@@ -144,10 +144,14 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
rcdocs: rclone
bin/make_rc_docs.sh

View File

@@ -55,14 +55,18 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
@@ -89,10 +93,12 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
@@ -101,9 +107,11 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)

View File

@@ -47,13 +47,20 @@ Early in the next release cycle update the dependencies.
* `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
go 1.22.0
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
go mod tidy -go=1.22 -compat=1.22
```
If the `go mod tidy` fails use the output from it to remove the
@@ -86,6 +93,16 @@ build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
### Major versions
The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:
go run github.com/icholy/gomajor@latest list -major
Expect API breakage when updating major versions.
## Tidy beta
At some point after the release run
@@ -114,8 +131,8 @@ Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* Do the steps as above
* make startstable
* Do the steps as above
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md
@@ -168,6 +185,8 @@ docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/a
To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View File

@@ -1 +1 @@
v1.68.0
v1.70.0

View File

@@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {

View File

@@ -10,6 +10,7 @@ import (
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/cloudinary"
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
@@ -17,13 +18,16 @@ import (
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/iclouddrive"
_ "github.com/rclone/rclone/backend/imagekit"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
@@ -39,6 +43,7 @@ import (
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"

File diff suppressed because it is too large

View File

@@ -3,16 +3,149 @@
package azureblob
import (
"context"
"encoding/base64"
"strings"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) InternalTest(t *testing.T) {
// Check first feature flags are set on this
// remote
func TestBlockIDCreator(t *testing.T) {
// Check creation and random number
bic, err := newBlockIDCreator()
require.NoError(t, err)
bic2, err := newBlockIDCreator()
require.NoError(t, err)
assert.NotEqual(t, bic.random, bic2.random)
assert.NotEqual(t, bic.random, [8]byte{})
// Set random to known value for tests
bic.random = [8]byte{1, 2, 3, 4, 5, 6, 7, 8}
chunkNumber := uint64(0xFEDCBA9876543210)
// Check creation of ID
want := base64.StdEncoding.EncodeToString([]byte{0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10, 1, 2, 3, 4, 5, 6, 7, 8})
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", want)
got := bic.newBlockID(chunkNumber)
assert.Equal(t, want, got)
assert.Equal(t, "/ty6mHZUMhABAgMEBQYHCA==", got)
// Test checkID is working
assert.NoError(t, bic.checkID(chunkNumber, got))
assert.ErrorContains(t, bic.checkID(chunkNumber, "$"+got), "illegal base64")
assert.ErrorContains(t, bic.checkID(chunkNumber, "AAAA"+got), "bad block ID length")
assert.ErrorContains(t, bic.checkID(chunkNumber+1, got), "expecting decoded")
assert.ErrorContains(t, bic2.checkID(chunkNumber, got), "random bytes")
}
func (f *Fs) testFeatures(t *testing.T) {
// Check first feature flags are set on this remote
enabled := f.Features().SetTier
assert.True(t, enabled)
enabled = f.Features().GetTier
assert.True(t, enabled)
}
type ReadSeekCloser struct {
*strings.Reader
}
func (r *ReadSeekCloser) Close() error {
return nil
}
// Stage a block at remote but don't commit it
func (f *Fs) stageBlockWithoutCommit(ctx context.Context, t *testing.T, remote string) {
var (
containerName, blobPath = f.split(remote)
containerClient = f.cntSVC(containerName)
blobClient = containerClient.NewBlockBlobClient(blobPath)
data = "uncommitted data"
blockID = "1"
blockIDBase64 = base64.StdEncoding.EncodeToString([]byte(blockID))
)
r := &ReadSeekCloser{strings.NewReader(data)}
_, err := blobClient.StageBlock(ctx, blockIDBase64, r, nil)
require.NoError(t, err)
// Verify the block is staged but not committed
blockList, err := blobClient.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
require.NoError(t, err)
found := false
for _, block := range blockList.UncommittedBlocks {
if *block.Name == blockIDBase64 {
found = true
break
}
}
require.True(t, found, "Block ID not found in uncommitted blocks")
}
// This tests uploading a blob where it has uncommitted blocks with a different ID size.
//
// https://gauravmantri.com/2013/05/18/windows-azure-blob-storage-dealing-with-the-specified-blob-or-block-content-is-invalid-error/
//
// TestIntegration/FsMkdir/FsPutFiles/Internal/WriteUncommittedBlocks
func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
var (
ctx = context.Background()
remote = "testBlob"
)
// Multipart copy the blob please
oldUseCopyBlob, oldCopyCutoff := f.opt.UseCopyBlob, f.opt.CopyCutoff
f.opt.UseCopyBlob = false
f.opt.CopyCutoff = f.opt.ChunkSize
defer func() {
f.opt.UseCopyBlob, f.opt.CopyCutoff = oldUseCopyBlob, oldCopyCutoff
}()
// Create a blob with uncommitted blocks
f.stageBlockWithoutCommit(ctx, t, remote)
// Now attempt to overwrite the block with a different sized block ID to provoke this error
// Check the object does not exist
_, err := f.NewObject(ctx, remote)
require.Equal(t, fs.ErrorObjectNotFound, err)
// Upload a multipart file over the block with uncommitted chunks of a different ID size
size := 4*int(f.opt.ChunkSize) - 1
contents := random.String(size)
item := fstest.NewItem(remote, contents, fstest.Time("2001-05-06T04:05:06.499Z"))
o := fstests.PutTestContents(ctx, t, f, &item, contents, true)
// Check size
assert.Equal(t, int64(size), o.Size())
// Create a new blob with uncommitted blocks
newRemote := "testBlob2"
f.stageBlockWithoutCommit(ctx, t, newRemote)
// Copy over that block
dst, err := f.Copy(ctx, o, newRemote)
require.NoError(t, err)
// Check basics
assert.Equal(t, int64(size), dst.Size())
assert.Equal(t, newRemote, dst.Remote())
// Check contents
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents)
// Remove the object
require.NoError(t, dst.Remove(ctx))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
}
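
For reference, a minimal sketch of the block ID layout TestBlockIDCreator exercises: an 8-byte big-endian chunk number followed by the creator's 8 random bytes, standard base64 encoded (the function name here is mine, not the backend's):

package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// newBlockID mirrors the layout checked above: 8 bytes of
// big-endian chunk number, then 8 fixed random bytes.
func newBlockID(chunkNumber uint64, random [8]byte) string {
	buf := make([]byte, 16)
	binary.BigEndian.PutUint64(buf, chunkNumber)
	copy(buf[8:], random[:])
	return base64.StdEncoding.EncodeToString(buf)
}

func main() {
	fmt.Println(newBlockID(0xFEDCBA9876543210, [8]byte{1, 2, 3, 4, 5, 6, 7, 8}))
	// prints /ty6mHZUMhABAgMEBQYHCA== — the value asserted in the test
}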


@@ -15,13 +15,17 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestAzureBlob"
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -40,6 +44,7 @@ func TestIntegration2(t *testing.T) {
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
{Name: name, Key: "use_copy_blob", Value: "false"},
},
})
}
@@ -48,8 +53,13 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)
func TestValidateAccessTier(t *testing.T) {


@@ -237,6 +237,30 @@ msi_client_id, or msi_mi_res_id parameters.`,
Help: "Azure resource ID of the user-assigned MSI to use, if any.\n\nLeave blank if msi_client_id or msi_object_id specified.",
Advanced: true,
Sensitive: true,
}, {
Name: "disable_instance_discovery",
Help: `Skip requesting Microsoft Entra instance metadata
This should be set to true only by applications authenticating in
disconnected clouds, or private clouds such as Azure Stack.
It determines whether rclone requests Microsoft Entra instance
metadata from ` + "`https://login.microsoft.com/`" + ` before
authenticating.
Setting this to true will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
`,
Default: false,
Advanced: true,
}, {
Name: "use_az",
Help: `Use Azure CLI tool az for authentication
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
Don't set env_auth at the same time.
`,
Default: false,
Advanced: true,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
@@ -319,10 +343,12 @@ type Options struct {
Username string `config:"username"`
Password string `config:"password"`
ServicePrincipalFile string `config:"service_principal_file"`
DisableInstanceDiscovery bool `config:"disable_instance_discovery"`
UseMSI bool `config:"use_msi"`
MSIObjectID string `config:"msi_object_id"`
MSIClientID string `config:"msi_client_id"`
MSIResourceID string `config:"msi_mi_res_id"`
UseAZ bool `config:"use_az"`
Endpoint string `config:"endpoint"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
MaxStreamSize fs.SizeSuffix `config:"max_stream_size"`
@@ -393,8 +419,10 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
policyClientOptions := policy.ClientOptions{
Transport: newTransporter(ctx),
}
backup := service.ShareTokenIntentBackup
clientOpt := service.ClientOptions{
ClientOptions: policyClientOptions,
ClientOptions: policyClientOptions,
FileRequestIntent: &backup,
}
// Here we auth by setting one of cred, sharedKeyCred or f.client
@@ -412,7 +440,8 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
}
// Read credentials from the environment
options := azidentity.DefaultAzureCredentialOptions{
ClientOptions: policyClientOptions,
ClientOptions: policyClientOptions,
DisableInstanceDiscovery: opt.DisableInstanceDiscovery,
}
cred, err = azidentity.NewDefaultAzureCredential(&options)
if err != nil {
@@ -423,6 +452,13 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to create Azure CLI credentials: %w", err)
}
case opt.SASURL != "":
client, err = service.NewClientWithNoCredential(opt.SASURL, &clientOpt)
if err != nil {
@@ -897,7 +933,7 @@ func (o *Object) getMetadata(ctx context.Context) error {
// Hash returns the MD5 of an object returning a lowercase hex string
//
// May make a network request becaue the [fs.List] method does not
// May make a network request because the [fs.List] method does not
// return MD5 hashes for DirEntry
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
if ty != hash.MD5 {
@@ -1035,12 +1071,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if _, createErr := fc.Create(ctx, size, nil); createErr != nil {
return fmt.Errorf("update: unable to create file: %w", createErr)
}
} else {
} else if size != o.Size() {
// Resize the file if needed
if size != o.Size() {
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
if _, resizeErr := fc.Resize(ctx, size, nil); resizeErr != nil {
return fmt.Errorf("update: unable to resize while trying to update: %w ", resizeErr)
}
}
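
For orientation, the two new azurefiles auth options above map onto rclone.conf roughly as follows — a sketch, with illustrative remote names, not the backend docs. use_az forces the Azure CLI credential path, while disable_instance_discovery pairs with env_auth in disconnected or private clouds:

[myazurefiles]
type = azurefiles
use_az = true

[myazurefiles-disconnected]
type = azurefiles
env_auth = true
disable_instance_discovery = true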


@@ -61,7 +61,7 @@ const chars = "abcdefghijklmnopqrstuvwzyxABCDEFGHIJKLMNOPQRSTUVWZYX"
func randomString(charCount int) string {
strBldr := strings.Builder{}
for i := 0; i < charCount; i++ {
for range charCount {
randPos := rand.Int63n(52)
strBldr.WriteByte(chars[randPos])
}


@@ -42,9 +42,10 @@ type Bucket struct {
// LifecycleRule is a single lifecycle rule
type LifecycleRule struct {
DaysFromHidingToDeleting *int `json:"daysFromHidingToDeleting"`
DaysFromUploadingToHiding *int `json:"daysFromUploadingToHiding"`
FileNamePrefix string `json:"fileNamePrefix"`
DaysFromHidingToDeleting *int `json:"daysFromHidingToDeleting"`
DaysFromUploadingToHiding *int `json:"daysFromUploadingToHiding"`
DaysFromStartingToCancelingUnfinishedLargeFiles *int `json:"daysFromStartingToCancelingUnfinishedLargeFiles"`
FileNamePrefix string `json:"fileNamePrefix"`
}
// Timestamp is a UTC time when this file was uploaded. It is a base
@@ -129,10 +130,10 @@ type AuthorizeAccountResponse struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix interface{} `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.


@@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}


@@ -16,6 +16,7 @@ import (
"io"
"net/http"
"path"
"slices"
"strconv"
"strings"
"sync"
@@ -30,7 +31,8 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/multipart"
@@ -588,12 +590,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
for _, capability := range f.info.Allowed.Capabilities {
if capability == permission {
return true
}
}
return false
return slices.Contains(f.info.Allowed.Capabilities, permission)
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -921,7 +918,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
last := ""
return f.list(ctx, bucket, directory, prefix, addBucket, true, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
@@ -1274,7 +1271,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
toBeDeleted := make(chan *api.File, f.ci.Transfers)
var wg sync.WaitGroup
wg.Add(f.ci.Transfers)
for i := 0; i < f.ci.Transfers; i++ {
for range f.ci.Transfers {
go func() {
defer wg.Done()
for object := range toBeDeleted {
@@ -1317,16 +1314,22 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden b
// Check current version of the file
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove hide marker") {
toBeDeleted <- object
}
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "remove pending upload") {
toBeDeleted <- object
}
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
toBeDeleted <- object
if !operations.SkipDestructive(ctx, object.Name, "delete") {
toBeDeleted <- object
}
}
last = remote
tr.Done(ctx, nil)
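
These new guards use operations.SkipDestructive, which returns true when --dry-run is set (the action is logged instead of performed) or when --interactive is set and the user declines. A minimal self-contained sketch (object name illustrative):

package main

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/operations"
)

func main() {
	// Simulate --dry-run, as the DryRun subtests later in this change do
	ctx, ci := fs.AddConfig(context.Background())
	ci.DryRun = true
	skip := operations.SkipDestructive(ctx, "bucket/stale-upload", "remove pending upload")
	fmt.Println(skip) // true: the action is logged as skipped rather than queued
}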
@@ -1593,7 +1596,11 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
o.size = Size
// Use the UploadTimestamp if can't get file info
o.modTime = time.Time(UploadTimestamp)
return o.parseTimeString(Info[timeKey])
err = o.parseTimeString(Info[timeKey])
if err != nil {
return err
}
return nil
}
// decodeMetaData sets the metadata in the object from an api.File
@@ -1695,6 +1702,16 @@ func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeStringHelper converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time
func parseTimeStringHelper(timeString string) (time.Time, error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC(), nil
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@@ -1702,12 +1719,12 @@ func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
modTime, err := parseTimeStringHelper(timeString)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
o.modTime = modTime
return nil
}
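
For illustration, the same millisecond arithmetic as parseTimeStringHelper in a self-contained sketch (helper name mine, not the backend's):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseMillis splits a decimal millisecond count into whole seconds
// and a nanosecond remainder, exactly as the helper above does.
func parseMillis(s string) (time.Time, error) {
	ms, err := strconv.ParseInt(s, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	return time.Unix(ms/1e3, (ms%1e3)*1e6).UTC(), nil
}

func main() {
	t, _ := parseMillis("1500")
	fmt.Println(t) // 1970-01-01 00:00:01.5 +0000 UTC
}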
@@ -1861,6 +1878,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@@ -1917,7 +1935,7 @@ func init() {
// urlEncode encodes in with % encoding
func urlEncode(in string) string {
var out bytes.Buffer
for i := 0; i < len(in); i++ {
for i := range len(in) {
c := in[i]
if noNeedToEncode[c] {
_ = out.WriteByte(c)
@@ -1958,7 +1976,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err == nil {
fs.Debugf(o, "File is big enough for chunked streaming")
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil)
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil, options...)
if err != nil {
o.fs.putRW(rw)
return err
@@ -1990,7 +2008,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return err
}
calculatedSha1, _ := src.Hash(ctx, hash.SHA1)
if calculatedSha1 == "" {
@@ -2095,6 +2116,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(&response)
}
// Get modTime from the source; if --metadata is set, fetch the src metadata and get it from there.
// When metadata support is added to b2, this method will need a more generic name
func (o *Object) getModTime(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (time.Time, error) {
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return time.Time{}, fmt.Errorf("failed to read metadata from source object: %w", err)
}
// merge metadata into request and user metadata
for k, v := range meta {
k = strings.ToLower(k)
// For now, the only metadata we're concerned with is "mtime"
switch k {
case "mtime":
// mtime in meta overrides source ModTime
metaModTime, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", k, v, err)
} else {
modTime = metaModTime
}
default:
// Do nothing for now
}
}
return modTime, nil
}
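
To make the precedence concrete: an "mtime" key in the source metadata, parsed as RFC3339, overrides the source ModTime; on a parse failure the source time is kept. A standalone sketch of that override (values illustrative):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Fallback: the source object's ModTime
	modTime := time.Date(2001, 5, 6, 4, 5, 6, 0, time.UTC)
	meta := map[string]string{"mtime": "2009-05-06T04:05:06.499Z"}
	// Same parse as getModTime above: metadata mtime wins when valid
	if v, ok := meta["mtime"]; ok {
		if t, err := time.Parse(time.RFC3339Nano, v); err == nil {
			modTime = t
		}
	}
	fmt.Println(modTime) // 2009-05-06 04:05:06.499 +0000 UTC
}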
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
@@ -2126,7 +2177,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil)
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
}
@@ -2172,6 +2223,7 @@ This will dump something like this showing the lifecycle rules.
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
@@ -2198,12 +2250,13 @@ overwrites will still cause versions to be made.
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
`,
Opts: map[string]string{
"daysFromHidingToDeleting": "After a file has been hidden for this many days it is deleted. 0 is off.",
"daysFromUploadingToHiding": "This many days after uploading a file is hidden",
"daysFromHidingToDeleting": "After a file has been hidden for this many days it is deleted. 0 is off.",
"daysFromUploadingToHiding": "This many days after uploading a file is hidden",
"daysFromStartingToCancelingUnfinishedLargeFiles": "Cancels any unfinished large file versions after this many days",
},
}
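
A usage sketch for the new rule, alongside the existing ones (bucket name illustrative):

rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=1

Omitting all -o options just dumps the current rules, as the help text above notes.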
func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
var newRule api.LifecycleRule
if daysStr := opt["daysFromHidingToDeleting"]; daysStr != "" {
days, err := strconv.Atoi(daysStr)
@@ -2219,14 +2272,23 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
}
newRule.DaysFromUploadingToHiding = &days
}
if daysStr := opt["daysFromStartingToCancelingUnfinishedLargeFiles"]; daysStr != "" {
days, err := strconv.Atoi(daysStr)
if err != nil {
return nil, fmt.Errorf("bad daysFromStartingToCancelingUnfinishedLargeFiles: %w", err)
}
newRule.DaysFromStartingToCancelingUnfinishedLargeFiles = &days
}
bucketName, _ := f.split("")
if bucketName == "" {
return nil, errors.New("bucket required")
}
skip := operations.SkipDestructive(ctx, name, "update lifecycle rules")
var bucket *api.Bucket
if newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil {
if !skip && (newRule.DaysFromHidingToDeleting != nil || newRule.DaysFromUploadingToHiding != nil || newRule.DaysFromStartingToCancelingUnfinishedLargeFiles != nil) {
bucketID, err := f.getBucketID(ctx, bucketName)
if err != nil {
return nil, err
@@ -2283,7 +2345,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
@@ -2306,7 +2368,7 @@ it would do.
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
@@ -2325,7 +2387,7 @@ var commandHelp = []fs.CommandHelp{
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)


@@ -5,6 +5,7 @@ import (
"crypto/sha1"
"fmt"
"path"
"sort"
"strings"
"testing"
"time"
@@ -13,6 +14,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
@@ -184,57 +186,120 @@ func TestParseTimeString(t *testing.T) {
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestMetadata(t *testing.T) {
ctx := context.Background()
original := random.String(1000)
contents := fstest.Gz(t, original)
mimeType := "text/html"
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, mimeType, nil)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// We currently have a limited amount of metadata to test with B2
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
return headers
}
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limit to milliseconds since x-bz-info-src_last_modified_millis can't support any more precision
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 kB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
@@ -394,24 +459,161 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
t.Run("DryRun", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should be unchanged after dry run
before := listAllFiles(ctx, t, f, dirName)
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, true, false, 0))
after := listAllFiles(ctx, t, f, dirName)
assert.Equal(t, before, after)
})
t.Run("RealThing", func(t *testing.T) {
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Listing should reflect current state after cleanup
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
})
})
// Purge gets tested later
}
func (f *Fs) InternalTestCleanupUnfinished(t *testing.T) {
ctx := context.Background()
// CleanupUnfinished tests cleaning up unfinished large file uploads
t.Run("CleanupUnfinished", func(t *testing.T) {
dirName := "unfinished"
fileCount := 5
expectedFiles := []string{}
for i := 1; i < fileCount; i++ {
fileName := fmt.Sprintf("%s/unfinished-%d", dirName, i)
expectedFiles = append(expectedFiles, fileName)
obj := &Object{
fs: f,
remote: fileName,
}
objInfo := object.NewStaticObjectInfo(fileName, fstest.Time("2002-02-03T04:05:06.499999999Z"), -1, true, nil, nil)
_, err := f.newLargeUpload(ctx, obj, nil, objInfo, f.opt.ChunkSize, false, nil)
require.NoError(t, err)
}
checkListing(ctx, t, f, dirName, expectedFiles)
t.Run("DryRun", func(t *testing.T) {
// Listing should not change after dry run
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, expectedFiles)
})
t.Run("RealThing", func(t *testing.T) {
// Listing should be empty after real cleanup
require.NoError(t, f.cleanUp(ctx, false, true, 0))
checkListing(ctx, t, f, dirName, []string{})
})
})
}
func listAllFiles(ctx context.Context, t *testing.T, f *Fs, dirName string) []string {
bucket, directory := f.split(dirName)
foundFiles := []string{}
require.NoError(t, f.list(ctx, bucket, directory, "", false, true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
foundFiles = append(foundFiles, object.Name)
}
return nil
}))
sort.Strings(foundFiles)
return foundFiles
}
func checkListing(ctx context.Context, t *testing.T, f *Fs, dirName string, expectedFiles []string) {
foundFiles := listAllFiles(ctx, t, f, dirName)
sort.Strings(expectedFiles)
assert.Equal(t, expectedFiles, foundFiles)
}
func (f *Fs) InternalTestLifecycleRules(t *testing.T) {
ctx := context.Background()
opt := map[string]string{}
t.Run("InitState", func(t *testing.T) {
// There should be no lifecycle rules at the outset
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("DryRun", func(t *testing.T) {
// There should still be no lifecycle rules after each dry run operation
ctx, ci := fs.AddConfig(ctx)
ci.DryRun = true
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 0, len(lifecycleRules))
})
t.Run("RealThing", func(t *testing.T) {
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err := f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules := lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
delete(opt, "daysFromHidingToDeleting")
opt["daysFromUploadingToHiding"] = "40"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
opt["daysFromHidingToDeleting"] = "30"
lifecycleRulesIf, err = f.lifecycleCommand(ctx, "lifecycle", nil, opt)
lifecycleRules = lifecycleRulesIf.([]api.LifecycleRule)
require.NoError(t, err)
assert.Equal(t, 1, len(lifecycleRules))
assert.Equal(t, 30, *lifecycleRules[0].DaysFromHidingToDeleting)
assert.Equal(t, 40, *lifecycleRules[0].DaysFromUploadingToHiding)
})
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
t.Run("CleanupUnfinished", f.InternalTestCleanupUnfinished)
t.Run("LifecycleRules", f.InternalTestLifecycleRules)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -91,7 +91,7 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := 0
chunkSize := defaultChunkSize
@@ -104,11 +104,6 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
parts++
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
@@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
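
A sketch of the header routing introduced above (keys illustrative): options whose lower-cased key carries the x-bz-info- prefix are folded into the b2_start_large_file request info, while everything else is passed on as a plain HTTP option:

package main

import (
	"fmt"
	"strings"

	"github.com/rclone/rclone/fs"
)

const headerPrefix = "x-bz-info-" // as in the b2 backend

func main() {
	options := []fs.OpenOption{
		&fs.HTTPOption{Key: "X-Bz-Info-color", Value: "blue"},
		&fs.HTTPOption{Key: "Cache-Control", Value: "no-cache"},
	}
	info := map[string]string{}
	passThrough := []fs.OpenOption{}
	for _, option := range options {
		k, v := option.Header()
		k = strings.ToLower(k)
		if strings.HasPrefix(k, headerPrefix) {
			info[k[len(headerPrefix):]] = v // becomes file info: color=blue
		} else {
			passThrough = append(passThrough, option)
		}
	}
	fmt.Println(info, len(passThrough)) // map[color:blue] 1
}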
@@ -463,17 +478,14 @@ func (up *largeUpload) Copy(ctx context.Context) (err error) {
remaining = up.size
)
g.SetLimit(up.f.opt.UploadConcurrency)
for part := 0; part < up.parts; part++ {
for part := range up.parts {
// Fail fast: if an errgroup-managed function returns an error,
// gCtx is cancelled and there is no point in copying the other parts.
if gCtx.Err() != nil {
break
}
reqSize := remaining
if reqSize >= up.chunkSize {
reqSize = up.chunkSize
}
reqSize := min(remaining, up.chunkSize)
part := part // for the closure
g.Go(func() (err error) {


@@ -43,9 +43,9 @@ import (
"github.com/rclone/rclone/lib/jwtutil"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/rest"
"github.com/youmark/pkcs8"
"golang.org/x/oauth2"
)
const (
@@ -64,12 +64,10 @@ const (
// Globals
var (
// Description of how to auth for this app
oauthConfig = &oauth2.Config{
Scopes: nil,
Endpoint: oauth2.Endpoint{
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
},
oauthConfig = &oauthutil.Config{
Scopes: nil,
AuthURL: "https://app.box.com/api/oauth2/authorize",
TokenURL: "https://app.box.com/api/oauth2/token",
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -239,8 +237,8 @@ func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *boxCustomC
return claims, nil
}
func getSigningHeaders(boxConfig *api.ConfigJSON) map[string]interface{} {
signingHeaders := map[string]interface{}{
func getSigningHeaders(boxConfig *api.ConfigJSON) map[string]any {
signingHeaders := map[string]any{
"kid": boxConfig.BoxAppSettings.AppAuth.PublicKeyID,
}
return signingHeaders
@@ -256,8 +254,10 @@ func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
}
func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if block == nil {
return nil, errors.New("box: failed to PEM decode private key")
}
if len(rest) > 0 {
return nil, fmt.Errorf("box: extra data included in private key: %w", err)
}
@@ -619,7 +619,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
// fmt.Printf("...Error %v\n", err)
return "", err
}
// fmt.Printf("...Id %q\n", *info.Id)
@@ -966,6 +966,26 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// check if dest already exists
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if item != nil { // dest already exists, need to copy to temp name and then move
tempSuffix := "-rclone-copy-" + random.String(8)
fs.Debugf(remote, "dst already exists, copying to temp name %v", remote+tempSuffix)
tempObj, err := f.Copy(ctx, src, remote+tempSuffix)
if err != nil {
return nil, err
}
fs.Debugf(remote+tempSuffix, "moving to real name %v", remote)
err = f.deleteObject(ctx, item.ID)
if err != nil {
return nil, err
}
return f.Move(ctx, tempObj, remote)
}
// Copy the object
opts := rest.Opts{
Method: "POST",
@@ -1323,12 +1343,8 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
nextStreamPosition = streamPosition
for {
limit := f.opt.ListChunk
// box only allows a max of 500 events
if limit > 500 {
limit = 500
}
limit := min(f.opt.ListChunk, 500)
opts := rest.Opts{
Method: "GET",

View File

@@ -105,7 +105,7 @@ func (o *Object) commitUpload(ctx context.Context, SessionID string, parts []api
const defaultDelay = 10
var tries int
outer:
for tries = 0; tries < maxTries; tries++ {
for tries = range maxTries {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
@@ -203,7 +203,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, direct
errs := make(chan error, 1)
var wg sync.WaitGroup
outer:
for part := 0; part < session.TotalParts; part++ {
for part := range session.TotalParts {
// Check any errors
select {
case err = <-errs:
@@ -211,10 +211,7 @@ outer:
default:
}
reqSize := remaining
if reqSize >= chunkSize {
reqSize = chunkSize
}
reqSize := min(remaining, chunkSize)
// Make a block of memory
buf := make([]byte, reqSize)


@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
@@ -409,18 +410,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}
@@ -1088,13 +1087,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return cachedEntries, nil
}
func (f *Fs) recurse(ctx context.Context, dir string, list *walk.ListRHelper) error {
func (f *Fs) recurse(ctx context.Context, dir string, list *list.Helper) error {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
for i := 0; i < len(entries); i++ {
for i := range entries {
innerDir, ok := entries[i].(fs.Directory)
if ok {
err := f.recurse(ctx, innerDir.Remote(), list)
@@ -1140,7 +1139,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
// if we're here, we're gonna do a standard recursive traversal and cache everything
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
err = f.recurse(ctx, dir, list)
if err != nil {
return err
@@ -1430,7 +1429,7 @@ func (f *Fs) cacheReader(u io.Reader, src fs.ObjectInfo, originalRead func(inn i
}()
// wait until both are done
for c := 0; c < 2; c++ {
for range 2 {
<-done
}
}
@@ -1755,7 +1754,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
// Stats returns stats about the cache storage
func (f *Fs) Stats() (map[string]map[string]interface{}, error) {
func (f *Fs) Stats() (map[string]map[string]any, error) {
return f.cache.Stats()
}
@@ -1935,7 +1934,7 @@ var commandHelp = []fs.CommandHelp{
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (interface{}, error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (any, error) {
switch name {
case "stats":
return f.Stats()

View File

@@ -10,7 +10,6 @@ import (
goflag "flag"
"fmt"
"io"
"log"
"math/rand"
"os"
"path"
@@ -33,7 +32,7 @@ import (
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@@ -93,7 +92,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName)
fs.Logf(nil, "Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@@ -123,10 +122,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@@ -338,7 +337,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -361,14 +360,14 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
require.NoError(t, err)
require.Equal(t, int64(len(checkSample)), o.Size())
for i := 0; i < len(checkSample); i++ {
for i := range checkSample {
require.Equal(t, testData[i], checkSample[i])
}
}
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -388,7 +387,7 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
readData, err := runInstance.readDataFromRemote(t, rootFs, "data.bin", 0, testSize, false)
require.NoError(t, err)
for i := 0; i < len(readData); i++ {
for i := range readData {
require.Equalf(t, testData[i], readData[i], "at byte %v", i)
}
}
@@ -408,7 +407,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
// update in the wrapped fs
originalSize, err := runInstance.size(t, rootFs, "data.bin")
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
fs.Logf(nil, "original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@@ -417,7 +416,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@@ -425,7 +424,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
fs.Logf(nil, "updated size: %v", len(data2))
// get a new instance from the cache
if runInstance.wrappedIsExternal {
@@ -485,49 +484,49 @@ func TestInternalMoveWithNotify(t *testing.T) {
err = runInstance.retryBlock(func() error {
li, err := runInstance.list(t, rootFs, "test")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 2 {
log.Printf("not expected listing /test: %v", li)
fs.Logf(nil, "not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/second")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li)
fs.Logf(nil, "not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing: %v", li)
fs.Logf(nil, "complete listing: %v", li)
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -577,43 +576,43 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
fs.Logf(nil, "not found /test")
return fmt.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
fs.Logf(nil, "not found /test/one")
return fmt.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
fs.Logf(nil, "not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
fs.Logf(nil, "complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@@ -689,7 +688,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
co, ok := o.(*cache.Object)
require.True(t, ok)
for i := 0; i < 4; i++ { // read first 4
for i := range 4 { // read first 4
_ = runInstance.readDataFromObj(t, co, chunkSize*int64(i), chunkSize*int64(i+1), false)
}
cfs.CleanUpCache(true)
@@ -708,7 +707,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -743,7 +742,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
@@ -771,24 +770,24 @@ func TestInternalBug2117(t *testing.T) {
di, err := runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
time.Sleep(time.Second * 30)
di, err = runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
di, err = runInstance.list(t, rootFs, "test/dir1")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
di, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
}
@@ -829,7 +828,7 @@ func newRun() *run {
} else {
r.tmpUploadDir = uploadDir
}
log.Printf("Temp Upload Dir: %v", r.tmpUploadDir)
fs.Logf(nil, "Temp Upload Dir: %v", r.tmpUploadDir)
return r
}
@@ -850,8 +849,8 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@@ -875,12 +874,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -891,14 +890,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@@ -972,7 +971,7 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
f, err := os.CreateTemp("", "rclonecache-tempfile")
require.NoError(t, err)
for i := 0; i < int(cnt); i++ {
for range int(cnt) {
data := randStringBytes(int(chunk))
_, _ = f.Write(data)
}
@@ -1086,9 +1085,9 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
return err
}
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) {
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]any, error) {
var err error
var l []interface{}
var l []any
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
@@ -1192,7 +1191,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)
@@ -1216,7 +1215,7 @@ func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) ch
var err error
var state cache.BackgroundUploadState
for i := 0; i < 2; i++ {
for range 2 {
select {
case state = <-buCh:
// continue
@@ -1294,7 +1293,7 @@ func (r *run) completeAllBackgroundUploads(t *testing.T, f fs.Fs, lastRemote str
func (r *run) retryBlock(block func() error, maxRetries int, rate time.Duration) error {
var err error
for i := 0; i < maxRetries; i++ {
for range maxRetries {
err = block()
if err == nil {
return nil


@@ -17,7 +17,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata", "ListP"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache


@@ -162,7 +162,7 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
randInstance := rand.New(rand.NewSource(time.Now().Unix()))
lastFile := ""
for i := 0; i < totalFiles; i++ {
for i := range totalFiles {
size := int64(randInstance.Intn(maxSize-minSize) + minSize)
testReader := runInstance.randomReader(t, size)
remote := "test/" + strconv.Itoa(i) + ".bin"


@@ -182,7 +182,7 @@ func (r *Handle) queueOffset(offset int64) {
}
}
for i := 0; i < r.workers; i++ {
for i := range r.workers {
o := r.preloadOffset + int64(r.cacheFs().opt.ChunkSize)*int64(i)
if o < 0 || o >= r.cachedObject.Size() {
continue
@@ -208,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@@ -222,7 +222,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
if !found {
// we're going to give the workers a chance to pick up the chunk
// and retry a couple of times
for i := 0; i < r.cacheFs().opt.ReadRetries*8; i++ {
for i := range r.cacheFs().opt.ReadRetries * 8 {
data, err = r.storage().GetChunk(r.cachedObject, chunkStart)
if err == nil {
found = true
@@ -327,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@@ -415,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)


@@ -209,7 +209,7 @@ func (p *plexConnector) authenticate() error {
if err != nil {
return err
}
var data map[string]interface{}
var data map[string]any
err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil {
return fmt.Errorf("failed to obtain token: %w", err)
@@ -273,11 +273,11 @@ func (p *plexConnector) isPlaying(co *Object) bool {
}
// adapted from: https://stackoverflow.com/a/28878037 (credit)
func get(m interface{}, path ...interface{}) (interface{}, bool) {
func get(m any, path ...any) (any, bool) {
for _, p := range path {
switch idx := p.(type) {
case string:
if mm, ok := m.(map[string]interface{}); ok {
if mm, ok := m.(map[string]any); ok {
if val, found := mm[idx]; found {
m = val
continue
@@ -285,7 +285,7 @@ func get(m interface{}, path ...interface{}) (interface{}, bool) {
}
return nil, false
case int:
if mm, ok := m.([]interface{}); ok {
if mm, ok := m.([]any); ok {
if len(mm) > idx {
m = mm[idx]
continue


@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk"
bolt "go.etcd.io/bbolt"
"go.etcd.io/bbolt/errors"
)
// Constants
@@ -597,7 +598,7 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
})
if err != nil {
if err == bolt.ErrDatabaseNotOpen {
if err == errors.ErrDatabaseNotOpen {
// we're likely a late janitor and we need to end quietly as there's no guarantee of what exists anymore
return
}
@@ -606,16 +607,16 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
}
// Stats returns a go map with the stats key values
func (b *Persistent) Stats() (map[string]map[string]interface{}, error) {
r := make(map[string]map[string]interface{})
r["data"] = make(map[string]interface{})
func (b *Persistent) Stats() (map[string]map[string]any, error) {
r := make(map[string]map[string]any)
r["data"] = make(map[string]any)
r["data"]["oldest-ts"] = time.Now()
r["data"]["oldest-file"] = ""
r["data"]["newest-ts"] = time.Now()
r["data"]["newest-file"] = ""
r["data"]["total-chunks"] = 0
r["data"]["total-size"] = int64(0)
r["files"] = make(map[string]interface{})
r["files"] = make(map[string]any)
r["files"]["oldest-ts"] = time.Now()
r["files"]["oldest-name"] = ""
r["files"]["newest-ts"] = time.Now()


@@ -356,7 +356,8 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker to skip files
f.features.ListR = nil // Recursive listing may cause chunker to skip files
f.features.ListP = nil // ListP not supported yet
return f, err
}
@@ -632,7 +633,7 @@ func (f *Fs) parseChunkName(filePath string) (parentPath string, chunkNo int, ct
// forbidChunk prints error message or raises error if file is chunk.
// First argument sets log prefix, use `false` to suppress message.
func (f *Fs) forbidChunk(o interface{}, filePath string) error {
func (f *Fs) forbidChunk(o any, filePath string) error {
if parentPath, _, _, _ := f.parseChunkName(filePath); parentPath != "" {
if f.opt.FailHard {
return fmt.Errorf("chunk overlap with %q", parentPath)
@@ -680,7 +681,7 @@ func (f *Fs) newXactID(ctx context.Context, filePath string) (xactID string, err
circleSec := unixSec % closestPrimeZzzzSeconds
first4chars := strconv.FormatInt(circleSec, 36)
for tries := 0; tries < maxTransactionProbes; tries++ {
for range maxTransactionProbes {
f.xactIDMutex.Lock()
randomness := f.xactIDRand.Int63n(maxTwoBase36Digits + 1)
f.xactIDMutex.Unlock()
@@ -987,7 +988,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.
@@ -1189,10 +1190,7 @@ func (f *Fs) put(
}
tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactID)
size := c.sizeLeft
if size > c.chunkSize {
size = c.chunkSize
}
size := min(c.sizeLeft, c.chunkSize)
savedReadCount := c.readCount
// If a single chunk is expected, avoid the extra rename operation
@@ -1477,10 +1475,7 @@ func (c *chunkingReader) dummyRead(in io.Reader, size int64) error {
const bufLen = 1048576 // 1 MiB
buf := make([]byte, bufLen)
for size > 0 {
n := size
if n > bufLen {
n = bufLen
}
n := min(size, bufLen)
if _, err := io.ReadFull(in, buf[0:n]); err != nil {
return err
}
@@ -2480,7 +2475,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
if len(data) > maxMetadataSizeWritten {
return nil, false, ErrMetaTooBig
}
if data == nil || len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
if len(data) < 2 || data[0] != '{' || data[len(data)-1] != '}' {
return nil, false, errors.New("invalid json")
}
var metadata metaSimpleJSON


@@ -40,7 +40,7 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
})
}
type settings map[string]interface{}
type settings map[string]any
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, path string, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash


@@ -46,6 +46,7 @@ func TestIntegration(t *testing.T) {
"DirCacheFlush",
"UserInfo",
"Disconnect",
"ListP",
},
}
if *fstest.RemoteName == "" {


@@ -0,0 +1,48 @@
// Package api has type definitions for cloudinary
package api
import (
"fmt"
)
// CloudinaryEncoder extends the built-in encoder
type CloudinaryEncoder interface {
// FromStandardPath takes a / separated path in Standard encoding
// and converts it to a / separated path in this encoding.
FromStandardPath(string) string
// FromStandardName takes name in Standard encoding and converts
// it in this encoding.
FromStandardName(string) string
// ToStandardPath takes a / separated path in this encoding
// and converts it to a / separated path in Standard encoding.
ToStandardPath(string) string
// ToStandardName takes name in this encoding and converts
// it in Standard encoding.
ToStandardName(string, string) string
// Encoded root of the remote (as passed into NewFs)
FromStandardFullPath(string) string
}
// UpdateOptions was created to pass options from Update to Put
type UpdateOptions struct {
PublicID string
ResourceType string
DeliveryType string
AssetFolder string
DisplayName string
}
// Header formats the option as a string
func (o *UpdateOptions) Header() (string, string) {
return "UpdateOption", fmt.Sprintf("%s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
// Mandatory returns whether the option must be parsed or can be ignored
func (o *UpdateOptions) Mandatory() bool {
return false
}
// String formats the option into human-readable form
func (o *UpdateOptions) String() string {
return fmt.Sprintf("Fully qualified Public ID: %s/%s/%s", o.ResourceType, o.DeliveryType, o.PublicID)
}
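Reviewer note: Header, Mandatory and String are exactly the methods of rclone's fs.OpenOption interface, which is what lets Update (later in this diff) smuggle these settings through Put's options slice. A one-line compile-time check, a sketch assuming the fs package is imported:

var _ fs.OpenOption = (*UpdateOptions)(nil) // asserts the interface is satisfied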


@@ -0,0 +1,754 @@
// Package cloudinary provides an interface to the Cloudinary DAM
package cloudinary
import (
"context"
"encoding/hex"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"slices"
"strconv"
"strings"
"time"
"github.com/cloudinary/cloudinary-go/v2"
SDKApi "github.com/cloudinary/cloudinary-go/v2/api"
"github.com/cloudinary/cloudinary-go/v2/api/admin"
"github.com/cloudinary/cloudinary-go/v2/api/admin/search"
"github.com/cloudinary/cloudinary-go/v2/api/uploader"
"github.com/rclone/rclone/backend/cloudinary/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"github.com/zeebo/blake3"
)
// Cloudinary shouldn't have a trailing dot if there is no path
func cldPathDir(somePath string) string {
if somePath == "" || somePath == "." {
return somePath
}
dir := path.Dir(somePath)
if dir == "." {
return ""
}
return dir
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "cloudinary",
Description: "Cloudinary",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "cloud_name",
Help: "Cloudinary Environment Name",
Required: true,
Sensitive: true,
},
{
Name: "api_key",
Help: "Cloudinary API Key",
Required: true,
Sensitive: true,
},
{
Name: "api_secret",
Help: "Cloudinary API Secret",
Required: true,
Sensitive: true,
},
{
Name: "upload_prefix",
Help: "Specify the API endpoint for environments out of the US",
},
{
Name: "upload_preset",
Help: "Upload Preset to select asset manipulation on upload",
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base | // Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
encoder.EncodeSlash |
encoder.EncodeLtGt |
encoder.EncodeDoubleQuote |
encoder.EncodeQuestion |
encoder.EncodeAsterisk |
encoder.EncodePipe |
encoder.EncodeHash |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeDel |
encoder.EncodeCtl |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
},
{
Name: "eventually_consistent_delay",
Default: fs.Duration(0),
Advanced: true,
Help: "Wait N seconds for eventual consistency of the databases that support the backend operation",
},
{
Name: "adjust_media_files_extensions",
Default: true,
Advanced: true,
Help: "Cloudinary handles media formats as a file attribute and strips it from the name, which is unlike most other file systems",
},
{
Name: "media_extensions",
Default: []string{
"3ds", "3g2", "3gp", "ai", "arw", "avi", "avif", "bmp", "bw",
"cr2", "cr3", "djvu", "dng", "eps3", "fbx", "flif", "flv", "gif",
"glb", "gltf", "hdp", "heic", "heif", "ico", "indd", "jp2", "jpe",
"jpeg", "jpg", "jxl", "jxr", "m2ts", "mov", "mp4", "mpeg", "mts",
"mxf", "obj", "ogv", "pdf", "ply", "png", "psd", "svg", "tga",
"tif", "tiff", "ts", "u3ma", "usdz", "wdp", "webm", "webp", "wmv"},
Advanced: true,
Help: "Cloudinary supported media extensions",
},
},
})
}
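For reference, a hypothetical rclone.conf stanza exercising the options registered above (all values are placeholders, not real credentials):

[cloudinary]
type = cloudinary
cloud_name = mycloud
api_key = 0123456789
api_secret = XXXXXXXX
eventually_consistent_delay = 7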
// Options defines the configuration for this backend
type Options struct {
CloudName string `config:"cloud_name"`
APIKey string `config:"api_key"`
APISecret string `config:"api_secret"`
UploadPrefix string `config:"upload_prefix"`
UploadPreset string `config:"upload_preset"`
Enc encoder.MultiEncoder `config:"encoding"`
EventuallyConsistentDelay fs.Duration `config:"eventually_consistent_delay"`
MediaExtensions []string `config:"media_extensions"`
AdjustMediaFilesExtensions bool `config:"adjust_media_files_extensions"`
}
// Fs represents a remote cloudinary server
type Fs struct {
name string
root string
opt Options
features *fs.Features
pacer *fs.Pacer
srv *rest.Client // For downloading assets via the Cloudinary CDN
cld *cloudinary.Cloudinary // API calls are going through the Cloudinary SDK
lastCRUD time.Time
}
// Object describes a cloudinary object
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
url string
md5sum string
publicID string
resourceType string
deliveryType string
}
// NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Initialize the Cloudinary client
cld, err := cloudinary.NewFromParams(opt.CloudName, opt.APIKey, opt.APISecret)
if err != nil {
return nil, fmt.Errorf("failed to create Cloudinary client: %w", err)
}
cld.Admin.Client = *fshttp.NewClient(ctx)
cld.Upload.Client = *fshttp.NewClient(ctx)
if opt.UploadPrefix != "" {
cld.Config.API.UploadPrefix = opt.UploadPrefix
}
client := fshttp.NewClient(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
cld: cld,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(1000), pacer.MaxSleep(10000), pacer.DecayConstant(2))),
srv: rest.NewClient(client),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
if root != "" {
// Check to see if the root is actually an existing file
remote := path.Base(root)
f.root = cldPathDir(root)
_, err := f.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound || errors.Is(err, fs.ErrorNotAFile) {
// File doesn't exist so return the previous root
f.root = root
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// ------------------------------------------------------------
// FromStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.FromStandardPath(s), "&", "\uFF06")
}
// FromStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) FromStandardName(s string) string {
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(s)
ext := ""
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
s = strings.TrimSuffix(parsedURL.Path, ext)
}
}
}
return strings.ReplaceAll(f.opt.Enc.FromStandardName(s), "&", "\uFF06")
}
// ToStandardPath implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardPath(s string) string {
return strings.ReplaceAll(f.opt.Enc.ToStandardPath(s), "\uFF06", "&")
}
// ToStandardName implementation of the api.CloudinaryEncoder
func (f *Fs) ToStandardName(s string, assetURL string) string {
ext := ""
if f.opt.AdjustMediaFilesExtensions {
parsedURL, err := url.Parse(assetURL)
if err != nil {
fs.Logf(nil, "Error parsing URL: %v", err)
} else {
ext = path.Ext(parsedURL.Path)
if !slices.Contains(f.opt.MediaExtensions, strings.ToLower(strings.TrimPrefix(ext, "."))) {
ext = ""
}
}
}
return strings.ReplaceAll(f.opt.Enc.ToStandardName(s), "\uFF06", "&") + ext
}
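A sketch of the extension round trip these two encoders implement, using assumed example values: uploading cat.jpg strips the media extension for Cloudinary, and listing re-appends it from the asset's URL (assuming the configured encoder passes the name through unchanged):

name := f.FromStandardName("cat.jpg") // -> "cat" (".jpg" is in media_extensions)
std := f.ToStandardName(name, "https://res.cloudinary.com/demo/image/upload/cat.jpg")
// std == "cat.jpg" again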
// FromStandardFullPath encodes a full path to Cloudinary standard
func (f *Fs) FromStandardFullPath(dir string) string {
return path.Join(api.CloudinaryEncoder.FromStandardPath(f, f.root), api.CloudinaryEncoder.FromStandardPath(f, dir))
}
// ToAssetFolderAPI encodes folders as expected by the Cloudinary SDK
func (f *Fs) ToAssetFolderAPI(dir string) string {
return strings.ReplaceAll(dir, "%", "%25")
}
// ToDisplayNameElastic encodes a special case of elasticsearch
func (f *Fs) ToDisplayNameElastic(dir string) string {
return strings.ReplaceAll(dir, "!", "\\!")
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// WaitEventuallyConsistent waits till the FS is eventually consistent
func (f *Fs) WaitEventuallyConsistent() {
if f.opt.EventuallyConsistentDelay == fs.Duration(0) {
return
}
delay := time.Duration(f.opt.EventuallyConsistentDelay)
timeSinceLastCRUD := time.Since(f.lastCRUD)
if timeSinceLastCRUD < delay {
time.Sleep(delay - timeSinceLastCRUD)
}
}
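A sketch of the resulting throttle, assuming eventually_consistent_delay is 7 (interpreted as seconds, as the integration test later in this diff sets it):

// hypothetical timeline: a query issued 2s after a write sleeps ~5s more
f.lastCRUD = time.Now()      // a write has just completed
time.Sleep(2 * time.Second)  // caller does other work
f.WaitEventuallyConsistent() // blocks for the remaining ~5s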
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("Cloudinary root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// List the objects and directories in dir into entries
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
remotePrefix := f.FromStandardFullPath(dir)
if remotePrefix != "" && !strings.HasSuffix(remotePrefix, "/") {
remotePrefix += "/"
}
var entries fs.DirEntries
dirs := make(map[string]struct{})
nextCursor := ""
f.WaitEventuallyConsistent()
for {
// Use the folders API to list folders.
folderParams := admin.SubFoldersParams{
Folder: f.ToAssetFolderAPI(remotePrefix),
MaxResults: 500,
}
if nextCursor != "" {
folderParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return nil, fmt.Errorf("failed to list sub-folders: %w", err)
}
if results.Error.Message != "" {
if strings.HasPrefix(results.Error.Message, "Can't find folder with path") {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("failed to list sub-folders: %s", results.Error.Message)
}
for _, folder := range results.Folders {
relativePath := api.CloudinaryEncoder.ToStandardPath(f, strings.TrimPrefix(folder.Path, remotePrefix))
parts := strings.Split(relativePath, "/")
// It's a directory
dirName := parts[len(parts)-1]
if _, found := dirs[dirName]; !found {
d := fs.NewDir(path.Join(dir, dirName), time.Time{})
entries = append(entries, d)
dirs[dirName] = struct{}{}
}
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
for {
// Use the assets.AssetsByAssetFolder API to list assets
assetsParams := admin.AssetsByAssetFolderParams{
AssetFolder: remotePrefix,
MaxResults: 500,
}
if nextCursor != "" {
assetsParams.NextCursor = nextCursor
}
results, err := f.cld.Admin.AssetsByAssetFolder(ctx, assetsParams)
if err != nil {
return nil, fmt.Errorf("failed to list assets: %w", err)
}
for _, asset := range results.Assets {
remote := path.Join(dir, api.CloudinaryEncoder.ToStandardName(f, asset.DisplayName, asset.SecureURL))
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.CreatedAt,
url: asset.SecureURL,
publicID: asset.PublicID,
resourceType: asset.AssetType,
deliveryType: asset.Type,
}
entries = append(entries, o)
}
// Break if there are no more results
if results.NextCursor == "" {
break
}
nextCursor = results.NextCursor
}
return entries, nil
}
// NewObject finds the Object at remote. If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
searchParams := search.Query{
Expression: fmt.Sprintf("asset_folder:\"%s\" AND display_name:\"%s\"",
f.FromStandardFullPath(cldPathDir(remote)),
f.ToDisplayNameElastic(api.CloudinaryEncoder.FromStandardName(f, path.Base(remote)))),
SortBy: []search.SortByField{{"uploaded_at": "desc"}},
MaxResults: 2,
}
var results *admin.SearchResult
f.WaitEventuallyConsistent()
err := f.pacer.Call(func() (bool, error) {
var err1 error
results, err1 = f.cld.Admin.Search(ctx, searchParams)
if err1 == nil && results.TotalCount != len(results.Assets) {
err1 = errors.New("partial response so waiting for eventual consistency")
}
return shouldRetry(ctx, nil, err1)
})
if err != nil {
return nil, fs.ErrorObjectNotFound
}
if results.TotalCount == 0 || len(results.Assets) == 0 {
return nil, fs.ErrorObjectNotFound
}
asset := results.Assets[0]
o := &Object{
fs: f,
remote: remote,
size: int64(asset.Bytes),
modTime: asset.UploadedAt,
url: asset.SecureURL,
md5sum: asset.Etag,
publicID: asset.PublicID,
resourceType: asset.ResourceType,
deliveryType: asset.Type,
}
return o, nil
}
func (f *Fs) getSuggestedPublicID(assetFolder string, displayName string, modTime time.Time) string {
payload := []byte(path.Join(assetFolder, displayName))
hash := blake3.Sum256(payload)
return hex.EncodeToString(hash[:])
}
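Reviewer note: this derives a deterministic public ID, so re-uploading the same asset folder/display name pair maps to the same ID rather than a random one; note the modTime parameter is not currently part of the hash. A sketch of what it computes:

sum := blake3.Sum256([]byte(path.Join("my-folder", "my-file")))
publicID := hex.EncodeToString(sum[:]) // 64 hex chars, stable across uploads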
// Put uploads content to Cloudinary
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
params := uploader.UploadParams{
UploadPreset: f.opt.UploadPreset,
}
updateObject := false
var modTime time.Time
for _, option := range options {
if updateOptions, ok := option.(*api.UpdateOptions); ok {
if updateOptions.PublicID != "" {
updateObject = true
params.Overwrite = SDKApi.Bool(true)
params.Invalidate = SDKApi.Bool(true)
params.PublicID = updateOptions.PublicID
params.ResourceType = updateOptions.ResourceType
params.Type = SDKApi.DeliveryType(updateOptions.DeliveryType)
params.AssetFolder = updateOptions.AssetFolder
params.DisplayName = updateOptions.DisplayName
modTime = src.ModTime(ctx)
}
}
}
if !updateObject {
params.AssetFolder = f.FromStandardFullPath(cldPathDir(src.Remote()))
params.DisplayName = api.CloudinaryEncoder.FromStandardName(f, path.Base(src.Remote()))
// We want to conform to the unique asset ID of rclone, which is (asset_folder,display_name,last_modified).
// We also want to let customers choose their own public_id when duplicate names are not a crucial use case.
// Upload presets that apply randomness to the public ID would not work well with rclone's duplicate asset support.
params.FilenameOverride = f.getSuggestedPublicID(params.AssetFolder, params.DisplayName, src.ModTime(ctx))
}
uploadResult, err := f.cld.Upload.Upload(ctx, in, params)
f.lastCRUD = time.Now()
if err != nil {
return nil, fmt.Errorf("failed to upload to Cloudinary: %w", err)
}
if !updateObject {
modTime = uploadResult.CreatedAt
}
if uploadResult.Error.Message != "" {
return nil, errors.New(uploadResult.Error.Message)
}
o := &Object{
fs: f,
remote: src.Remote(),
size: int64(uploadResult.Bytes),
modTime: modTime,
url: uploadResult.SecureURL,
md5sum: uploadResult.Etag,
publicID: uploadResult.PublicID,
resourceType: uploadResult.ResourceType,
deliveryType: uploadResult.Type,
}
return o, nil
}
// Precision of the remote
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash sets
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
}
// Mkdir creates empty folders
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
params := admin.CreateFolderParams{Folder: f.ToAssetFolderAPI(f.FromStandardFullPath(dir))}
res, err := f.cld.Admin.CreateFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
return nil
}
// Rmdir deletes empty folders
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Additional check because Cloudinary will delete folders that hold no
// assets, even when they still contain empty sub-folders
folder := f.ToAssetFolderAPI(f.FromStandardFullPath(dir))
folderParams := admin.SubFoldersParams{
Folder: folder,
MaxResults: 1,
}
results, err := f.cld.Admin.SubFolders(ctx, folderParams)
if err != nil {
return err
}
if results.TotalCount > 0 {
return fs.ErrorDirectoryNotEmpty
}
params := admin.DeleteFolderParams{Folder: folder}
res, err := f.cld.Admin.DeleteFolder(ctx, params)
f.lastCRUD = time.Now()
if err != nil {
return err
}
if res.Error.Message != "" {
if strings.HasPrefix(res.Error.Message, "Can't find folder with path") {
return fs.ErrorDirNotFound
}
return errors.New(res.Error.Message)
}
return nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
420, // Too Many Requests (legacy)
429, // Too Many Requests
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
tryAgain := "Try again on "
if idx := strings.Index(err.Error(), tryAgain); idx != -1 {
layout := "2006-01-02 15:04:05 UTC"
dateStr := err.Error()[idx+len(tryAgain) : idx+len(tryAgain)+len(layout)]
timestamp, err2 := time.Parse(layout, dateStr)
if err2 == nil {
return true, fserrors.NewErrorRetryAfter(time.Until(timestamp))
}
}
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
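A sketch of the rate-limit parsing above, using an assumed Cloudinary-style error string; the layout string is time.Parse's reference format with "UTC" matched literally:

msg := `Rate Limit Exceeded. Try again on 2025-01-02 15:04:05 UTC.` // assumed shape
layout := "2006-01-02 15:04:05 UTC"
idx := strings.Index(msg, "Try again on ")
dateStr := msg[idx+len("Try again on ") : idx+len("Try again on ")+len(layout)]
when, err := time.Parse(layout, dateStr) // err == nil; retry is scheduled for `when`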
// ------------------------------------------------------------
// Hash returns the MD5 of an object
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
if ty != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5sum, nil
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// Size of object in bytes
func (o *Object) Size() int64 {
return o.size
}
// Storable returns if this object is storable
func (o *Object) Storable() bool {
return true
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: o.url,
Options: options,
}
var offset int64
var count int64
var key string
var value string
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
key, value = option.Header()
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
key, value = option.Header()
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
if key != "" && value != "" {
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders[key] = value
}
// Make sure that the asset is fully available
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if err == nil {
cl, clErr := strconv.Atoi(resp.Header.Get("content-length"))
if clErr == nil && count == int64(cl) {
return false, nil
}
}
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed download of \"%s\": %w", o.url, err)
}
return resp.Body, err
}
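A usage sketch for the option handling above: a caller can request a ranged read, which Open turns into a Range header and verifies against Content-Length before returning the body:

rc, err := o.Open(ctx, &fs.RangeOption{Start: 0, End: 1023}) // first KiB
if err == nil {
	defer rc.Close()
	// read up to 1024 bytes from rc
}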
// Update the object with the contents of the io.Reader
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
options = append(options, &api.UpdateOptions{
PublicID: o.publicID,
ResourceType: o.resourceType,
DeliveryType: o.deliveryType,
DisplayName: api.CloudinaryEncoder.FromStandardName(o.fs, path.Base(o.Remote())),
AssetFolder: o.fs.FromStandardFullPath(cldPathDir(o.Remote())),
})
updatedObj, err := o.fs.Put(ctx, in, src, options...)
if err != nil {
return err
}
if uo, ok := updatedObj.(*Object); ok {
o.size = uo.size
o.modTime = time.Now() // Skipping uo.modTime because the API returns the create time
o.url = uo.url
o.md5sum = uo.md5sum
o.publicID = uo.publicID
o.resourceType = uo.resourceType
o.deliveryType = uo.deliveryType
}
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := uploader.DestroyParams{
PublicID: o.publicID,
ResourceType: o.resourceType,
Type: o.deliveryType,
}
res, dErr := o.fs.cld.Upload.Destroy(ctx, params)
o.fs.lastCRUD = time.Now()
if dErr != nil {
return dErr
}
if res.Error.Message != "" {
return errors.New(res.Error.Message)
}
if res.Result != "ok" {
return errors.New(res.Result)
}
return nil
}


@@ -0,0 +1,23 @@
// Test Cloudinary filesystem interface
package cloudinary_test
import (
"testing"
"github.com/rclone/rclone/backend/cloudinary"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestCloudinary"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*cloudinary.Object)(nil),
SkipInvalidUTF8: true,
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "eventually_consistent_delay", Value: "7"},
},
})
}


@@ -20,6 +20,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"golang.org/x/sync/errgroup"
@@ -265,6 +266,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// Enable ListP always
features.ListP = f.ListP
// Enable Purge when any upstreams support it
if features.Purge == nil {
for _, u := range f.upstreams {
@@ -809,24 +813,52 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
entries := make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
entries = append(entries, d)
}
return entries, nil
return callback(entries)
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
return err
}
entries, err = u.f.List(ctx, uRemote)
if err != nil {
return nil, err
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := u.wrapEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return u.wrapEntries(ctx, entries)
listP := u.f.Features().ListP
if listP == nil {
entries, err := u.f.List(ctx, uRemote)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, uRemote, wrappedCallback)
}
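A consumer-side sketch of the new ListP contract (names taken from this diff): entries arrive in tranches via the callback, and returning an error aborts the listing:

err := f.ListP(ctx, "some/dir", func(entries fs.DirEntries) error {
	for _, entry := range entries {
		fmt.Println(entry.Remote())
	}
	return nil // a non-nil error would stop the listing immediately
})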
// ListR lists the objects and directories of the Fs starting


@@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
@@ -38,6 +39,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@@ -207,6 +209,8 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
if !operations.CanServerSideMove(wrappedFs) {
f.features.Disable("PutStream")
}
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -351,11 +355,39 @@ func (f *Fs) processEntries(entries fs.DirEntries) (newEntries fs.DirEntries, er
// found.
// List entries and process them
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, dir)
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.processEntries(entries)
if err != nil {
return err
}
return callback(entries)
}
return f.processEntries(entries)
listP := f.Fs.Features().ListP
if listP == nil {
entries, err := f.Fs.List(ctx, dir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting
@@ -1362,7 +1394,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {


@@ -192,7 +192,7 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
dirNameEncrypt: dirNameEncrypt,
encryptedSuffix: ".bin",
}
c.buffers.New = func() interface{} {
c.buffers.New = func() any {
return new([blockSize]byte)
}
err := c.Key(password, salt)
@@ -329,14 +329,14 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
_, _ = result.WriteString(strconv.Itoa(dir) + ".")
// but we'll augment it with the nameKey for real calculation
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -418,7 +418,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
}
// add the nameKey to get the real rotate distance
for i := 0; i < len(c.nameKey); i++ {
for i := range len(c.nameKey) {
dir += int(c.nameKey[i])
}
@@ -450,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@@ -664,7 +664,7 @@ func (n *nonce) increment() {
// add a uint64 to the nonce
func (n *nonce) add(x uint64) {
carry := uint16(0)
for i := 0; i < 8; i++ {
for i := range 8 {
digit := (*n)[i]
xDigit := byte(x)
x >>= 8
@@ -888,7 +888,7 @@ func (fh *decrypter) fillBuffer() (err error) {
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
(*fh.buf)[i] = 0
fh.buf[i] = 0
}
}
fh.bufIndex = 0


@@ -1307,10 +1307,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
open := func(ctx context.Context, underlyingOffset, underlyingLimit int64) (io.ReadCloser, error) {
end := len(ciphertext)
if underlyingLimit >= 0 {
end = int(underlyingOffset + underlyingLimit)
if end > len(ciphertext) {
end = len(ciphertext)
}
end = min(int(underlyingOffset+underlyingLimit), len(ciphertext))
}
reader = io.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end]))
return reader, nil
@@ -1490,7 +1487,7 @@ func TestDecrypterRead(t *testing.T) {
assert.NoError(t, err)
// Test truncating the file at each possible point
for i := 0; i < len(file16)-1; i++ {
for i := range len(file16) - 1 {
what := fmt.Sprintf("truncating to %d/%d", i, len(file16))
cd := newCloseDetector(bytes.NewBuffer(file16[:i]))
fh, err := c.newDecrypter(cd)


@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
)
// Globals
@@ -293,6 +294,9 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// Enable ListP always
f.features.ListP = f.ListP
return f, err
}
@@ -416,11 +420,40 @@ func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntr
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
entries, err = f.Fs.List(ctx, f.cipher.EncryptDirName(dir))
if err != nil {
return nil, err
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.encryptEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
return f.encryptEntries(ctx, entries)
listP := f.Fs.Features().ListP
encryptedDir := f.cipher.EncryptDirName(dir)
if listP == nil {
entries, err := f.Fs.List(ctx, encryptedDir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, encryptedDir, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting
@@ -924,7 +957,7 @@ Usage Example:
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "decode":
out := make([]string, 0, len(arg))


@@ -25,7 +25,7 @@ func Pad(n int, buf []byte) []byte {
}
length := len(buf)
padding := n - (length % n)
for i := 0; i < padding; i++ {
for range padding {
buf = append(buf, byte(padding))
}
if (len(buf) % n) != 0 {
@@ -54,7 +54,7 @@ func Unpad(n int, buf []byte) ([]byte, error) {
if padding == 0 {
return nil, ErrorPaddingTooShort
}
for i := 0; i < padding; i++ {
for i := range padding {
if buf[length-1-i] != byte(padding) {
return nil, ErrorPaddingNotAllTheSame
}
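A minimal PKCS#7 illustration of this package's Pad/Unpad, assuming a 16-byte block size:

buf := Pad(16, []byte("hello")) // appends eleven 0x0b bytes -> len(buf) == 16
out, err := Unpad(16, buf)      // out == []byte("hello"), err == nil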


@@ -18,6 +18,7 @@ import (
"net/http"
"os"
"path"
"slices"
"sort"
"strconv"
"strings"
@@ -37,8 +38,8 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -80,9 +81,10 @@ const (
// Globals
var (
// Description of how to auth for this app
driveConfig = &oauth2.Config{
driveConfig = &oauthutil.Config{
Scopes: []string{scopePrefix + "drive"},
Endpoint: google.Endpoint,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -120,6 +122,7 @@ var (
"text/html": ".html",
"text/plain": ".txt",
"text/tab-separated-values": ".tsv",
"text/markdown": ".md",
}
_mimeTypeToExtensionLinks = map[string]string{
"application/x-link-desktop": ".desktop",
@@ -197,13 +200,7 @@ func driveScopes(scopesString string) (scopes []string) {
// Returns true if one of the scopes was "drive.appfolder"
func driveScopesContainsAppFolder(scopes []string) bool {
for _, scope := range scopes {
if scope == scopePrefix+"drive.appfolder" {
return true
}
}
return false
return slices.Contains(scopes, scopePrefix+"drive.appfolder")
}
func driveOAuthOptions() []fs.Option {
@@ -957,12 +954,7 @@ func parseDrivePath(path string) (root string, err error) {
type listFn func(*drive.File) bool
func containsString(slice []string, s string) bool {
for _, e := range slice {
if e == s {
return true
}
}
return false
return slices.Contains(slice, s)
}
// getFile returns drive.File for the ID passed and fields passed in
@@ -1151,13 +1143,7 @@ OUTER:
// Check the case of items is correct since
// the `=` operator is case insensitive.
if title != "" && title != item.Name {
found := false
for _, stem := range stems {
if stem == item.Name {
found = true
break
}
}
found := slices.Contains(stems, item.Name)
if !found {
continue
}
@@ -1210,6 +1196,7 @@ func fixMimeType(mimeTypeIn string) string {
}
return mimeTypeOut
}
func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
out = make(map[string][]string, len(in))
for k, v := range in {
@@ -1220,9 +1207,11 @@ func fixMimeTypeMap(in map[string][]string) (out map[string][]string) {
}
return out
}
func isInternalMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/vnd.google-apps.")
}
func isLinkMimeType(mimeType string) bool {
return strings.HasPrefix(mimeType, "application/x-link-")
}
@@ -1557,13 +1546,10 @@ func (f *Fs) getFileFields(ctx context.Context) (fields googleapi.Field) {
func (f *Fs) newRegularObject(ctx context.Context, remote string, info *drive.File) (obj fs.Object, err error) {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
info.Sha1Checksum = ""
info.Sha256Checksum = ""
break
}
if slices.Contains(info.Spaces, "photos") {
info.Md5Checksum = ""
info.Sha1Checksum = ""
info.Sha256Checksum = ""
}
}
o := &Object{
@@ -1655,7 +1641,8 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.F
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo(
ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
extension, exportName, exportMimeType string, isDocument bool,
) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing.
@@ -1846,6 +1833,7 @@ func linkTemplate(mt string) *template.Template {
})
return _linkTemplates[mt]
}
func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() {
var about *drive.About
@@ -1891,7 +1879,8 @@ func (f *Fs) importFormats(ctx context.Context) map[string][]string {
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool) {
extension, mimeType string, isDocument bool,
) {
exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument {
for _, _extension := range f.exportExtensions {
@@ -2200,7 +2189,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
wg := sync.WaitGroup{}
in := make(chan listREntry, listRInputBuffer)
out := make(chan error, f.ci.Checkers)
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
overflow := []listREntry{}
listed := 0
@@ -2219,7 +2208,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
wg.Done()
}
}
@@ -2238,7 +2227,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
wg.Add(1)
in <- listREntry{directoryID, dir}
for i := 0; i < f.ci.Checkers; i++ {
for range f.ci.Checkers {
go f.listRRunner(ctx, &wg, in, out, cb, sendJob)
}
go func() {
@@ -2247,11 +2236,8 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// if the input channel overflowed add the collected entries to the channel now
for len(overflow) > 0 {
mu.Lock()
l := len(overflow)
// only fill half of the channel to prevent entries being put into overflow again
if l > listRInputBuffer/2 {
l = listRInputBuffer / 2
}
l := min(len(overflow), listRInputBuffer/2)
wg.Add(l)
for _, d := range overflow[:l] {
in <- d
@@ -2271,7 +2257,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
mu.Unlock()
}()
// wait until all the workers have finished
for i := 0; i < f.ci.Checkers; i++ {
for range f.ci.Checkers {
e := <-out
mu.Lock()
// if one worker returns an error early, close the input so all other workers exit
@@ -2687,7 +2673,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if shortcutID != "" {
return f.delete(ctx, shortcutID, f.opt.UseTrash)
}
var trashedFiles = false
trashedFiles := false
if check {
found, err := f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, true, func(item *drive.File) bool {
if !item.Trashed {
@@ -2924,7 +2910,6 @@ func (f *Fs) CleanUp(ctx context.Context) error {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
}
@@ -3185,6 +3170,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}()
}
func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
@@ -3523,14 +3509,14 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
return f.unTrash(ctx, dir, directoryID, true)
}
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
// copy or move file with id to dest
func (f *Fs) copyOrMoveID(ctx context.Context, operation string, id, dest string) (err error) {
info, err := f.getFile(ctx, id, f.getFileFields(ctx))
if err != nil {
return fmt.Errorf("couldn't find id: %w", err)
}
if info.MimeType == driveFolderType {
return fmt.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest)
return fmt.Errorf("can't %s directory use: rclone %s --drive-root-folder-id %s %s %s", operation, operation, id, fs.ConfigString(f), dest)
}
info.Name = f.opt.Enc.ToStandardName(info.Name)
o, err := f.newObjectWithInfo(ctx, info.Name, info)
@@ -3551,14 +3537,21 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
if err != nil {
return err
}
_, err = operations.Copy(ctx, dstFs, nil, destLeaf, o)
if err != nil {
return fmt.Errorf("copy failed: %w", err)
var opErr error
if operation == "moveid" {
_, opErr = operations.Move(ctx, dstFs, nil, destLeaf, o)
} else {
_, opErr = operations.Copy(ctx, dstFs, nil, destLeaf, o)
}
if opErr != nil {
return fmt.Errorf("%s failed: %w", operation, opErr)
}
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
// Run the drive query calling fn on each entry found
func (f *Fs) queryFn(ctx context.Context, query string, fn func(*drive.File)) (err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
@@ -3577,10 +3570,7 @@ func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, er
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
@@ -3588,20 +3578,66 @@ func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, er
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
return fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
for _, item := range files.Files {
fn(item)
}
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return nil
}
// Run the drive query returning the entries found
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
var results []*drive.File
err = f.queryFn(ctx, query, func(item *drive.File) {
results = append(results, item)
})
if err != nil {
return nil, err
}
return results, nil
}
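A hypothetical invocation of the wrapper above via the backend command (the query string uses Google Drive's search grammar):

rclone backend query drive: "name contains 'report' and trashed=false"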
// Rescue, list or delete orphaned files
func (f *Fs) rescue(ctx context.Context, dirID string, delete bool) (err error) {
return f.queryFn(ctx, "'me' in owners and trashed=false", func(item *drive.File) {
if len(item.Parents) != 0 {
return
}
// Have found an orphaned entry
if delete {
fs.Infof(item.Name, "Deleting orphan %q into trash", item.Id)
err = f.delete(ctx, item.Id, true)
if err != nil {
fs.Errorf(item.Name, "Failed to delete orphan %q: %v", item.Id, err)
}
} else if dirID == "" {
operations.SyncPrintf("%q, %q\n", item.Name, item.Id)
} else {
fs.Infof(item.Name, "Rescuing orphan %q", item.Id)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.Update(item.Id, nil).
AddParents(dirID).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(item.Name, "Failed to rescue orphan %q: %v", item.Id, err)
}
}
})
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -3746,6 +3782,28 @@ attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
}, {
Name: "moveid",
Short: "Move files by ID",
Long: `This command moves files by ID
Usage:
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
It moves the drive file with ID given to the path (an rclone path which
will be passed internally to rclone moveto).
The path should end with a / to indicate that the file should be moved
as named into this directory. If it doesn't end with a / then the last path
component will be used as the file name.
If the destination is a drive backend then server-side moving will be
attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
`,
}, {
Name: "exportformats",
Short: "Dump the export formats for debug purposes",
@@ -3793,6 +3851,37 @@ The result is a JSON array of matches, for example:
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}, {
Name: "rescue",
Short: "Rescue or delete any orphaned files",
Long: `This command rescues or deletes any orphaned files or directories.
Sometimes files can get orphaned in Google Drive. This means that they
are no longer in any folder in Google Drive.
This command finds those files and either rescues them to a directory
you specify or deletes them.
Usage:
This can be used in 3 ways.
First, list all orphaned files
rclone backend rescue drive:
Second, rescue all orphaned files to the directory indicated
rclone backend rescue drive: "relative/path/to/rescue/directory"
e.g. to rescue all orphans to a directory called "Orphans" in the top level
rclone backend rescue drive: Orphans
Third, delete all orphaned files to the trash
rclone backend rescue drive: -o delete
`,
}}
// Command the backend to run a named command
@@ -3804,7 +3893,7 @@ The result is a JSON array of matches, for example:
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "get":
out := make(map[string]string)
@@ -3893,16 +3982,16 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
dir = arg[0]
}
return f.unTrashDir(ctx, dir, true)
case "copyid":
case "copyid", "moveid":
if len(arg)%2 != 0 {
return nil, errors.New("need an even number of arguments")
}
for len(arg) > 0 {
id, dest := arg[0], arg[1]
arg = arg[2:]
err = f.copyID(ctx, id, dest)
err = f.copyOrMoveID(ctx, name, id, dest)
if err != nil {
return nil, fmt.Errorf("failed copying %q to %q: %w", id, dest, err)
return nil, fmt.Errorf("failed %s %q to %q: %w", name, id, dest, err)
}
}
return nil, nil
@@ -3913,14 +4002,29 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
results, err := f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
return nil, errors.New("need a query argument")
case "rescue":
dirID := ""
_, delete := opt["delete"]
if len(arg) == 0 {
// no arguments - list only
} else if !delete && len(arg) == 1 {
dir := arg[0]
dirID, err = f.dirCache.FindDir(ctx, dir, true)
if err != nil {
return nil, fmt.Errorf("failed to find or create rescue directory %q: %w", dir, err)
}
fs.Infof(f, "Rescuing orphans into %q", dir)
} else {
return nil, errors.New("syntax error: need 0 or 1 args or -o delete")
}
return nil, f.rescue(ctx, dirID, delete)
default:
return nil, fs.ErrorCommandNotFound
}
@@ -3964,8 +4068,9 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
}
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return "", nil
@@ -3978,7 +4083,8 @@ func (o *baseObject) Size() int64 {
// getRemoteInfoWithExport returns a drive.File and the export settings for the remote
func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error) {
info *drive.File, extension, exportName, exportMimeType string, isDocument bool, err error,
) {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -4191,12 +4297,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
return o.baseObject.open(ctx, o.url, options...)
}
func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Update the size with what we are reading as it can change from
// the HEAD in the listing to this GET. This stops rclone marking
// the transfer as corrupted.
var offset, end int64 = 0, -1
var newOptions = options[:0]
newOptions := options[:0]
for _, o := range options {
// Note that Range requests don't work on Google docs:
// https://developers.google.com/drive/v3/web/manage-downloads#partial_download
@@ -4223,9 +4330,10 @@ func (o *documentObject) Open(ctx context.Context, options ...fs.OpenOption) (in
}
return
}
func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
var data = o.content
data := o.content
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
@@ -4250,7 +4358,8 @@ func (o *linkObject) Open(ctx context.Context, options ...fs.OpenOption) (in io.
}
func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadMimeType string, in io.Reader,
src fs.ObjectInfo) (info *drive.File, err error) {
src fs.ObjectInfo,
) (info *drive.File, err error) {
// Make the API request to upload metadata and file data.
size := src.Size()
if size >= 0 && size < int64(o.fs.opt.UploadCutoff) {
@@ -4328,6 +4437,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return nil
}
func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
srcMimeType := fs.MimeType(ctx, src)
importMimeType := ""
@@ -4423,6 +4533,7 @@ func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err er
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}


@@ -95,7 +95,7 @@ func TestInternalParseExtensions(t *testing.T) {
wantErr error
}{
{"doc", []string{".doc"}, nil},
{" docx ,XLSX, pptx,svg", []string{".docx", ".xlsx", ".pptx", ".svg"}, nil},
{" docx ,XLSX, pptx,svg,md", []string{".docx", ".xlsx", ".pptx", ".svg", ".md"}, nil},
{"docx,svg,Docx", []string{".docx", ".svg"}, nil},
{"docx,potato,docx", []string{".docx"}, errors.New(`couldn't find MIME type for extension ".potato"`)},
} {
@@ -479,8 +479,8 @@ func (f *Fs) InternalTestUnTrash(t *testing.T) {
require.NoError(t, f.Purge(ctx, "trashDir"))
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyID
func (f *Fs) InternalTestCopyID(t *testing.T) {
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyOrMoveID
func (f *Fs) InternalTestCopyOrMoveID(t *testing.T) {
ctx := context.Background()
obj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
@@ -498,7 +498,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
}
t.Run("BadID", func(t *testing.T) {
err = f.copyID(ctx, "ID-NOT-FOUND", dir+"/")
err = f.copyOrMoveID(ctx, "moveid", "ID-NOT-FOUND", dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "couldn't find id")
})
@@ -506,19 +506,31 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
t.Run("Directory", func(t *testing.T) {
rootID, err := f.dirCache.RootID(ctx, false)
require.NoError(t, err)
err = f.copyID(ctx, rootID, dir+"/")
err = f.copyOrMoveID(ctx, "moveid", rootID, dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "can't copy directory")
assert.Contains(t, err.Error(), "can't moveid directory")
})
t.Run("WithoutDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/")
t.Run("MoveWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("WithDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/potato.txt")
t.Run("CopyWithoutDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("MoveWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "moveid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
t.Run("CopyWithDestName", func(t *testing.T) {
err = f.copyOrMoveID(ctx, "copyid", o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
@@ -566,7 +578,7 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
@@ -647,7 +659,7 @@ func (f *Fs) InternalTest(t *testing.T) {
})
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("CopyOrMoveID", f.InternalTestCopyOrMoveID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)


@@ -4,6 +4,7 @@ import (
"context"
"encoding/json"
"fmt"
"maps"
"strconv"
"strings"
"sync"
@@ -324,9 +325,7 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
metadata := make(fs.Metadata, 16)
// Dump user metadata first as it overrides system metadata
for k, v := range info.Properties {
metadata[k] = v
}
maps.Copy(metadata, info.Properties)
// System metadata
metadata["copy-requires-writer-permission"] = fmt.Sprint(info.CopyRequiresWriterPermission)


@@ -177,10 +177,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
if start >= rx.ContentLength {
break
}
reqSize = rx.ContentLength - start
if reqSize >= int64(rx.f.opt.ChunkSize) {
reqSize = int64(rx.f.opt.ChunkSize)
}
reqSize = min(rx.ContentLength-start, int64(rx.f.opt.ChunkSize))
chunk = readers.NewRepeatableLimitReaderBuffer(rx.Media, buf, reqSize)
} else {
// If size unknown read into buffer
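The min rewrite above keeps the old clamping behaviour. A minimal self-contained sketch of the same chunk-size arithmetic (hypothetical helper; uses Go 1.21's builtin min, as the change itself does):

package main

import "fmt"

// chunkSizes mirrors the loop above: every request carries
// min(remaining, chunkSize) bytes until the content is exhausted.
func chunkSizes(contentLength, chunkSize int64) (sizes []int64) {
	for start := int64(0); start < contentLength; {
		n := min(contentLength-start, chunkSize)
		sizes = append(sizes, n)
		start += n
	}
	return sizes
}

func main() {
	fmt.Println(chunkSizes(10, 4)) // [4 4 2]
}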


@@ -11,7 +11,6 @@ import (
"fmt"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fs/fserrors"
)
// finishBatch commits the batch, returning a batch status to poll or maybe complete
@@ -21,14 +20,10 @@ func (f *Fs) finishBatch(ctx context.Context, items []*files.UploadSessionFinish
}
err = f.pacer.Call(func() (bool, error) {
complete, err = f.srv.UploadSessionFinishBatchV2(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// after the first chunk is uploaded, we retry everything
// after the first chunk is uploaded, we retry everything except the excluded errors
return err != nil, err
})
if err != nil {


@@ -55,10 +55,7 @@ func (d *digest) Write(p []byte) (n int, err error) {
n = len(p)
for len(p) > 0 {
d.writtenMore = true
toWrite := bytesPerBlock - d.n
if toWrite > len(p) {
toWrite = len(p)
}
toWrite := min(bytesPerBlock-d.n, len(p))
_, err = d.blockHash.Write(p[:toWrite])
if err != nil {
panic(hashReturnedError)


@@ -11,7 +11,7 @@ import (
func testChunk(t *testing.T, chunk int) {
data := make([]byte, chunk)
for i := 0; i < chunk; i++ {
for i := range chunk {
data[i] = 'A'
}
for _, test := range []struct {


@@ -47,6 +47,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
@@ -91,9 +92,12 @@ const (
maxFileNameLength = 255
)
type exportAPIFormat string
type exportExtension string // dotless
var (
// Description of how to auth for this app
dropboxConfig = &oauth2.Config{
dropboxConfig = &oauthutil.Config{
Scopes: []string{
"files.metadata.write",
"files.content.write",
@@ -108,7 +112,8 @@ var (
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
// TokenURL: "https://api.dropboxapi.com/1/oauth2/token",
// },
Endpoint: dropbox.OAuthEndpoint(""),
AuthURL: dropbox.OAuthEndpoint("").AuthURL,
TokenURL: dropbox.OAuthEndpoint("").TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -130,10 +135,20 @@ var (
DefaultTimeoutAsync: 10 * time.Second,
DefaultBatchSizeAsync: 100,
}
exportKnownAPIFormats = map[exportAPIFormat]exportExtension{
"markdown": "md",
"html": "html",
}
// Populated based on exportKnownAPIFormats
exportKnownExtensions = map[exportExtension]exportAPIFormat{}
paperExtension = ".paper"
paperTemplateExtension = ".papert"
)
// Gets an oauth config with the right scopes
func getOauthConfig(m configmap.Mapper) *oauth2.Config {
func getOauthConfig(m configmap.Mapper) *oauthutil.Config {
// If not impersonating, use standard scopes
if impersonate, _ := m.Get("impersonate"); impersonate == "" {
return dropboxConfig
@@ -245,23 +260,61 @@ folders.`,
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
}, {
Name: "export_formats",
Help: `Comma separated list of preferred formats for exporting files
Certain Dropbox files can only be accessed by exporting them to another format.
These include Dropbox Paper documents.
For each such file, rclone will choose the first format on this list that Dropbox
considers valid. If none is valid, it will choose Dropbox's default format.
Known formats include: "html", "md" (markdown)`,
Default: fs.CommaSepList{"html", "md"},
Advanced: true,
}, {
Name: "skip_exports",
Help: "Skip exportable files in all listings.\n\nIf given, exportable files practically become invisible to rclone.",
Default: false,
Advanced: true,
}, {
Name: "show_all_exports",
Default: false,
Help: `Show all exportable files in listings.
Adding this flag will allow all exportable files to be server side copied.
Note that rclone doesn't add extensions to the exportable file names in this mode.
Do **not** use this flag when trying to download exportable files - rclone
will fail to download them.
`,
Advanced: true,
},
}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
for apiFormat, ext := range exportKnownAPIFormats {
exportKnownExtensions[ext] = apiFormat
}
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
ExportFormats fs.CommaSepList `config:"export_formats"`
SkipExports bool `config:"skip_exports"`
ShowAllExports bool `config:"show_all_exports"`
}
// Fs represents a remote dropbox server
@@ -281,8 +334,18 @@ type Fs struct {
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
batcher *batcher.Batcher[*files.UploadSessionFinishArg, *files.FileMetadata]
exportExts []exportExtension
}
type exportType int
const (
notExport exportType = iota // a regular file
exportHide // should be hidden
exportListOnly // listable, but can't export
exportExportable // can export
)
// Object describes a dropbox object
//
// Dropbox Objects always have full metadata
@@ -294,6 +357,9 @@ type Object struct {
bytes int64 // size of the object
modTime time.Time // time it was last modified
hash string // content_hash of the object
exportType exportType
exportAPIFormat exportAPIFormat
}
// Name of the remote (as passed into NewFs)
@@ -316,32 +382,46 @@ func (f *Fs) Features() *fs.Features {
return f.features
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Some specific errors which should be excluded from retries
func shouldRetryExclude(ctx context.Context, err error) (bool, error) {
if err == nil {
return false, err
}
errString := err.Error()
if fserrors.ContextError(ctx, &err) {
return false, err
}
// First check for specific errors
//
// These come back from the SDK in a whole host of different
// error types, but there doesn't seem to be a consistent way
// of reading the error cause, so here we just check using the
// error string which isn't perfect but does the job.
errString := err.Error()
if strings.Contains(errString, "insufficient_space") {
return false, fserrors.FatalError(err)
} else if strings.Contains(errString, "malformed_path") {
return false, fserrors.NoRetryError(err)
}
return true, err
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// Then handle any official Retry-After header from Dropbox's SDK
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Logf(errString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
fs.Logf(nil, "Error %v. Too many requests or write operations. Trying again in %d seconds.", err, e.RateLimitError.RetryAfter)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
}
// Keep old behavior for backward compatibility
errString := err.Error()
if strings.Contains(errString, "too_many_write_operations") || strings.Contains(errString, "too_many_requests") || errString == "" {
return true, err
}
@@ -386,7 +466,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
newToken := fmt.Sprintf(`{"access_token":%q,"token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
if err != nil {
return nil, fmt.Errorf("NewFS convert token: %w", err)
@@ -420,6 +500,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
HeaderGenerator: f.headerGenerator,
}
for _, e := range opt.ExportFormats {
ext := exportExtension(e)
if exportKnownExtensions[ext] == "" {
return nil, fmt.Errorf("dropbox: unknown export format '%s'", e)
}
f.exportExts = append(f.exportExts, ext)
}
// unauthorized config for endpoints that fail with auth
ucfg := dropbox.Config{
LogLevel: dropbox.LogOff, // logging in the SDK: LogOff, LogDebug, LogInfo
@@ -572,38 +660,126 @@ func (f *Fs) setRoot(root string) {
}
}
type getMetadataResult struct {
entry files.IsMetadata
notFound bool
err error
}
// getMetadata gets the metadata for a file or directory
func (f *Fs) getMetadata(ctx context.Context, objPath string) (entry files.IsMetadata, notFound bool, err error) {
err = f.pacer.Call(func() (bool, error) {
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{
func (f *Fs) getMetadata(ctx context.Context, objPath string) (res getMetadataResult) {
res.err = f.pacer.Call(func() (bool, error) {
res.entry, res.err = f.srv.GetMetadata(&files.GetMetadataArg{
Path: f.opt.Enc.FromStandardPath(objPath),
})
return shouldRetry(ctx, err)
return shouldRetry(ctx, res.err)
})
if err != nil {
switch e := err.(type) {
if res.err != nil {
switch e := res.err.(type) {
case files.GetMetadataAPIError:
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.LookupErrorNotFound {
notFound = true
err = nil
res.notFound = true
res.err = nil
}
}
}
return
}
// getFileMetadata gets the metadata for a file
func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (fileInfo *files.FileMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, filePath)
if err != nil {
return nil, err
// Get metadata such that the result would be exported with the given extension
// Return a channel that will eventually receive the metadata
func (f *Fs) getMetadataForExt(ctx context.Context, filePath string, wantExportExtension exportExtension) chan getMetadataResult {
ch := make(chan getMetadataResult, 1)
wantDownloadable := (wantExportExtension == "")
go func() {
defer close(ch)
res := f.getMetadata(ctx, filePath)
info, ok := res.entry.(*files.FileMetadata)
if !ok { // Can't check anything about file, just return what we have
ch <- res
return
}
// Return notFound if downloadability or extension doesn't match
if wantDownloadable != info.IsDownloadable {
ch <- getMetadataResult{notFound: true}
return
}
if !info.IsDownloadable {
_, ext := f.chooseExportFormat(info)
if ext != wantExportExtension {
ch <- getMetadataResult{notFound: true}
return
}
}
// Return our real result or error
ch <- res
}()
return ch
}
// For a given rclone-path, figure out what the Dropbox-path may be, in order of preference.
// Multiple paths might be plausible, due to export path munging.
func (f *Fs) possibleMetadatas(ctx context.Context, filePath string) (ret []<-chan getMetadataResult) {
ret = []<-chan getMetadataResult{}
// Prefer an exact match
ret = append(ret, f.getMetadataForExt(ctx, filePath, ""))
// Check if we're plausibly an export path, otherwise we're done
if f.opt.SkipExports || f.opt.ShowAllExports {
return
}
if notFound {
dotted := path.Ext(filePath)
if dotted == "" {
return
}
ext := exportExtension(dotted[1:])
if exportKnownExtensions[ext] == "" {
return
}
// We might be an export path! Try all possibilities
base := strings.TrimSuffix(filePath, dotted)
// `foo.papert.md` will only come from `foo.papert`. Never check something like `foo.papert.paper`
if strings.HasSuffix(base, paperTemplateExtension) {
ret = append(ret, f.getMetadataForExt(ctx, base, ext))
return
}
// Otherwise, try both `foo.md` coming from `foo`, or from `foo.paper`
ret = append(ret, f.getMetadataForExt(ctx, base, ext))
ret = append(ret, f.getMetadataForExt(ctx, base+paperExtension, ext))
return
}
// getFileMetadata gets the metadata for a file
func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (*files.FileMetadata, error) {
var res getMetadataResult
// Try all possible metadatas
possibleMetadatas := f.possibleMetadatas(ctx, filePath)
for _, ch := range possibleMetadatas {
res = <-ch
if res.err != nil {
return nil, res.err
}
if !res.notFound {
break
}
}
if res.notFound {
return nil, fs.ErrorObjectNotFound
}
fileInfo, ok := entry.(*files.FileMetadata)
fileInfo, ok := res.entry.(*files.FileMetadata)
if !ok {
if _, ok = entry.(*files.FolderMetadata); ok {
if _, ok = res.entry.(*files.FolderMetadata); ok {
return nil, fs.ErrorIsDir
}
return nil, fs.ErrorNotAFile
@@ -612,15 +788,15 @@ func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (fileInfo *fi
}
// getDirMetadata gets the metadata for a directory
func (f *Fs) getDirMetadata(ctx context.Context, dirPath string) (dirInfo *files.FolderMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, dirPath)
if err != nil {
return nil, err
func (f *Fs) getDirMetadata(ctx context.Context, dirPath string) (*files.FolderMetadata, error) {
res := f.getMetadata(ctx, dirPath)
if res.err != nil {
return nil, res.err
}
if notFound {
if res.notFound {
return nil, fs.ErrorDirNotFound
}
dirInfo, ok := entry.(*files.FolderMetadata)
dirInfo, ok := res.entry.(*files.FolderMetadata)
if !ok {
return nil, fs.ErrorIsFile
}
@@ -820,16 +996,15 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var res *files.ListFolderResult
for {
if !started {
arg := files.ListFolderArg{
Path: f.opt.Enc.FromStandardPath(root),
Recursive: false,
Limit: 1000,
}
arg := files.NewListFolderArg(f.opt.Enc.FromStandardPath(root))
arg.Recursive = false
arg.Limit = 1000
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
res, err = f.srv.ListFolder(arg)
return shouldRetry(ctx, err)
})
if err != nil {
@@ -882,7 +1057,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if err != nil {
return nil, err
}
entries = append(entries, o)
if o.(*Object).exportType.listable() {
entries = append(entries, o)
}
}
}
if !res.HasMore {
@@ -968,16 +1145,14 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
}
// check directory empty
arg := files.ListFolderArg{
Path: encRoot,
Recursive: false,
}
arg := files.NewListFolderArg(encRoot)
arg.Recursive = false
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
var res *files.ListFolderResult
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
res, err = f.srv.ListFolder(arg)
return shouldRetry(ctx, err)
})
if err != nil {
@@ -1020,13 +1195,20 @@ func (f *Fs) Precision() time.Duration {
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
// Find and remove existing object
cleanup, err := operations.RemoveExisting(ctx, f, remote, "server side copy")
if err != nil {
return nil, err
}
defer cleanup(&err)
// Temporary Object under construction
dstObj := &Object{
fs: f,
@@ -1040,7 +1222,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
ToPath: f.opt.Enc.FromStandardPath(dstObj.remotePath()),
},
}
var err error
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.CopyV2(&arg)
@@ -1152,6 +1333,16 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return shouldRetry(ctx, err)
})
if err != nil && createArg.Settings.Expires != nil && strings.Contains(err.Error(), sharing.SharedLinkSettingsErrorNotAuthorized) {
// Some plans can't create links with expiry
fs.Debugf(absPath, "can't create link with expiry, trying without")
createArg.Settings.Expires = nil
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
return shouldRetry(ctx, err)
})
}
if err != nil && strings.Contains(err.Error(),
sharing.CreateSharedLinkWithSettingsErrorSharedLinkAlreadyExists) {
fs.Debugf(absPath, "has a public link already, attempting to retrieve it")
@@ -1316,16 +1507,14 @@ func (f *Fs) changeNotifyCursor(ctx context.Context) (cursor string, err error)
var startCursor *files.ListFolderGetLatestCursorResult
err = f.pacer.Call(func() (bool, error) {
arg := files.ListFolderArg{
Path: f.opt.Enc.FromStandardPath(f.slashRoot),
Recursive: true,
}
arg := files.NewListFolderArg(f.opt.Enc.FromStandardPath(f.slashRoot))
arg.Recursive = true
if arg.Path == "/" {
arg.Path = ""
}
startCursor, err = f.srv.ListFolderGetLatestCursor(&arg)
startCursor, err = f.srv.ListFolderGetLatestCursor(arg)
return shouldRetry(ctx, err)
})
@@ -1429,8 +1618,50 @@ func (f *Fs) Shutdown(ctx context.Context) error {
return nil
}
func (f *Fs) chooseExportFormat(info *files.FileMetadata) (exportAPIFormat, exportExtension) {
// Find API export formats Dropbox supports for this file
// Sometimes Dropbox lists a format in ExportAs but not ExportOptions, so check both
ei := info.ExportInfo
dropboxFormatStrings := append([]string{ei.ExportAs}, ei.ExportOptions...)
// Find which extensions these correspond to
exportExtensions := map[exportExtension]exportAPIFormat{}
var dropboxPreferredAPIFormat exportAPIFormat
var dropboxPreferredExtension exportExtension
for _, format := range dropboxFormatStrings {
apiFormat := exportAPIFormat(format)
// Only consider formats we know about
if ext, ok := exportKnownAPIFormats[apiFormat]; ok {
if dropboxPreferredAPIFormat == "" {
dropboxPreferredAPIFormat = apiFormat
dropboxPreferredExtension = ext
}
exportExtensions[ext] = apiFormat
}
}
// See if the user picked a valid extension
for _, ext := range f.exportExts {
if apiFormat, ok := exportExtensions[ext]; ok {
return apiFormat, ext
}
}
// If no matches, prefer the first valid format Dropbox lists
return dropboxPreferredAPIFormat, dropboxPreferredExtension
}
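chooseExportFormat prefers the user's export_formats order whenever Dropbox offers a matching format, and otherwise falls back to Dropbox's first known format. A standalone sketch of that selection rule (names and values hypothetical, mirroring exportKnownAPIFormats above):

package main

import "fmt"

// choose returns (apiFormat, extension). known maps API format -> extension.
func choose(dropboxFormats, userExts []string, known map[string]string) (format, ext string) {
	offered := map[string]string{} // extension -> API format
	for _, f := range dropboxFormats {
		if e, ok := known[f]; ok {
			if format == "" {
				format, ext = f, e // Dropbox's first known format
			}
			offered[e] = f
		}
	}
	for _, e := range userExts { // user preference wins when offered
		if f, ok := offered[e]; ok {
			return f, e
		}
	}
	return format, ext
}

func main() {
	known := map[string]string{"markdown": "md", "html": "html"}
	fmt.Println(choose([]string{"markdown", "html"}, []string{"html", "md"}, known)) // html html
}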
// ------------------------------------------------------------
func (et exportType) listable() bool {
return et != exportHide
}
// something we should _try_ to export
func (et exportType) exportable() bool {
return et == exportExportable || et == exportListOnly
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
@@ -1474,6 +1705,32 @@ func (o *Object) Size() int64 {
return o.bytes
}
func (o *Object) setMetadataForExport(info *files.FileMetadata) {
o.bytes = -1
o.hash = ""
if o.fs.opt.SkipExports {
o.exportType = exportHide
return
}
if o.fs.opt.ShowAllExports {
o.exportType = exportListOnly
return
}
var exportExt exportExtension
o.exportAPIFormat, exportExt = o.fs.chooseExportFormat(info)
if o.exportAPIFormat == "" {
o.exportType = exportHide
} else {
o.exportType = exportExportable
// get rid of any paper extension, if present
o.remote = strings.TrimSuffix(o.remote, paperExtension)
// add the export extension
o.remote += "." + string(exportExt)
}
}
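The renaming above condenses to a one-line rule. A same-package sketch (hypothetical helper; strings and paperExtension are already in scope): the paper extension is dropped, then the chosen export extension is appended.

func exportName(remote string, ext exportExtension) string {
	return strings.TrimSuffix(remote, paperExtension) + "." + string(ext)
}

// exportName("notes.paper", "md") == "notes.md"
// exportName("readme", "html")   == "readme.html"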
// setMetadataFromEntry sets the fs data from a files.FileMetadata
//
// This isn't a complete set of metadata and has an inaccurate date
@@ -1482,6 +1739,10 @@ func (o *Object) setMetadataFromEntry(info *files.FileMetadata) error {
o.bytes = int64(info.Size)
o.modTime = info.ClientModified
o.hash = info.ContentHash
if !info.IsDownloadable {
o.setMetadataForExport(info)
}
return nil
}
@@ -1545,6 +1806,27 @@ func (o *Object) Storable() bool {
return true
}
func (o *Object) export(ctx context.Context) (in io.ReadCloser, err error) {
if o.exportType == exportListOnly || o.exportAPIFormat == "" {
fs.Debugf(o.remote, "No export format found")
return nil, fs.ErrorObjectNotFound
}
arg := files.ExportArg{Path: o.id, ExportFormat: string(o.exportAPIFormat)}
var exportResult *files.ExportResult
err = o.fs.pacer.Call(func() (bool, error) {
exportResult, in, err = o.fs.srv.Export(&arg)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
o.bytes = int64(exportResult.ExportMetadata.Size)
o.hash = exportResult.ExportMetadata.ExportHash
return
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
if o.fs.opt.SharedFiles {
@@ -1564,6 +1846,10 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return
}
if o.exportType.exportable() {
return o.export(ctx)
}
fs.FixRangeOption(options, o.bytes)
headers := fs.OpenOptionHeaders(options)
arg := files.DownloadArg{
@@ -1692,14 +1978,10 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
err = o.fs.pacer.Call(func() (bool, error) {
entry, err = o.fs.srv.UploadSessionFinish(args, nil)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
if retry, err := shouldRetryExclude(ctx, err); !retry {
return retry, err
}
// after the first chunk is uploaded, we retry everything
// after the first chunk is uploaded, we retry everything except the excluded errors
return err != nil, err
})
if err != nil {


@@ -1,9 +1,16 @@
package dropbox
import (
"context"
"io"
"strings"
"testing"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestInternalCheckPathLength(t *testing.T) {
@@ -42,3 +49,54 @@ func TestInternalCheckPathLength(t *testing.T) {
assert.Equal(t, test.ok, err == nil, test.in)
}
}
func (f *Fs) importPaperForTest(t *testing.T) {
content := `# test doc
Lorem ipsum __dolor__ sit amet
[link](http://google.com)
`
arg := files.PaperCreateArg{
Path: f.slashRootSlash + "export.paper",
ImportFormat: &files.ImportFormat{Tagged: dropbox.Tagged{Tag: files.ImportFormatMarkdown}},
}
var err error
err = f.pacer.Call(func() (bool, error) {
reader := strings.NewReader(content)
_, err = f.srv.PaperCreate(&arg, reader)
return shouldRetry(context.Background(), err)
})
require.NoError(t, err)
}
func (f *Fs) InternalTestPaperExport(t *testing.T) {
ctx := context.Background()
f.importPaperForTest(t)
f.exportExts = []exportExtension{"html"}
obj, err := f.NewObject(ctx, "export.html")
require.NoError(t, err)
rc, err := obj.Open(ctx)
require.NoError(t, err)
defer func() { require.NoError(t, rc.Close()) }()
buf, err := io.ReadAll(rc)
require.NoError(t, err)
text := string(buf)
for _, excerpt := range []string{
"Lorem ipsum",
"<b>dolor</b>",
`href="http://google.com"`,
} {
require.Contains(t, text, excerpt)
}
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("PaperExport", f.InternalTestPaperExport)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -61,7 +61,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
return false, err // No such user
case 186:
return false, err // IP blocked?
case 374:
case 374, 412: // Flood detected seems to be #412 now
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
default:


@@ -441,23 +441,28 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
srcFs := srcObj.fs
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
srcLeaf, srcDirectoryID, err := srcFs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
dstObj, dstLeaf, dstDirectoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if srcDirectoryID == dstDirectoryID {
// No rename needed
if srcLeaf == dstLeaf {
return src, nil
}
resp, err := f.renameFile(ctx, srcObj.file.URL, dstLeaf)
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
@@ -466,11 +471,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
dstFolderID, err := strconv.Atoi(dstDirectoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
rename := dstLeaf
// No rename needed
if srcLeaf == dstLeaf {
rename = ""
}
resp, err := f.moveFile(ctx, srcObj.file.URL, dstFolderID, rename)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
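The refactored Move now distinguishes four cases. A condensed sketch of the decision (hypothetical helper, not part of the change):

// moveAction summarizes what Move does for each combination of
// source/destination directory and leaf name.
func moveAction(srcDir, dstDir, srcLeaf, dstLeaf string) string {
	switch {
	case srcDir == dstDir && srcLeaf == dstLeaf:
		return "noop" // return src unchanged
	case srcDir == dstDir:
		return "rename" // renameFile with the new leaf
	case srcLeaf == dstLeaf:
		return "move" // moveFile with an empty rename
	default:
		return "move+rename" // moveFile with the new leaf
	}
}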


@@ -216,11 +216,11 @@ var ItemFields = mustFields(Item{})
// fields returns the JSON fields in use by opt as a | separated
// string.
func fields(opt interface{}) (pipeTags string, err error) {
func fields(opt any) (pipeTags string, err error) {
var tags []string
def := reflect.ValueOf(opt)
defType := def.Type()
for i := 0; i < def.NumField(); i++ {
for i := range def.NumField() {
field := defType.Field(i)
tag, ok := field.Tag.Lookup("json")
if !ok {
@@ -239,7 +239,7 @@ func fields(opt interface{}) (pipeTags string, err error) {
// mustFields returns the JSON fields in use by opt as a | separated
// string. It panics on failure.
func mustFields(opt interface{}) string {
func mustFields(opt any) string {
tags, err := fields(opt)
if err != nil {
panic(err)
@@ -351,12 +351,12 @@ type SpaceInfo struct {
// DeleteResponse is returned from doDeleteFile
type DeleteResponse struct {
Status
Deleted []string `json:"deleted"`
Errors []interface{} `json:"errors"`
ID string `json:"fi_id"`
BackgroundTask int `json:"backgroundtask"`
UsSize string `json:"us_size"`
PaSize string `json:"pa_size"`
Deleted []string `json:"deleted"`
Errors []any `json:"errors"`
ID string `json:"fi_id"`
BackgroundTask int `json:"backgroundtask"`
UsSize string `json:"us_size"`
PaSize string `json:"pa_size"`
//SpaceInfo SpaceInfo `json:"spaceinfo"`
}


@@ -371,7 +371,7 @@ func (f *Fs) getToken(ctx context.Context) (token string, err error) {
}
// params for rpc
type params map[string]interface{}
type params map[string]any
// rpc calls the rpc.php method of the SME file fabric
//


@@ -0,0 +1,900 @@
// Package filescom provides an interface to the Files.com
// object storage system.
package filescom
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"slices"
"strings"
"time"
files_sdk "github.com/Files-com/files-sdk-go/v3"
"github.com/Files-com/files-sdk-go/v3/bundle"
"github.com/Files-com/files-sdk-go/v3/file"
file_migration "github.com/Files-com/files-sdk-go/v3/filemigration"
"github.com/Files-com/files-sdk-go/v3/folder"
"github.com/Files-com/files-sdk-go/v3/session"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
/*
Run of rclone info
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 512 // for 1 byte unicode characters
maxFileLength = 512 // for 2 byte unicode characters
maxFileLength = 512 // for 3 byte unicode characters
maxFileLength = 512 // for 4 byte unicode characters
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = true
*/
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
folderNotEmpty = "processing-failure/folder-not-empty"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "filescom",
Description: "Files.com",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "site",
Help: "Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).",
}, {
Name: "username",
Help: "The username used to authenticate with Files.com.",
}, {
Name: "password",
Help: "The password used to authenticate with Files.com.",
IsPassword: true,
}, {
Name: "api_key",
Help: "The API key used to authenticate with Files.com.",
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Site string `config:"site"`
Username string `config:"username"`
Password string `config:"password"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote files.com server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
fileClient *file.Client // the connection to the file API
folderClient *folder.Client // the connection to the folder API
migrationClient *file_migration.Client // the connection to the file migration API
bundleClient *bundle.Client // the connection to the bundle API
pacer *fs.Pacer // pacer for API calls
}
// Object describes a files object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of the object
crc32 string // CRC32 of the object content
md5 string // MD5 of the object content
mimeType string // Content-Type of the object
modTime time.Time // modification time of the object
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("files root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Encode remote and turn it into an absolute path in the share
func (f *Fs) absPath(remote string) string {
return f.opt.Enc.FromStandardPath(path.Join(f.root, remote))
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
if slices.Contains(retryErrorCodes, apiErr.HttpCode) {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}
return fserrors.ShouldRetry(err), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *files_sdk.File, err error) {
params := files_sdk.FileFindParams{
Path: f.absPath(path),
}
var file files_sdk.File
err = f.pacer.Call(func() (bool, error) {
file, err = f.fileClient.Find(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return &file, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
config, err := newClientConfig(ctx, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
fileClient: &file.Client{Config: config},
folderClient: &folder.Client{Config: config},
migrationClient: &file_migration.Client{Config: config},
bundleClient: &bundle.Client{Config: config},
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f)
if f.root != "" {
info, err := f.readMetaDataForPath(ctx, "")
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
return f, fs.ErrorIsFile
}
}
return f, err
}
func newClientConfig(ctx context.Context, opt *Options) (config files_sdk.Config, err error) {
if opt.Site != "" {
if strings.Contains(opt.Site, ".") {
config.EndpointOverride = opt.Site
} else {
config.Subdomain = opt.Site
}
_, err = url.ParseRequestURI(config.Endpoint())
if err != nil {
err = fmt.Errorf("invalid domain or subdomain: %v", opt.Site)
return
}
}
config = config.Init().SetCustomClient(fshttp.NewClient(ctx))
if opt.APIKey != "" {
config.APIKey = opt.APIKey
return
}
if opt.Username == "" {
err = errors.New("username not found")
return
}
if opt.Password == "" {
err = errors.New("password not found")
return
}
opt.Password, err = obscure.Reveal(opt.Password)
if err != nil {
return
}
sessionClient := session.Client{Config: config}
params := files_sdk.SessionCreateParams{
Username: opt.Username,
Password: opt.Password,
}
thisSession, err := sessionClient.Create(params, files_sdk.WithContext(ctx))
if err != nil {
err = fmt.Errorf("couldn't create session: %w", err)
return
}
config.SessionId = thisSession.Id
return
}
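newClientConfig gives an API key precedence over username/password, and only falls back to creating a session when no key is set. A same-package sketch of that precedence (hypothetical helper; errors is already imported):

func authMode(apiKey, username, password string) (string, error) {
	switch {
	case apiKey != "":
		return "api-key", nil // config.APIKey authenticates directly
	case username == "":
		return "", errors.New("username not found")
	case password == "":
		return "", errors.New("password not found")
	default:
		return "session", nil // session.Create supplies config.SessionId
	}
}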
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *files_sdk.File) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if file != nil {
err = o.setMetaData(file)
} else {
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var it *folder.Iter
params := files_sdk.FolderListForParams{
Path: f.absPath(dir),
}
err = f.pacer.Call(func() (bool, error) {
it, err = f.folderClient.ListFor(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err)
}
for it.Next() {
item := ptr(it.File())
remote := f.opt.Enc.ToStandardPath(item.DisplayName)
remote = path.Join(dir, remote)
if remote == dir {
continue
}
if item.IsDir() {
d := fs.NewDir(remote, item.ModTime())
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
err = it.Err()
if files_sdk.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return
}
// Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it
//
// Returns the object and error.
//
// Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, err error) {
// Create the directory for the object if it doesn't exist
err = f.mkParentDir(ctx, remote)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, nil
}
// Put the object
//
// Copy the reader into the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
fs := &Object{
fs: f,
remote: src.Remote(),
}
return fs, fs.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
func (f *Fs) mkdir(ctx context.Context, path string) error {
if path == "" || path == "." {
return nil
}
params := files_sdk.FolderCreateParams{
Path: path,
MkdirParents: ptr(true),
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.folderClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if files_sdk.IsExist(err) {
return nil
}
return err
}
// Make the parent directory of remote
func (f *Fs) mkParentDir(ctx context.Context, remote string) error {
return f.mkdir(ctx, path.Dir(f.absPath(remote)))
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.mkdir(ctx, f.absPath(dir))
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
}
return o.SetModTime(ctx, modTime)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
path := f.absPath(dir)
if path == "" || path == "." {
return errors.New("can't purge root directory")
}
params := files_sdk.FileDeleteParams{
Path: path,
Recursive: ptr(!check),
}
err := f.pacer.Call(func() (bool, error) {
err := f.fileClient.Delete(params, files_sdk.WithContext(ctx))
// Allow for eventually consistent deletion of child objects.
if isFolderNotEmpty(err) {
return true, err
}
return shouldRetry(ctx, err)
})
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorDirNotFound
} else if isFolderNotEmpty(err) {
return fs.ErrorDirectoryNotEmpty
}
return fmt.Errorf("rmdir failed: %w", err)
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dstObj fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err = srcObj.readMetaData(ctx)
if err != nil {
return
}
srcPath := srcObj.fs.absPath(srcObj.remote)
dstPath := f.absPath(remote)
if strings.EqualFold(srcPath, dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
// Create temporary object
dstObj, err = f.createObject(ctx, remote)
if err != nil {
return
}
// Copy the object
params := files_sdk.FileCopyParams{
Path: srcPath,
Destination: dstPath,
Overwrite: ptr(true),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Copy(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return
}
err = f.waitForAction(ctx, action, "copy")
if err != nil {
return
}
err = dstObj.SetModTime(ctx, srcObj.modTime)
return
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
func (f *Fs) move(ctx context.Context, src *Fs, srcRemote string, dstRemote string) (info *files_sdk.File, err error) {
// Move the object
params := files_sdk.FileMoveParams{
Path: src.absPath(srcRemote),
Destination: f.absPath(dstRemote),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Move(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = f.waitForAction(ctx, action, "move")
if err != nil {
return nil, err
}
info, err = f.readMetaDataForPath(ctx, dstRemote)
return
}
func (f *Fs) waitForAction(ctx context.Context, action files_sdk.FileAction, operation string) (err error) {
var migration files_sdk.FileMigration
err = f.pacer.Call(func() (bool, error) {
migration, err = f.migrationClient.Wait(action, func(migration files_sdk.FileMigration) {
// noop
}, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err == nil && migration.Status != "completed" {
return fmt.Errorf("%v did not complete successfully: %v", operation, migration.Status)
}
return
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// Do the move
info, err := f.move(ctx, srcObj.fs, srcObj.remote, dstObj.remote)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
// Check if destination exists
_, err = f.readMetaDataForPath(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
// Create temporary object
dstObj, err := f.createObject(ctx, dstRemote)
if err != nil {
return
}
// Do the move
_, err = f.move(ctx, srcFs, srcRemote, dstObj.remote)
return
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (url string, err error) {
params := files_sdk.BundleCreateParams{
Paths: []string{f.absPath(remote)},
}
if expire < fs.DurationOff {
params.ExpiresAt = ptr(time.Now().Add(time.Duration(expire)))
}
var bundle files_sdk.Bundle
err = f.pacer.Call(func() (bool, error) {
bundle, err = f.bundleClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
url = bundle.Url
return
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet(hash.CRC32, hash.MD5)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the CRC32 or MD5 of an object as a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
switch t {
case hash.CRC32:
if o.crc32 == "" {
return "", nil
}
return fmt.Sprintf("%08s", o.crc32), nil
case hash.MD5:
return o.md5, nil
}
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(file *files_sdk.File) error {
o.modTime = file.ModTime()
if !file.IsDir() {
o.size = file.Size
o.crc32 = file.Crc32
o.md5 = file.Md5
o.mimeType = file.MimeType
}
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) {
file, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorObjectNotFound
}
return err
}
if file.IsDir() {
return fs.ErrorIsDir
}
return o.setMetaData(file)
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
params := files_sdk.FileUpdateParams{
Path: o.fs.absPath(o.remote),
ProvidedMtime: &modTime,
}
var file files_sdk.File
err = o.fs.pacer.Call(func() (bool, error) {
file, err = o.fs.fileClient.Update(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.setMetaData(&file)
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset, count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
params := files_sdk.FileDownloadParams{
Path: o.fs.absPath(o.remote),
}
headers := &http.Header{}
headers.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, offset+count-1))
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.fileClient.Download(
params,
files_sdk.WithContext(ctx),
files_sdk.RequestHeadersOption(headers),
files_sdk.ResponseBodyOption(func(closer io.ReadCloser) error {
in = closer
return err
}),
)
return shouldRetry(ctx, err)
})
return
}
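Open translates rclone's seek and range options into a single inclusive Range header. A runnable sketch of just that arithmetic (hypothetical helper):

package main

import "fmt"

// rangeHeader mirrors the header built above for a file of the given
// size: bytes=offset-(offset+count-1), where a negative count means
// "to end of file".
func rangeHeader(size, offset, count int64) string {
	if count < 0 {
		count = size - offset
	}
	return fmt.Sprintf("bytes=%d-%d", offset, offset+count-1)
}

func main() {
	fmt.Println(rangeHeader(100, 10, -1)) // bytes=10-99
	fmt.Println(rangeHeader(100, 0, 25))  // bytes=0-24
}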
// Returns a pointer to t - useful for returning pointers to constants
func ptr[T any](t T) *T {
return &t
}
func isFolderNotEmpty(err error) bool {
var re files_sdk.ResponseError
ok := errors.As(err, &re)
return ok && re.Type == folderNotEmpty
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
uploadOpts := []file.UploadOption{
file.UploadWithContext(ctx),
file.UploadWithReader(in),
file.UploadWithDestinationPath(o.fs.absPath(o.remote)),
file.UploadWithProvidedMtime(src.ModTime(ctx)),
}
err := o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Upload(uploadOpts...)
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.readMetaData(ctx)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := files_sdk.FileDeleteParams{
Path: o.fs.absPath(o.remote),
}
return o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Delete(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)


@@ -0,0 +1,17 @@
// Test Files filesystem interface
package filescom_test
import (
"testing"
"github.com/rclone/rclone/backend/filescom"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilesCom:",
NilObject: (*filescom.Object)(nil),
})
}


@@ -85,7 +85,7 @@ to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
Help: strings.Replace(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@@ -99,7 +99,7 @@ maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {
@@ -180,12 +180,28 @@ If this is set and no password is supplied then rclone will ask for a password
Default: "",
Help: `Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "no_check_upload",
Default: false,
Help: `Don't check the upload is OK
Normally rclone will try to check the upload exists after it has
uploaded a file to make sure the size and modification time are as
expected.
This flag stops rclone doing these checks. This enables uploading to
folders which are write only.
You will likely need to use the --inplace flag also if uploading to
a write only folder.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
@@ -232,6 +248,7 @@ type Options struct {
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@@ -1303,6 +1320,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return fmt.Errorf("update stor: %w", err)
}
o.fs.putFtpConnection(&c, nil)
if o.fs.opt.NoCheckUpload {
o.info = &FileInfo{
Name: o.remote,
Size: uint64(src.Size()),
ModTime: src.ModTime(ctx),
precise: true,
IsDir: false,
}
return nil
}
if err = o.SetModTime(ctx, src.ModTime(ctx)); err != nil {
return fmt.Errorf("SetModTime: %w", err)
}

View File

@@ -17,7 +17,7 @@ import (
"github.com/stretchr/testify/require"
)
- type settings map[string]interface{}
+ type settings map[string]any
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash

backend/gofile/api/types.go (new file, 311 lines)

@@ -0,0 +1,311 @@
// Package api has type definitions for gofile
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// gofile API, by using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
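
The embedded quote characters in timeFormat are what let these methods round-trip with encoding/json without any extra quoting. A small sketch, assuming this package is imported as api:

// Sketch: marshal and unmarshal an api.Time (imports: encoding/json, time).
func exampleTimeJSON() error {
	t := api.Time(time.Date(2017, 5, 3, 7, 26, 10, 0, time.UTC))
	out, err := json.Marshal(&t) // produces "2017-05-03T07:26:10Z", quotes included
	if err != nil {
		return err
	}
	var back api.Time
	return json.Unmarshal(out, &back)
}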
// Error is returned from gofile when things go wrong
type Error struct {
Status string `json:"status"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Status)
return out
}
// IsError returns true if there is an error
func (e Error) IsError() bool {
return e.Status != "ok"
}
// Err returns err if not nil, or e if IsError or nil
func (e Error) Err(err error) error {
if err != nil {
return err
}
if e.IsError() {
return e
}
return nil
}
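
The usual call pattern (a sketch, not code from this change) is to decode a response type that embeds Error and then collapse both failure modes through Err:

// Sketch: srv is a *rest.Client; opts describes the call being made.
func exampleServers(ctx context.Context, srv *rest.Client, opts rest.Opts) (*ServersResponse, error) {
	var result ServersResponse
	_, err := srv.CallJSON(ctx, &opts, nil, &result)
	if err := result.Err(err); err != nil {
		return nil, err // transport error, or Error{Status} decoded from the body
	}
	return &result, nil
}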
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item describes a folder or a file as returned by /contents
type Item struct {
ID string `json:"id"`
ParentFolder string `json:"parentFolder"`
Type string `json:"type"`
Name string `json:"name"`
Size int64 `json:"size"`
Code string `json:"code"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime"`
Link string `json:"link"`
MD5 string `json:"md5"`
MimeType string `json:"mimetype"`
ChildrenCount int `json:"childrenCount"`
DirectLinks map[string]*DirectLink `json:"directLinks"`
//Public bool `json:"public"`
//ServerSelected string `json:"serverSelected"`
//Thumbnail string `json:"thumbnail"`
//DownloadCount int `json:"downloadCount"`
//TotalDownloadCount int64 `json:"totalDownloadCount"`
//TotalSize int64 `json:"totalSize"`
//ChildrenIDs []string `json:"childrenIds"`
Children map[string]*Item `json:"children"`
}
// ToNativeTime converts a go time to a native time
func ToNativeTime(t time.Time) int64 {
return t.Unix()
}
// FromNativeTime converts native time to a go time
func FromNativeTime(t int64) time.Time {
return time.Unix(t, 0)
}
// DirectLink describes a direct link to a file so it can be
// downloaded by third parties.
type DirectLink struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
DirectLink string `json:"directLink"`
}
// Contents is returned from the /contents call
type Contents struct {
Error
Data struct {
Item
} `json:"data"`
Metadata Metadata `json:"metadata"`
}
// Metadata is returned when paging is in use
type Metadata struct {
TotalCount int `json:"totalCount"`
TotalPages int `json:"totalPages"`
Page int `json:"page"`
PageSize int `json:"pageSize"`
HasNextPage bool `json:"hasNextPage"`
}
// AccountsGetID is the result of /accounts/getid
type AccountsGetID struct {
Error
Data struct {
ID string `json:"id"`
} `json:"data"`
}
// Stats of storage and traffic
type Stats struct {
FolderCount int64 `json:"folderCount"`
FileCount int64 `json:"fileCount"`
Storage int64 `json:"storage"`
TrafficDirectGenerated int64 `json:"trafficDirectGenerated"`
TrafficReqDownloaded int64 `json:"trafficReqDownloaded"`
TrafficWebDownloaded int64 `json:"trafficWebDownloaded"`
}
// AccountsGet is the result of /accounts/{id}
type AccountsGet struct {
Error
Data struct {
ID string `json:"id"`
Email string `json:"email"`
Tier string `json:"tier"`
PremiumType string `json:"premiumType"`
Token string `json:"token"`
RootFolder string `json:"rootFolder"`
SubscriptionProvider string `json:"subscriptionProvider"`
SubscriptionEndDate int `json:"subscriptionEndDate"`
SubscriptionLimitDirectTraffic int64 `json:"subscriptionLimitDirectTraffic"`
SubscriptionLimitStorage int64 `json:"subscriptionLimitStorage"`
StatsCurrent Stats `json:"statsCurrent"`
// StatsHistory map[int]map[int]map[int]Stats `json:"statsHistory"`
} `json:"data"`
}
// CreateFolderRequest is the input to /contents/createFolder
type CreateFolderRequest struct {
ParentFolderID string `json:"parentFolderId"`
FolderName string `json:"folderName"`
ModTime int64 `json:"modTime,omitempty"`
}
// CreateFolderResponse is the output from /contents/createFolder
type CreateFolderResponse struct {
Error
Data Item `json:"data"`
}
// DeleteRequest is the input to DELETE /contents
type DeleteRequest struct {
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// DeleteResponse is the input to DELETE /contents
type DeleteResponse struct {
Error
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
}
// UploadResponse is returned by POST /contents/uploadfile
type UploadResponse struct {
Error
Data Item `json:"data"`
}
// DirectLinksRequest specifies the parameters for the direct link
type DirectLinksRequest struct {
ExpireTime int64 `json:"expireTime,omitempty"`
SourceIpsAllowed []any `json:"sourceIpsAllowed,omitempty"`
DomainsAllowed []any `json:"domainsAllowed,omitempty"`
Auth []any `json:"auth,omitempty"`
}
// DirectLinksResult is returned from POST /contents/{id}/directlinks
type DirectLinksResult struct {
Error
Data struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
ID string `json:"id"`
DirectLink string `json:"directLink"`
} `json:"data"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /contents/{id}/update
//
// The Value of the attribute to define :
// For Attribute "name" : The name of the content (file or folder)
// For Attribute "description" : The description displayed on the download page (folder only)
// For Attribute "tags" : A comma-separated list of tags (folder only)
// For Attribute "public" : either true or false (folder only)
// For Attribute "expiry" : A unix timestamp of the expiration date (folder only)
// For Attribute "password" : The password to set (folder only)
type UpdateItemRequest struct {
Attribute string `json:"attribute"`
Value any `json:"attributeValue"`
}
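
For example, a rename of an item would be expressed as (a sketch based on the attribute list above):

// Sketch: request body for PUT /contents/{id}/update renaming an item.
var renameRequest = UpdateItemRequest{
	Attribute: "name",
	Value:     "new-name.txt",
}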
// UpdateItemResponse is returned by PUT /contents/{id}/update
type UpdateItemResponse struct {
Error
Data Item `json:"data"`
}
// MoveRequest is the input to /contents/move
type MoveRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// MoveResponse is returned by POST /contents/move
type MoveResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// CopyRequest is the input to /contents/copy
type CopyRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// CopyResponse is returned by POST /contents/copy
type CopyResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// UploadServerStatus is returned when fetching the root of an upload server
type UploadServerStatus struct {
Error
Data struct {
Server string `json:"server"`
Test string `json:"test"`
} `json:"data"`
}

backend/gofile/gofile.go (new file, 1659 lines)

File diff suppressed because it is too large

View File

@@ -0,0 +1,17 @@
// Test Gofile filesystem interface
package gofile_test
import (
"testing"
"github.com/rclone/rclone/backend/gofile"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoFile:",
NilObject: (*gofile.Object)(nil),
})
}

View File

@@ -35,7 +35,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
- "github.com/rclone/rclone/fs/walk"
+ "github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -62,9 +62,10 @@ const (
var (
// Description of how to auth for this app
- storageConfig = &oauth2.Config{
+ storageConfig = &oauthutil.Config{
Scopes: []string{storage.DevstorageReadWriteScope},
- Endpoint: google.Endpoint,
+ AuthURL: google.Endpoint.AuthURL,
+ TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -106,6 +107,12 @@ func init() {
Help: "Service Account Credentials JSON blob.\n\nLeave blank normally.\nNeeded only if you want to use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
Sensitive: true,
}, {
Name: "access_token",
Help: "Short-lived access token.\n\nLeave blank normally.\nNeeded only if you want to use a short-lived access token instead of interactive login.",
Hide: fs.OptionHideConfigurator,
Sensitive: true,
Advanced: true,
}, {
Name: "anonymous",
Help: "Access public buckets and objects without credentials.\n\nSet to 'true' if you just want to download files and don't configure credentials.",
@@ -379,6 +386,7 @@ type Options struct {
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
DirectoryMarkers bool `config:"directory_markers"`
AccessToken string `config:"access_token"`
}
// Fs represents a remote storage server
@@ -535,6 +543,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to configure Google Cloud Storage: %w", err)
}
} else if opt.AccessToken != "" {
ts := oauth2.Token{AccessToken: opt.AccessToken}
oAuthClient = oauth2.NewClient(ctx, oauth2.StaticTokenSource(&ts))
} else {
oAuthClient, _, err = oauthutil.NewClient(ctx, name, m, storageConfig)
if err != nil {
@@ -834,7 +845,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
bucket, directory := f.split(dir)
- list := walk.NewListRHelper(callback)
+ list := list.NewHelper(callback)
listR := func(bucket, directory, prefix string, addBucket bool) error {
return f.list(ctx, bucket, directory, prefix, addBucket, true, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -944,7 +955,6 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist
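
For context on the new access_token option above: a static token source wraps a fixed credential and never refreshes it, so requests start failing once the token expires. A minimal sketch of the same pattern:

// Sketch: build an HTTP client from a short-lived access token
// (imports: net/http, golang.org/x/oauth2).
func clientFromAccessToken(ctx context.Context, token string) *http.Client {
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: token})
	return oauth2.NewClient(ctx, ts) // no refresh: fails once the token expires
}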

View File

@@ -4,6 +4,7 @@ package googlephotos
import (
"path"
"slices"
"strings"
"sync"
@@ -119,7 +120,7 @@ func (as *albums) _del(album *api.Album) {
dirs := as.path[dir]
for i, dir := range dirs {
if dir == leaf {
- dirs = append(dirs[:i], dirs[i+1:]...)
+ dirs = slices.Delete(dirs, i, i+1)
break
}
}

View File

@@ -28,13 +28,11 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
"golang.org/x/oauth2/google"
)
@@ -61,13 +59,14 @@ const (
var (
// Description of how to auth for this app
- oauthConfig = &oauth2.Config{
+ oauthConfig = &oauthutil.Config{
Scopes: []string{
"openid",
"profile",
scopeReadWrite, // this must be at position scopeAccess
},
- Endpoint: google.Endpoint,
+ AuthURL: google.Endpoint.AuthURL,
+ TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
@@ -160,6 +159,34 @@ listings and transferred.
Without this flag, archived media will not be visible in directory
listings and won't be transferred.`,
Advanced: true,
}, {
Name: "proxy",
Default: "",
Help: strings.ReplaceAll(`Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
Download the software from [gphotosdl](https://github.com/rclone/gphotosdl)
First run with
gphotosdl -login
Then once you have logged into google photos close the browser window
and run
gphotosdl
Then supply the parameter |--gphotos-proxy "http://localhost:8282"| to make
rclone use the proxy.
`, "|", "`"),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -181,6 +208,7 @@ type Options struct {
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
Proxy string `config:"proxy"`
}
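
The proxy option's effect on downloads is a simple URL rewrite (see the Open change below); a sketch of the transformation, assuming a proxy at http://localhost:8282:

// Sketch: "http://localhost:8282" + media item id "abc" -> "http://localhost:8282/id/abc"
func proxyDownloadURL(proxy, id string) string {
	return strings.TrimRight(proxy, "/") + "/id/" + id
}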
// Fs represents a remote storage server
@@ -360,7 +388,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
Method: "GET",
RootURL: "https://accounts.google.com/.well-known/openid-configuration",
}
- var openIDconfig map[string]interface{}
+ var openIDconfig map[string]any
err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(ctx, resp, err)
@@ -420,7 +448,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
"token_type_hint": []string{"access_token"},
},
}
- var res interface{}
+ var res any
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(ctx, resp, err)
@@ -454,7 +482,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Med
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
- defer log.Trace(f, "remote=%q", remote)("")
+ // defer log.Trace(f, "remote=%q", remote)("")
return f.newObjectWithInfo(ctx, remote, nil)
}
@@ -667,7 +695,7 @@ func (f *Fs) listUploads(ctx context.Context, dir string) (entries fs.DirEntries
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
- defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
+ // defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil || pattern.isFile {
return nil, fs.ErrorDirNotFound
@@ -684,7 +712,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
- defer log.Trace(f, "src=%+v", src)("")
+ // defer log.Trace(f, "src=%+v", src)("")
// Temporary Object under construction
o := &Object{
fs: f,
@@ -737,7 +765,7 @@ func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *ap
// Mkdir creates the album if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
- defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
+ // defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@@ -761,7 +789,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
- defer log.Trace(f, "dir=%q")("err=%v", &err)
+ // defer log.Trace(f, "dir=%q")("err=%v", &err)
match, _, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@@ -834,7 +862,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
- defer log.Trace(o, "")("")
+ // defer log.Trace(o, "")("")
if !o.fs.opt.ReadSize || o.bytes >= 0 {
return o.bytes
}
@@ -935,7 +963,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
- defer log.Trace(o, "")("")
+ // defer log.Trace(o, "")("")
err := o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "ModTime: Failed to read metadata: %v", err)
@@ -965,16 +993,20 @@ func (o *Object) downloadURL() string {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
- defer log.Trace(o, "")("")
+ // defer log.Trace(o, "")("")
err = o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "Open: Failed to read metadata: %v", err)
return nil, err
}
url := o.downloadURL()
if o.fs.opt.Proxy != "" {
url = strings.TrimRight(o.fs.opt.Proxy, "/") + "/id/" + o.id
}
var resp *http.Response
opts := rest.Opts{
Method: "GET",
- RootURL: o.downloadURL(),
+ RootURL: url,
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1067,7 +1099,7 @@ func (f *Fs) commitBatch(ctx context.Context, items []uploadedItem, results []*a
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
- defer log.Trace(o, "src=%+v", src)("err=%v", &err)
+ // defer log.Trace(o, "src=%+v", src)("err=%v", &err)
match, _, pattern := patterns.match(o.fs.root, o.remote, true)
if pattern == nil || !pattern.isFile || !pattern.canUpload {
return errCantUpload
@@ -1136,7 +1168,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
errors := make([]error, 1)
results := make([]*api.MediaItem, 1)
err = o.fs.commitBatch(ctx, []uploadedItem{uploaded}, results, errors)
- if err != nil {
+ if err == nil {
err = errors[0]
info = results[0]
}

View File

@@ -2,6 +2,7 @@ package googlephotos
import (
"context"
"errors"
"fmt"
"io"
"net/http"
@@ -35,7 +36,7 @@ func TestIntegration(t *testing.T) {
*fstest.RemoteName = "TestGooglePhotos:"
}
f, err := fs.NewFs(ctx, *fstest.RemoteName)
- if err == fs.ErrorNotFoundInConfigFile {
+ if errors.Is(err, fs.ErrorNotFoundInConfigFile) {
t.Skipf("Couldn't create google photos backend - skipping tests: %v", err)
}
require.NoError(t, err)

View File

@@ -24,7 +24,7 @@ import (
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
- func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
+ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "drop":
return nil, f.db.Stop(true)

View File

@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/kv"
)
@@ -182,6 +183,9 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
// Enable ListP always
f.features.ListP = f.ListP
cache.PinUntilFinalized(f.Fs, f)
return f, err
}
@@ -237,10 +241,39 @@ func (f *Fs) wrapEntries(baseEntries fs.DirEntries) (hashEntries fs.DirEntries,
// List the objects and directories in dir into entries.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
- if entries, err = f.Fs.List(ctx, dir); err != nil {
- return nil, err
- }
- return f.wrapEntries(entries)
+ return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
wrappedCallback := func(entries fs.DirEntries) error {
entries, err := f.wrapEntries(entries)
if err != nil {
return err
}
return callback(entries)
}
listP := f.Fs.Features().ListP
if listP == nil {
entries, err := f.Fs.List(ctx, dir)
if err != nil {
return err
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
}
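
List above is now implemented in terms of ListP via list.WithListP. Roughly (a sketch of the idea, not the library source), the helper drives ListP and accumulates the tranches back into a single slice:

// Sketch of what list.WithListP does for a backend implementing ListP.
type listPer interface {
	ListP(ctx context.Context, dir string, callback fs.ListRCallback) error
}

func listViaListP(ctx context.Context, dir string, f listPer) (entries fs.DirEntries, err error) {
	err = f.ListP(ctx, dir, func(batch fs.DirEntries) error {
		entries = append(entries, batch...)
		return nil
	})
	return entries, err
}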
// ListR lists the objects and directories recursively into out.

View File

@@ -6,6 +6,7 @@ import (
"encoding/gob"
"errors"
"fmt"
"maps"
"strings"
"time"
@@ -195,9 +196,7 @@ func (op *kvPut) Do(ctx context.Context, b kv.Bucket) (err error) {
r.Fp = op.fp
}
for hashType, hashVal := range op.hashes {
r.Hashes[hashType] = hashVal
}
maps.Copy(r.Hashes, op.hashes)
if data, err = r.encode(op.key); err != nil {
return fmt.Errorf("marshal failed: %w", err)
}
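
maps.Copy (Go 1.21) is a drop-in replacement for the removed hand-written loop; a tiny illustration:

// Sketch: maps.Copy inserts every src key into dst, overwriting duplicates.
func exampleMapsCopy() map[string]string {
	dst := map[string]string{"md5": "old"}
	maps.Copy(dst, map[string]string{"md5": "new", "sha1": "abc"})
	return dst // {"md5":"new", "sha1":"abc"}
}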

View File

@@ -31,7 +31,6 @@ import (
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/oauth2"
)
const (
@@ -48,11 +47,9 @@ const (
// Globals
var (
// Description of how to auth for this app.
- oauthConfig = &oauth2.Config{
- Endpoint: oauth2.Endpoint{
- AuthURL: "https://my.hidrive.com/client/authorize",
- TokenURL: "https://my.hidrive.com/oauth2/token",
- },
+ oauthConfig = &oauthutil.Config{
+ AuthURL: "https://my.hidrive.com/client/authorize",
+ TokenURL: "https://my.hidrive.com/oauth2/token",
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,

View File

@@ -52,10 +52,7 @@ func writeByBlock(p []byte, writer io.Writer, blockSize uint32, bytesInBlock *ui
total := len(p)
nullBytes := make([]byte, blockSize)
for len(p) > 0 {
- toWrite := int(blockSize - *bytesInBlock)
- if toWrite > len(p) {
- toWrite = len(p)
- }
+ toWrite := min(int(blockSize-*bytesInBlock), len(p))
c, err := writer.Write(p[:toWrite])
*bytesInBlock += uint32(c)
*onlyNullBytesInBlock = *onlyNullBytesInBlock && bytes.Equal(nullBytes[:toWrite], p[:toWrite])
@@ -276,7 +273,7 @@ func (h *hidriveHash) Sum(b []byte) []byte {
}
checksum := zeroSum
- for i := 0; i < len(h.levels); i++ {
+ for i := range h.levels {
level := h.levels[i]
if i < len(h.levels)-1 {
// Aggregate non-empty non-final levels.

View File

@@ -216,7 +216,7 @@ func TestLevelWrite(t *testing.T) {
func TestLevelIsFull(t *testing.T) {
content := [hidrivehash.Size]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19}
l := hidrivehash.NewLevel()
- for i := 0; i < 256; i++ {
+ for range 256 {
assert.False(t, l.(internal.LevelHash).IsFull())
written, err := l.Write(content[:])
assert.Equal(t, len(content), written)
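
The min rewrite above and this range loop are mechanical Go modernizations: the min builtin (Go 1.21) replaces the three-line clamp, and range-over-int (Go 1.22) replaces counted loops whose index is unused. A compact illustration:

func modernLoopIdioms() {
	toWrite := min(8, 5) // built-in min: no if/assign clamp needed
	for range 3 {        // range over an int: runs the body 3 times
		_ = toWrite
	}
}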

View File

@@ -180,7 +180,6 @@ func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Op
}
addHeaders(req, opt)
res, err := noRedir.Do(req)
if err != nil {
fs.Debugf(nil, "Assuming path is a file as HEAD request could not be sent: %v", err)
return createFileResult()
@@ -249,6 +248,14 @@ func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err
f.httpClient = client
f.endpoint = u
f.endpointURL = u.String()
if isFile {
// Correct root if definitely pointing to a file
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
return isFile, nil
}
@@ -331,12 +338,13 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
+ trimmedRemote := strings.TrimLeft(remote, "/") // remove leading "/" since we always have it in f.endpointURL
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
- return f.endpointURL + remote
+ return f.endpointURL + trimmedRemote
}
// Default behavior
- return f.endpointURL + rest.URLPathEscape(remote)
+ return f.endpointURL + rest.URLPathEscape(trimmedRemote)
}
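
With the fix above, a remote passed with a leading slash no longer produces a double slash against the endpoint. Illustrative values (the endpoint is hypothetical):

// Sketch: endpoint "http://example.com/files/" + remote "/four/under four.txt"
// -> "http://example.com/files/four/under%20four.txt"
func exampleJoinedURL(endpointURL, remote string) string {
	trimmed := strings.TrimLeft(remote, "/")
	return endpointURL + rest.URLPathEscape(trimmed)
}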
// Errors returned by parseName
@@ -504,7 +512,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
entries = append(entries, entry)
entriesMu.Unlock()
}
- for i := 0; i < checkers; i++ {
+ for range checkers {
wg.Add(1)
go func() {
defer wg.Done()
@@ -739,7 +747,7 @@ It doesn't return anything.
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
- func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
+ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "set":
newOpt := f.opt

View File

@@ -191,6 +191,33 @@ func TestNewObject(t *testing.T) {
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestNewObjectWithLeadingSlash(t *testing.T) {
f := prepare(t)
o, err := f.NewObject(context.Background(), "/four/under four.txt")
require.NoError(t, err)
assert.Equal(t, "/four/under four.txt", o.Remote())
assert.Equal(t, int64(8+lineEndSize), o.Size())
_, ok := o.(*Object)
assert.True(t, ok)
// Test the time is correct on the object
tObj := o.ModTime(context.Background())
fi, err := os.Stat(filepath.Join(filesPath, "four", "under four.txt"))
require.NoError(t, err)
tFile := fi.ModTime()
fstest.AssertTimeEqualWithPrecision(t, o.Remote(), tFile, tObj, time.Second)
// check object not found
o, err = f.NewObject(context.Background(), "/not found.txt")
assert.Nil(t, o)
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestOpen(t *testing.T) {
m := prepareServer(t)

View File

@@ -0,0 +1,166 @@
// Package api provides functionality for interacting with the iCloud API.
package api
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
const (
baseEndpoint = "https://www.icloud.com"
homeEndpoint = "https://www.icloud.com"
setupEndpoint = "https://setup.icloud.com/setup/ws/1"
authEndpoint = "https://idmsa.apple.com/appleauth/auth"
)
type sessionSave func(*Session)
// Client defines the client configuration
type Client struct {
appleID string
password string
srv *rest.Client
Session *Session
sessionSaveCallback sessionSave
drive *DriveService
}
// New creates a new Client instance with the provided Apple ID, password, trust token, cookies, and session save callback.
//
// Parameters:
// - appleID: the Apple ID of the user.
// - password: the password of the user.
// - trustToken: the trust token for the session.
// - clientID: the client id for the session.
// - cookies: the cookies for the session.
// - sessionSaveCallback: the callback function to save the session.
func New(appleID, password, trustToken string, clientID string, cookies []*http.Cookie, sessionSaveCallback sessionSave) (*Client, error) {
icloud := &Client{
appleID: appleID,
password: password,
srv: rest.NewClient(fshttp.NewClient(context.Background())),
Session: NewSession(),
sessionSaveCallback: sessionSaveCallback,
}
icloud.Session.TrustToken = trustToken
icloud.Session.Cookies = cookies
icloud.Session.ClientID = clientID
return icloud, nil
}
// DriveService returns the DriveService instance associated with the Client.
func (c *Client) DriveService() (*DriveService, error) {
var err error
if c.drive == nil {
c.drive, err = NewDriveService(c)
if err != nil {
return nil, err
}
}
return c.drive, nil
}
// Request makes a request and retries it if the session is invalid.
//
// This function is the main entry point for making requests to the iCloud
// API. If the initial request returns a 401 (Unauthorized), it will try to
// reauthenticate and retry the request.
func (c *Client) Request(ctx context.Context, opts rest.Opts, request any, response any) (resp *http.Response, err error) {
resp, err = c.Session.Request(ctx, opts, request, response)
if err != nil && resp != nil {
// try to reauth
if resp.StatusCode == 401 || resp.StatusCode == 421 {
err = c.Authenticate(ctx)
if err != nil {
return nil, err
}
if c.Session.Requires2FA() {
return nil, errors.New("trust token expired, please reauth")
}
return c.RequestNoReAuth(ctx, opts, request, response)
}
}
return resp, err
}
// RequestNoReAuth makes a request without re-authenticating.
//
// This function is useful when you have a session that is already
// authenticated, but you need to make a request without triggering
// a re-authentication.
func (c *Client) RequestNoReAuth(ctx context.Context, opts rest.Opts, request any, response any) (resp *http.Response, err error) {
// Make the request without re-authenticating
resp, err = c.Session.Request(ctx, opts, request, response)
return resp, err
}
// Authenticate authenticates the client with the iCloud API.
func (c *Client) Authenticate(ctx context.Context) error {
if c.Session.Cookies != nil {
if err := c.Session.ValidateSession(ctx); err == nil {
fs.Debugf("icloud", "Valid session, no need to reauth")
return nil
}
c.Session.Cookies = nil
}
fs.Debugf("icloud", "Authenticating as %s\n", c.appleID)
err := c.Session.SignIn(ctx, c.appleID, c.password)
if err == nil {
err = c.Session.AuthWithToken(ctx)
if err == nil && c.sessionSaveCallback != nil {
c.sessionSaveCallback(c.Session)
}
}
return err
}
// SignIn signs in the client using the provided context and credentials.
func (c *Client) SignIn(ctx context.Context) error {
return c.Session.SignIn(ctx, c.appleID, c.password)
}
// IntoReader marshals the provided values into a JSON encoded reader
func IntoReader(values any) (*bytes.Reader, error) {
m, err := json.Marshal(values)
if err != nil {
return nil, err
}
return bytes.NewReader(m), nil
}
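
Typical use (a sketch; the payload is hypothetical) is to turn a request value into a body for rest.Opts:

// Sketch: marshal an arbitrary payload into a request body reader.
func examplePayload() (*bytes.Reader, error) {
	return IntoReader(map[string]any{"exampleField": true})
}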
// RequestError holds info on a result state, icloud can return a 200 but the result is unknown
type RequestError struct {
Status string
Text string
}
// Error satisfies the error interface.
func (e *RequestError) Error() string {
return fmt.Sprintf("%s: %s", e.Text, e.Status)
}
func newRequestError(Status string, Text string) *RequestError {
return &RequestError{
Status: strings.ToLower(Status),
Text: Text,
}
}
// newRequestErrorf makes a new error from sprintf parameters.
func newRequestErrorf(Status string, Text string, Parameters ...any) *RequestError {
return newRequestError(strings.ToLower(Status), fmt.Sprintf(Text, Parameters...))
}

View File

@@ -0,0 +1,913 @@
package api
import (
"bytes"
"context"
"io"
"mime"
"net/http"
"net/url"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/google/uuid"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
const (
defaultZone = "com.apple.CloudDocs"
statusOk = "OK"
statusEtagConflict = "ETAG_CONFLICT"
)
// DriveService represents an iCloud Drive service.
type DriveService struct {
icloud *Client
RootID string
endpoint string
docsEndpoint string
}
// NewDriveService creates a new DriveService instance.
func NewDriveService(icloud *Client) (*DriveService, error) {
return &DriveService{icloud: icloud, RootID: "FOLDER::com.apple.CloudDocs::root", endpoint: icloud.Session.AccountInfo.Webservices["drivews"].URL, docsEndpoint: icloud.Session.AccountInfo.Webservices["docws"].URL}, nil
}
// GetItemByDriveID retrieves a DriveItem by its Drive ID.
func (d *DriveService) GetItemByDriveID(ctx context.Context, id string, includeChildren bool) (*DriveItem, *http.Response, error) {
items, resp, err := d.GetItemsByDriveID(ctx, []string{id}, includeChildren)
if err != nil {
return nil, resp, err
}
return items[0], resp, err
}
// GetItemsByDriveID retrieves DriveItems by their Drive IDs.
func (d *DriveService) GetItemsByDriveID(ctx context.Context, ids []string, includeChildren bool) ([]*DriveItem, *http.Response, error) {
var err error
_items := []map[string]any{}
for _, id := range ids {
_items = append(_items, map[string]any{
"drivewsid": id,
"partialData": false,
"includeHierarchy": false,
})
}
var body *bytes.Reader
var path string
if !includeChildren {
values := []map[string]any{{
"items": _items,
}}
body, err = IntoReader(values)
if err != nil {
return nil, nil, err
}
path = "/retrieveItemDetails"
} else {
values := _items
body, err = IntoReader(values)
if err != nil {
return nil, nil, err
}
path = "/retrieveItemDetailsInFolders"
}
opts := rest.Opts{
Method: "POST",
Path: path,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items []*DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
return items, resp, err
}
// GetDocByPath retrieves a document by its path.
func (d *DriveService) GetDocByPath(ctx context.Context, path string) (*Document, *http.Response, error) {
values := url.Values{}
values.Set("unified_format", "false")
body, err := IntoReader(path)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/list/lookup_by_path",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
Body: body,
}
var item []*Document
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item[0], resp, err
}
// GetItemByPath retrieves a DriveItem by its path.
func (d *DriveService) GetItemByPath(ctx context.Context, path string) (*DriveItem, *http.Response, error) {
values := url.Values{}
values.Set("unified_format", "true")
body, err := IntoReader(path)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/list/lookup_by_path",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
Body: body,
}
var item []*DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item[0], resp, err
}
// GetDocByItemID retrieves a document by its item ID.
func (d *DriveService) GetDocByItemID(ctx context.Context, id string) (*Document, *http.Response, error) {
values := url.Values{}
values.Set("document_id", id)
values.Set("unified_format", "false") // important
opts := rest.Opts{
Method: "GET",
Path: "/ws/" + defaultZone + "/list/lookup_by_id",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
}
var item *Document
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item, resp, err
}
// GetItemRawByItemID retrieves a DriveItemRaw by its item ID.
func (d *DriveService) GetItemRawByItemID(ctx context.Context, id string) (*DriveItemRaw, *http.Response, error) {
opts := rest.Opts{
Method: "GET",
Path: "/v1/item/" + id,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
}
var item *DriveItemRaw
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
return item, resp, err
}
// GetItemsInFolder retrieves a list of DriveItemRaw objects in a folder with the given ID.
func (d *DriveService) GetItemsInFolder(ctx context.Context, id string, limit int64) ([]*DriveItemRaw, *http.Response, error) {
values := url.Values{}
values.Set("limit", strconv.FormatInt(limit, 10))
opts := rest.Opts{
Method: "GET",
Path: "/v1/enumerate/" + id,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Parameters: values,
}
items := struct {
Items []*DriveItemRaw `json:"drive_item"`
}{}
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
return items.Items, resp, err
}
// GetDownloadURLByDriveID retrieves the download URL for a file in the DriveService.
func (d *DriveService) GetDownloadURLByDriveID(ctx context.Context, id string) (string, *http.Response, error) {
_, zone, docid := DeconstructDriveID(id)
values := url.Values{}
values.Set("document_id", docid)
if zone == "" {
zone = defaultZone
}
opts := rest.Opts{
Method: "GET",
Path: "/ws/" + zone + "/download/by_id",
Parameters: values,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
}
var filer *FileRequest
resp, err := d.icloud.Request(ctx, opts, nil, &filer)
if err != nil {
return "", resp, err
}
var url string
if filer.DataToken != nil {
url = filer.DataToken.URL
} else {
url = filer.PackageToken.URL
}
return url, resp, err
}
// DownloadFile downloads a file from the given URL using the provided options.
func (d *DriveService) DownloadFile(ctx context.Context, url string, opt []fs.OpenOption) (*http.Response, error) {
opts := &rest.Opts{
Method: "GET",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: url,
Options: opt,
}
resp, err := d.icloud.srv.Call(ctx, opts)
if err != nil {
// icloud has some weird http codes
if resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
}
return resp, err
}
return d.icloud.srv.Call(ctx, opts)
}
// MoveItemToTrashByItemID moves an item to the trash based on the item ID.
func (d *DriveService) MoveItemToTrashByItemID(ctx context.Context, id, etag string, force bool) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.MoveItemToTrashByID(ctx, doc.DriveID(), etag, force)
}
// MoveItemToTrashByID moves an item to the trash based on the item ID.
func (d *DriveService) MoveItemToTrashByID(ctx context.Context, drivewsid, etag string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"items": []map[string]any{{
"drivewsid": drivewsid,
"etag": etag,
"clientId": drivewsid,
}}}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/moveItemsToTrash",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
item := struct {
Items []*DriveItem `json:"items"`
}{}
resp, err := d.icloud.Request(ctx, opts, nil, &item)
if err != nil {
return nil, resp, err
}
if item.Items[0].Status != statusOk {
// rerun with latest etag
if force && item.Items[0].Status == statusEtagConflict {
return d.MoveItemToTrashByID(ctx, drivewsid, item.Items[0].Etag, false)
}
err = newRequestError(item.Items[0].Status, "unknown request status")
}
return item.Items[0], resp, err
}
// CreateNewFolderByItemID creates a new folder by item ID.
func (d *DriveService) CreateNewFolderByItemID(ctx context.Context, id, name string) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.CreateNewFolderByDriveID(ctx, doc.DriveID(), name)
}
// CreateNewFolderByDriveID creates a new folder by its Drive ID.
func (d *DriveService) CreateNewFolderByDriveID(ctx context.Context, drivewsid, name string) (*DriveItem, *http.Response, error) {
values := map[string]any{
"destinationDrivewsId": drivewsid,
"folders": []map[string]any{{
"clientId": "FOLDER::UNKNOWN_ZONE::TempId-" + uuid.New().String(),
"name": name,
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/createFolders",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var fResp *CreateFoldersResponse
resp, err := d.icloud.Request(ctx, opts, nil, &fResp)
if err != nil {
return nil, resp, err
}
status := fResp.Folders[0].Status
if status != statusOk {
err = newRequestError(status, "unknown request status")
}
return fResp.Folders[0], resp, err
}
// RenameItemByItemID renames a DriveItem by its item ID.
func (d *DriveService) RenameItemByItemID(ctx context.Context, id, etag, name string, force bool) (*DriveItem, *http.Response, error) {
doc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
return d.RenameItemByDriveID(ctx, doc.DriveID(), doc.Etag, name, force)
}
// RenameItemByDriveID renames a DriveItem by its drive ID.
func (d *DriveService) RenameItemByDriveID(ctx context.Context, id, etag, name string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"items": []map[string]any{{
"drivewsid": id,
"name": name,
"etag": etag,
// "extension": split[1],
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/renameItems",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items *DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
status := items.Items[0].Status
if status != statusOk {
// rerun with latest etag
if force && status == statusEtagConflict {
return d.RenameItemByDriveID(ctx, id, items.Items[0].Etag, name, false)
}
err = newRequestErrorf(status, "unknown inner status for: %s %s", opts.Method, resp.Request.URL)
}
return items.Items[0], resp, err
}
// MoveItemByItemID moves an item by its item ID to a destination item ID.
func (d *DriveService) MoveItemByItemID(ctx context.Context, id, etag, dstID string, force bool) (*DriveItem, *http.Response, error) {
docSrc, resp, err := d.GetDocByItemID(ctx, id)
if err != nil {
return nil, resp, err
}
docDst, resp, err := d.GetDocByItemID(ctx, dstID)
if err != nil {
return nil, resp, err
}
return d.MoveItemByDriveID(ctx, docSrc.DriveID(), docSrc.Etag, docDst.DriveID(), force)
}
// MoveItemByDocID moves an item by its doc ID.
// func (d *DriveService) MoveItemByDocID(ctx context.Context, srcDocID, srcEtag, dstDocID string, force bool) (*DriveItem, *http.Response, error) {
// return d.MoveItemByDriveID(ctx, srcDocID, srcEtag, docDst.DriveID(), force)
// }
// MoveItemByDriveID moves an item by its drive ID.
func (d *DriveService) MoveItemByDriveID(ctx context.Context, id, etag, dstID string, force bool) (*DriveItem, *http.Response, error) {
values := map[string]any{
"destinationDrivewsId": dstID,
"items": []map[string]any{{
"drivewsid": id,
"etag": etag,
"clientId": id,
}},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/moveItems",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.endpoint,
Body: body,
}
var items *DriveItem
resp, err := d.icloud.Request(ctx, opts, nil, &items)
if err != nil {
return nil, resp, err
}
status := items.Items[0].Status
if status != statusOk {
// rerun with latest etag
if force && status == statusEtagConflict {
return d.MoveItemByDriveID(ctx, id, items.Items[0].Etag, dstID, false)
}
err = newRequestErrorf(status, "unknown inner status for: %s %s", opts.Method, resp.Request.URL)
}
return items.Items[0], resp, err
}
// CopyDocByItemID copies a document by its item ID.
func (d *DriveService) CopyDocByItemID(ctx context.Context, itemID string) (*DriveItemRaw, *http.Response, error) {
// putting name in info doesn't work. extension does work so assume this is a bug in the endpoint
values := map[string]any{
"info_to_update": map[string]any{},
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/v1/item/copy/" + itemID,
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var info *DriveItemRaw
resp, err := d.icloud.Request(ctx, opts, nil, &info)
if err != nil {
return nil, resp, err
}
return info, resp, err
}
// CreateUpload creates a URL for an upload.
func (d *DriveService) CreateUpload(ctx context.Context, size int64, name string) (*UploadResponse, *http.Response, error) {
// first we need to request an upload url
values := map[string]any{
"filename": name,
"type": "FILE",
"size": strconv.FormatInt(size, 10),
"content_type": GetContentTypeForFile(name),
}
body, err := IntoReader(values)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/upload/web",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var responseInfo []*UploadResponse
resp, err := d.icloud.Request(ctx, opts, nil, &responseInfo)
if err != nil {
return nil, resp, err
}
return responseInfo[0], resp, err
}
// Upload uploads a file to the given url
func (d *DriveService) Upload(ctx context.Context, in io.Reader, size int64, name, uploadURL string) (*SingleFileResponse, *http.Response, error) {
// TODO: implement multipart upload
opts := rest.Opts{
Method: "POST",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: uploadURL,
Body: in,
ContentLength: &size,
ContentType: GetContentTypeForFile(name),
// MultipartContentName: "files",
MultipartFileName: name,
}
var singleFileResponse *SingleFileResponse
resp, err := d.icloud.Request(ctx, opts, nil, &singleFileResponse)
if err != nil {
return nil, resp, err
}
return singleFileResponse, resp, err
}
// UpdateFile updates a file in the DriveService.
//
// ctx: the context.Context object for the request.
// r: a pointer to the UpdateFileInfo struct containing the information for the file update.
// Returns a pointer to the DriveItem struct representing the updated file, the http.Response object, and an error if any.
func (d *DriveService) UpdateFile(ctx context.Context, r *UpdateFileInfo) (*DriveItem, *http.Response, error) {
body, err := IntoReader(r)
if err != nil {
return nil, nil, err
}
opts := rest.Opts{
Method: "POST",
Path: "/ws/" + defaultZone + "/update/documents",
ExtraHeaders: d.icloud.Session.GetHeaders(map[string]string{}),
RootURL: d.docsEndpoint,
Body: body,
}
var responseInfo *DocumentUpdateResponse
resp, err := d.icloud.Request(ctx, opts, nil, &responseInfo)
if err != nil {
return nil, resp, err
}
doc := responseInfo.Results[0].Document
item := DriveItem{
Drivewsid: "FILE::com.apple.CloudDocs::" + doc.DocumentID,
Docwsid: doc.DocumentID,
Itemid: doc.ItemID,
Etag: doc.Etag,
ParentID: doc.ParentID,
DateModified: time.Unix(r.Mtime, 0),
DateCreated: time.Unix(r.Mtime, 0),
Type: doc.Type,
Name: doc.Name,
Size: doc.Size,
}
return &item, resp, err
}
// UpdateFileInfo represents the information for an update to a file in the DriveService.
type UpdateFileInfo struct {
AllowConflict bool `json:"allow_conflict"`
Btime int64 `json:"btime"`
Command string `json:"command"`
CreateShortGUID bool `json:"create_short_guid"`
Data struct {
Receipt string `json:"receipt,omitempty"`
ReferenceSignature string `json:"reference_signature,omitempty"`
Signature string `json:"signature,omitempty"`
Size int64 `json:"size,omitempty"`
WrappingKey string `json:"wrapping_key,omitempty"`
} `json:"data,omitempty"`
DocumentID string `json:"document_id"`
FileFlags FileFlags `json:"file_flags"`
Mtime int64 `json:"mtime"`
Path struct {
Path string `json:"path"`
StartingDocumentID string `json:"starting_document_id"`
} `json:"path"`
}
// FileFlags defines the file flags for a document.
type FileFlags struct {
IsExecutable bool `json:"is_executable"`
IsHidden bool `json:"is_hidden"`
IsWritable bool `json:"is_writable"`
}
// NewUpdateFileInfo creates a new UpdateFileInfo object with default values.
//
// Returns an UpdateFileInfo object.
func NewUpdateFileInfo() UpdateFileInfo {
return UpdateFileInfo{
Command: "add_file",
CreateShortGUID: true,
AllowConflict: true,
FileFlags: FileFlags{
IsExecutable: true,
IsHidden: false,
IsWritable: true,
},
}
}
// DriveItemRaw is a raw drive item.
// not sure what to call this, but there seems to be a "unified" and non-"unified" drive item response. This is the non-unified one.
type DriveItemRaw struct {
ItemID string `json:"item_id"`
ItemInfo *DriveItemRawInfo `json:"item_info"`
}
// SplitName splits the name of a DriveItemRaw into its name and extension.
//
// It returns the name and extension as separate strings. If the name ends with a dot,
// it means there is no extension, so an empty string is returned for the extension.
// If the name does not contain a dot, there is likewise no extension.
func (d *DriveItemRaw) SplitName() (string, string) {
name := d.ItemInfo.Name
// ends with a dot, no extension
if strings.HasSuffix(name, ".") {
return name, ""
}
lastInd := strings.LastIndex(name, ".")
if lastInd == -1 {
return name, ""
}
return name[:lastInd], name[lastInd+1:]
}
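
Behavior of SplitName, for reference (derived from the code above):

// "report.txt" -> ("report", "txt")
// "archive."   -> ("archive.", "")  // trailing dot: treated as having no extension
// "README"     -> ("README", "")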
// ModTime returns the modification time of the DriveItemRaw.
//
// It parses the ModifiedAt field of the ItemInfo struct and converts it to a time.Time value.
// If the parsing fails, it returns the zero value of time.Time.
// The returned time.Time value represents the modification time of the DriveItemRaw.
func (d *DriveItemRaw) ModTime() time.Time {
i, err := strconv.ParseInt(d.ItemInfo.ModifiedAt, 10, 64)
if err != nil {
return time.Time{}
}
return time.UnixMilli(i)
}
// CreatedTime returns the creation time of the DriveItemRaw.
//
// It parses the CreatedAt field of the ItemInfo struct and converts it to a time.Time value.
// If the parsing fails, it returns the zero value of time.Time.
// The returned time.Time value represents the creation time of the DriveItemRaw.
func (d *DriveItemRaw) CreatedTime() time.Time {
i, err := strconv.ParseInt(d.ItemInfo.CreatedAt, 10, 64)
if err != nil {
return time.Time{}
}
return time.UnixMilli(i)
}
// DriveItemRawInfo is the raw information about a drive item.
type DriveItemRawInfo struct {
Name string `json:"name"`
// Extension is absolutely borked on endpoints so don't use it.
Extension string `json:"extension"`
Size int64 `json:"size,string"`
Type string `json:"type"`
Version string `json:"version"`
ModifiedAt string `json:"modified_at"`
CreatedAt string `json:"created_at"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
}
// IntoDriveItem converts a DriveItemRaw into a DriveItem.
//
// It takes no parameters.
// It returns a pointer to a DriveItem.
func (d *DriveItemRaw) IntoDriveItem() *DriveItem {
name, extension := d.SplitName()
return &DriveItem{
Itemid: d.ItemID,
Name: name,
Extension: extension,
Type: d.ItemInfo.Type,
Etag: d.ItemInfo.Version,
DateModified: d.ModTime(),
DateCreated: d.CreatedTime(),
Size: d.ItemInfo.Size,
Urls: d.ItemInfo.Urls,
}
}
// DocumentUpdateResponse is the response of a document update request.
type DocumentUpdateResponse struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
Results []struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
OperationID any `json:"operation_id"`
Document *Document `json:"document"`
} `json:"results"`
}
// Document represents a document on iCloud.
type Document struct {
Status struct {
StatusCode int `json:"status_code"`
ErrorMessage string `json:"error_message"`
} `json:"status"`
DocumentID string `json:"document_id"`
ItemID string `json:"item_id"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
Etag string `json:"etag"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Type string `json:"type"`
Deleted bool `json:"deleted"`
Mtime int64 `json:"mtime"`
LastEditorName string `json:"last_editor_name"`
Data DocumentData `json:"data"`
Size int64 `json:"size"`
Btime int64 `json:"btime"`
Zone string `json:"zone"`
FileFlags struct {
IsExecutable bool `json:"is_executable"`
IsWritable bool `json:"is_writable"`
IsHidden bool `json:"is_hidden"`
} `json:"file_flags"`
LastOpenedTime int64 `json:"lastOpenedTime"`
RestorePath any `json:"restorePath"`
HasChainedParent bool `json:"hasChainedParent"`
}
// DriveID returns the drive ID of the Document.
func (d *Document) DriveID() string {
if d.Zone == "" {
d.Zone = defaultZone
}
return d.Type + "::" + d.Zone + "::" + d.DocumentID
}
// DocumentData represents the data of a document.
type DocumentData struct {
Signature string `json:"signature"`
Owner string `json:"owner"`
Size int64 `json:"size"`
ReferenceSignature string `json:"reference_signature"`
WrappingKey string `json:"wrapping_key"`
PcsInfo string `json:"pcsInfo"`
}
// SingleFileResponse is the response of a single file request.
type SingleFileResponse struct {
SingleFile *SingleFileInfo `json:"singleFile"`
}
// SingleFileInfo represents the information of a single file.
type SingleFileInfo struct {
ReferenceSignature string `json:"referenceChecksum"`
Size int64 `json:"size"`
Signature string `json:"fileChecksum"`
WrappingKey string `json:"wrappingKey"`
Receipt string `json:"receipt"`
}
// UploadResponse is the response of an upload request.
type UploadResponse struct {
URL string `json:"url"`
DocumentID string `json:"document_id"`
}
// FileRequestToken represents the token of a file request.
type FileRequestToken struct {
URL string `json:"url"`
Token string `json:"token"`
Signature string `json:"signature"`
WrappingKey string `json:"wrapping_key"`
ReferenceSignature string `json:"reference_signature"`
}
// FileRequest represents the request of a file.
type FileRequest struct {
DocumentID string `json:"document_id"`
ItemID string `json:"item_id"`
OwnerDsid int64 `json:"owner_dsid"`
DataToken *FileRequestToken `json:"data_token,omitempty"`
PackageToken *FileRequestToken `json:"package_token,omitempty"`
DoubleEtag string `json:"double_etag"`
}
// CreateFoldersResponse is the response of a create folders request.
type CreateFoldersResponse struct {
Folders []*DriveItem `json:"folders"`
}
// DriveItem represents an item on iCloud.
type DriveItem struct {
DateCreated time.Time `json:"dateCreated"`
Drivewsid string `json:"drivewsid"`
Docwsid string `json:"docwsid"`
Itemid string `json:"item_id"`
Zone string `json:"zone"`
Name string `json:"name"`
ParentID string `json:"parentId"`
Hierarchy []DriveItem `json:"hierarchy"`
Etag string `json:"etag"`
Type string `json:"type"`
AssetQuota int64 `json:"assetQuota"`
FileCount int64 `json:"fileCount"`
ShareCount int64 `json:"shareCount"`
ShareAliasCount int64 `json:"shareAliasCount"`
DirectChildrenCount int64 `json:"directChildrenCount"`
Items []*DriveItem `json:"items"`
NumberOfItems int64 `json:"numberOfItems"`
Status string `json:"status"`
Extension string `json:"extension,omitempty"`
DateModified time.Time `json:"dateModified,omitempty"`
DateChanged time.Time `json:"dateChanged,omitempty"`
Size int64 `json:"size,omitempty"`
LastOpenTime time.Time `json:"lastOpenTime,omitempty"`
Urls struct {
URLDownload string `json:"url_download"`
} `json:"urls"`
}
// IsFolder returns true if the item is a folder.
func (d *DriveItem) IsFolder() bool {
return d.Type == "FOLDER" || d.Type == "APP_CONTAINER" || d.Type == "APP_LIBRARY"
}
// DownloadURL returns the download URL of the item.
func (d *DriveItem) DownloadURL() string {
return d.Urls.URLDownload
}
// FullName returns the full name of the item.
// name + extension
func (d *DriveItem) FullName() string {
if d.Extension != "" {
return d.Name + "." + d.Extension
}
return d.Name
}
// GetDocIDFromDriveID returns the DocumentID from the drive ID.
func GetDocIDFromDriveID(id string) string {
split := strings.Split(id, "::")
return split[len(split)-1]
}
// DeconstructDriveID returns the document type, zone, and document ID from the drive ID.
func DeconstructDriveID(id string) (docType, zone, docid string) {
split := strings.Split(id, "::")
if len(split) < 3 {
return "", "", id
}
return split[0], split[1], split[2]
}
// ConstructDriveID constructs a drive ID from the given components.
func ConstructDriveID(id string, zone string, t string) string {
return strings.Join([]string{t, zone, id}, "::")
}
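A sketch of the round trip between these helpers, again with hypothetical values:
docType, zone, docid := DeconstructDriveID("FILE::com.apple.CloudDocs::ABC123")
// docType == "FILE", zone == "com.apple.CloudDocs", docid == "ABC123"
_ = ConstructDriveID(docid, zone, docType) // back to "FILE::com.apple.CloudDocs::ABC123"
// IDs with fewer than three "::"-separated parts deconstruct to ("", "", id).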
// GetContentTypeForFile detects content type for given file name.
func GetContentTypeForFile(name string) string {
// detect MIME type by looking at the filename only
mimeType := mime.TypeByExtension(filepath.Ext(name))
if mimeType == "" {
// api requires a mime type passed in
mimeType = "text/plain"
}
return strings.Split(mimeType, ";")[0]
}
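For example, with Go's built-in MIME table (exact results can vary slightly with the host's mime.types):
_ = GetContentTypeForFile("photo.jpg") // "image/jpeg"
_ = GetContentTypeForFile("page.html") // "text/html" (the "; charset=utf-8" parameter is stripped)
_ = GetContentTypeForFile("no-known-ext") // "text/plain" fallback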


@@ -0,0 +1,406 @@
package api
import (
"context"
"fmt"
"maps"
"net/http"
"net/url"
"slices"
"strings"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
// Session represents an iCloud session
type Session struct {
SessionToken string `json:"session_token"`
Scnt string `json:"scnt"`
SessionID string `json:"session_id"`
AccountCountry string `json:"account_country"`
TrustToken string `json:"trust_token"`
ClientID string `json:"client_id"`
Cookies []*http.Cookie `json:"cookies"`
AccountInfo AccountInfo `json:"account_info"`
srv *rest.Client `json:"-"`
}
// String returns the session as a string
// func (s *Session) String() string {
// jsession, _ := json.Marshal(s)
// return string(jsession)
// }
// Request makes a request
func (s *Session) Request(ctx context.Context, opts rest.Opts, request any, response any) (*http.Response, error) {
resp, err := s.srv.CallJSON(ctx, &opts, &request, &response)
if err != nil {
return resp, err
}
if val := resp.Header.Get("X-Apple-ID-Account-Country"); val != "" {
s.AccountCountry = val
}
if val := resp.Header.Get("X-Apple-ID-Session-Id"); val != "" {
s.SessionID = val
}
if val := resp.Header.Get("X-Apple-Session-Token"); val != "" {
s.SessionToken = val
}
if val := resp.Header.Get("X-Apple-TwoSV-Trust-Token"); val != "" {
s.TrustToken = val
}
if val := resp.Header.Get("scnt"); val != "" {
s.Scnt = val
}
return resp, nil
}
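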
// Requires2FA returns true if the session requires 2FA
func (s *Session) Requires2FA() bool {
return s.AccountInfo.DsInfo.HsaVersion == 2 && s.AccountInfo.HsaChallengeRequired
}
// SignIn signs in the session
func (s *Session) SignIn(ctx context.Context, appleID, password string) error {
trustTokens := []string{}
if s.TrustToken != "" {
trustTokens = []string{s.TrustToken}
}
values := map[string]any{
"accountName": appleID,
"password": password,
"rememberMe": true,
"trustTokens": trustTokens,
}
body, err := IntoReader(values)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: "/signin",
Parameters: url.Values{},
ExtraHeaders: s.GetAuthHeaders(map[string]string{}),
RootURL: authEndpoint,
IgnoreStatus: true, // need to handle 409 for hsa2
NoResponse: true,
Body: body,
}
opts.Parameters.Set("isRememberMeEnabled", "true")
_, err = s.Request(ctx, opts, nil, nil)
return err
}
// AuthWithToken authenticates the session
func (s *Session) AuthWithToken(ctx context.Context) error {
values := map[string]any{
"accountCountryCode": s.AccountCountry,
"dsWebAuthToken": s.SessionToken,
"extended_login": true,
"trustToken": s.TrustToken,
}
body, err := IntoReader(values)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: "/accountLogin",
ExtraHeaders: GetCommonHeaders(map[string]string{}),
RootURL: setupEndpoint,
Body: body,
}
resp, err := s.Request(ctx, opts, nil, &s.AccountInfo)
if err == nil {
s.Cookies = resp.Cookies()
}
return err
}
// Validate2FACode validates the 2FA code
func (s *Session) Validate2FACode(ctx context.Context, code string) error {
values := map[string]any{"securityCode": map[string]string{"code": code}}
body, err := IntoReader(values)
if err != nil {
return err
}
headers := s.GetAuthHeaders(map[string]string{})
headers["scnt"] = s.Scnt
headers["X-Apple-ID-Session-Id"] = s.SessionID
opts := rest.Opts{
Method: "POST",
Path: "/verify/trusteddevice/securitycode",
ExtraHeaders: headers,
RootURL: authEndpoint,
Body: body,
NoResponse: true,
}
_, err = s.Request(ctx, opts, nil, nil)
if err == nil {
if err := s.TrustSession(ctx); err != nil {
return err
}
return nil
}
return fmt.Errorf("validate2FACode failed: %w", err)
}
// TrustSession trusts the session
func (s *Session) TrustSession(ctx context.Context) error {
headers := s.GetAuthHeaders(map[string]string{})
headers["scnt"] = s.Scnt
headers["X-Apple-ID-Session-Id"] = s.SessionID
opts := rest.Opts{
Method: "GET",
Path: "/2sv/trust",
ExtraHeaders: headers,
RootURL: authEndpoint,
NoResponse: true,
ContentLength: common.Int64(0),
}
_, err := s.Request(ctx, opts, nil, nil)
if err != nil {
return fmt.Errorf("trustSession failed: %w", err)
}
return s.AuthWithToken(ctx)
}
// ValidateSession validates the session
func (s *Session) ValidateSession(ctx context.Context) error {
opts := rest.Opts{
Method: "POST",
Path: "/validate",
ExtraHeaders: s.GetHeaders(map[string]string{}),
RootURL: setupEndpoint,
ContentLength: common.Int64(0),
}
_, err := s.Request(ctx, opts, nil, &s.AccountInfo)
if err != nil {
return fmt.Errorf("validateSession failed: %w", err)
}
return nil
}
// GetAuthHeaders returns the authentication headers for the session.
//
// It takes an `overwrite` map[string]string parameter which allows
// overwriting the default headers. It returns a map[string]string.
func (s *Session) GetAuthHeaders(overwrite map[string]string) map[string]string {
headers := map[string]string{
"Accept": "application/json",
"Content-Type": "application/json",
"X-Apple-OAuth-Client-Id": s.ClientID,
"X-Apple-OAuth-Client-Type": "firstPartyAuth",
"X-Apple-OAuth-Redirect-URI": "https://www.icloud.com",
"X-Apple-OAuth-Require-Grant-Code": "true",
"X-Apple-OAuth-Response-Mode": "web_message",
"X-Apple-OAuth-Response-Type": "code",
"X-Apple-OAuth-State": s.ClientID,
"X-Apple-Widget-Key": s.ClientID,
"Origin": homeEndpoint,
"Referer": fmt.Sprintf("%s/", homeEndpoint),
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0",
}
maps.Copy(headers, overwrite)
return headers
}
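A usage sketch: entries in overwrite win over the defaults because maps.Copy is applied last (s is a hypothetical *Session):
headers := s.GetAuthHeaders(map[string]string{"Accept": "text/html"})
// headers["Accept"] == "text/html"; all other defaults are unchanged.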
// GetHeaders gets the authentication headers required for a request
func (s *Session) GetHeaders(overwrite map[string]string) map[string]string {
headers := GetCommonHeaders(map[string]string{})
headers["Cookie"] = s.GetCookieString()
maps.Copy(headers, overwrite)
return headers
}
// GetCookieString returns the cookie header string for the session.
func (s *Session) GetCookieString() string {
cookieHeader := ""
// we only care about name and value.
for _, cookie := range s.Cookies {
cookieHeader = cookieHeader + cookie.Name + "=" + cookie.Value + ";"
}
return cookieHeader
}
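For example, a session holding cookies a=1 and b=2 serializes them as a bare header string:
s := &Session{Cookies: []*http.Cookie{{Name: "a", Value: "1"}, {Name: "b", Value: "2"}}}
// s.GetCookieString() == "a=1;b=2;" (attributes like Path and Domain are dropped)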
// GetCommonHeaders generates common HTTP headers with optional overwrite.
func GetCommonHeaders(overwrite map[string]string) map[string]string {
headers := map[string]string{
"Content-Type": "application/json",
"Origin": baseEndpoint,
"Referer": fmt.Sprintf("%s/", baseEndpoint),
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:103.0) Gecko/20100101 Firefox/103.0",
}
maps.Copy(headers, overwrite)
return headers
}
// MergeCookies merges two slices of http.Cookies, ensuring no duplicates are added.
func MergeCookies(left []*http.Cookie, right []*http.Cookie) ([]*http.Cookie, error) {
var hashes []string
for _, cookie := range right {
hashes = append(hashes, cookie.Raw)
}
for _, cookie := range left {
if !slices.Contains(hashes, cookie.Raw) {
right = append(right, cookie)
}
}
return right, nil
}
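A sketch of the precedence (oldCookies and freshCookies are hypothetical): cookies in the right-hand slice are kept as-is, and left-hand cookies are appended only when their Raw string is not already present.
merged, _ := MergeCookies(oldCookies, freshCookies) // freshCookies win on duplicates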
// GetCookiesForDomain filters the provided cookies based on the domain of the given URL.
func GetCookiesForDomain(url *url.URL, cookies []*http.Cookie) ([]*http.Cookie, error) {
var domainCookies []*http.Cookie
for _, cookie := range cookies {
if strings.HasSuffix(url.Host, cookie.Domain) {
domainCookies = append(domainCookies, cookie)
}
}
return domainCookies, nil
}
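For example, a cookie with Domain "icloud.com" is kept for a request to https://www.icloud.com, because the host ends with that suffix, while a cookie scoped to "apple.com" is dropped:
u, _ := url.Parse("https://www.icloud.com/drive")
kept, _ := GetCookiesForDomain(u, cookies) // cookies is a hypothetical []*http.Cookie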
// NewSession creates a new Session instance with default values.
func NewSession() *Session {
session := &Session{}
session.srv = rest.NewClient(fshttp.NewClient(context.Background())).SetRoot(baseEndpoint)
//session.ClientID = "auth-" + uuid.New().String()
return session
}
// AccountInfo represents the account info
type AccountInfo struct {
DsInfo *ValidateDataDsInfo `json:"dsInfo"`
HasMinimumDeviceForPhotosWeb bool `json:"hasMinimumDeviceForPhotosWeb"`
ICDPEnabled bool `json:"iCDPEnabled"`
Webservices map[string]*webService `json:"webservices"`
PcsEnabled bool `json:"pcsEnabled"`
TermsUpdateNeeded bool `json:"termsUpdateNeeded"`
ConfigBag struct {
Urls struct {
AccountCreateUI string `json:"accountCreateUI"`
AccountLoginUI string `json:"accountLoginUI"`
AccountLogin string `json:"accountLogin"`
AccountRepairUI string `json:"accountRepairUI"`
DownloadICloudTerms string `json:"downloadICloudTerms"`
RepairDone string `json:"repairDone"`
AccountAuthorizeUI string `json:"accountAuthorizeUI"`
VettingURLForEmail string `json:"vettingUrlForEmail"`
AccountCreate string `json:"accountCreate"`
GetICloudTerms string `json:"getICloudTerms"`
VettingURLForPhone string `json:"vettingUrlForPhone"`
} `json:"urls"`
AccountCreateEnabled bool `json:"accountCreateEnabled"`
} `json:"configBag"`
HsaTrustedBrowser bool `json:"hsaTrustedBrowser"`
AppsOrder []string `json:"appsOrder"`
Version int `json:"version"`
IsExtendedLogin bool `json:"isExtendedLogin"`
PcsServiceIdentitiesIncluded bool `json:"pcsServiceIdentitiesIncluded"`
IsRepairNeeded bool `json:"isRepairNeeded"`
HsaChallengeRequired bool `json:"hsaChallengeRequired"`
RequestInfo struct {
Country string `json:"country"`
TimeZone string `json:"timeZone"`
Region string `json:"region"`
} `json:"requestInfo"`
PcsDeleted bool `json:"pcsDeleted"`
ICloudInfo struct {
SafariBookmarksHasMigratedToCloudKit bool `json:"SafariBookmarksHasMigratedToCloudKit"`
} `json:"iCloudInfo"`
Apps map[string]*ValidateDataApp `json:"apps"`
}
// ValidateDataDsInfo represents validation info
type ValidateDataDsInfo struct {
HsaVersion int `json:"hsaVersion"`
LastName string `json:"lastName"`
ICDPEnabled bool `json:"iCDPEnabled"`
TantorMigrated bool `json:"tantorMigrated"`
Dsid string `json:"dsid"`
HsaEnabled bool `json:"hsaEnabled"`
IsHideMyEmailSubscriptionActive bool `json:"isHideMyEmailSubscriptionActive"`
IroncadeMigrated bool `json:"ironcadeMigrated"`
Locale string `json:"locale"`
BrZoneConsolidated bool `json:"brZoneConsolidated"`
ICDRSCapableDeviceList string `json:"ICDRSCapableDeviceList"`
IsManagedAppleID bool `json:"isManagedAppleID"`
IsCustomDomainsFeatureAvailable bool `json:"isCustomDomainsFeatureAvailable"`
IsHideMyEmailFeatureAvailable bool `json:"isHideMyEmailFeatureAvailable"`
ContinueOnDeviceEligibleDeviceInfo []string `json:"ContinueOnDeviceEligibleDeviceInfo"`
Gilligvited bool `json:"gilligvited"`
AppleIDAliases []any `json:"appleIdAliases"`
UbiquityEOLEnabled bool `json:"ubiquityEOLEnabled"`
IsPaidDeveloper bool `json:"isPaidDeveloper"`
CountryCode string `json:"countryCode"`
NotificationID string `json:"notificationId"`
PrimaryEmailVerified bool `json:"primaryEmailVerified"`
ADsID string `json:"aDsID"`
Locked bool `json:"locked"`
ICDRSCapableDeviceCount int `json:"ICDRSCapableDeviceCount"`
HasICloudQualifyingDevice bool `json:"hasICloudQualifyingDevice"`
PrimaryEmail string `json:"primaryEmail"`
AppleIDEntries []struct {
IsPrimary bool `json:"isPrimary"`
Type string `json:"type"`
Value string `json:"value"`
} `json:"appleIdEntries"`
GilliganEnabled bool `json:"gilligan-enabled"`
IsWebAccessAllowed bool `json:"isWebAccessAllowed"`
FullName string `json:"fullName"`
MailFlags struct {
IsThreadingAvailable bool `json:"isThreadingAvailable"`
IsSearchV2Provisioned bool `json:"isSearchV2Provisioned"`
SCKMail bool `json:"sCKMail"`
IsMppSupportedInCurrentCountry bool `json:"isMppSupportedInCurrentCountry"`
} `json:"mailFlags"`
LanguageCode string `json:"languageCode"`
AppleID string `json:"appleId"`
HasUnreleasedOS bool `json:"hasUnreleasedOS"`
AnalyticsOptInStatus bool `json:"analyticsOptInStatus"`
FirstName string `json:"firstName"`
ICloudAppleIDAlias string `json:"iCloudAppleIdAlias"`
NotesMigrated bool `json:"notesMigrated"`
BeneficiaryInfo struct {
IsBeneficiary bool `json:"isBeneficiary"`
} `json:"beneficiaryInfo"`
HasPaymentInfo bool `json:"hasPaymentInfo"`
PcsDelet bool `json:"pcsDelet"`
AppleIDAlias string `json:"appleIdAlias"`
BrMigrated bool `json:"brMigrated"`
StatusCode int `json:"statusCode"`
FamilyEligible bool `json:"familyEligible"`
}
// ValidateDataApp represents an app
type ValidateDataApp struct {
CanLaunchWithOneFactor bool `json:"canLaunchWithOneFactor"`
IsQualifiedForBeta bool `json:"isQualifiedForBeta"`
}
// WebService represents a web service
type webService struct {
PcsRequired bool `json:"pcsRequired"`
URL string `json:"url"`
UploadURL string `json:"uploadUrl"`
Status string `json:"status"`
}

File diff suppressed because it is too large


@@ -0,0 +1,18 @@
//go:build !plan9 && !solaris
package iclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/iclouddrive"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestICloudDrive:",
NilObject: (*iclouddrive.Object)(nil),
})
}


@@ -0,0 +1,7 @@
// Build for iclouddrive for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || solaris
// Package iclouddrive implements the iCloud Drive backend
package iclouddrive


@@ -75,7 +75,7 @@ type MoveFolderParam struct {
DestinationPath string `validate:"nonzero" json:"destinationPath"`
}
// JobIDResponse respresents response struct with JobID for folder operations
// JobIDResponse represents response struct with JobID for folder operations
type JobIDResponse struct {
JobID string `json:"jobId"`
}


@@ -56,7 +56,7 @@ func (ik *ImageKit) URL(params URLParam) (string, error) {
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path = path + expires
path += expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))


@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"net/http"
"slices"
"strconv"
"time"
@@ -142,12 +143,7 @@ func shouldRetryHTTP(resp *http.Response, retryErrorCodes []int) bool {
if resp == nil {
return false
}
for _, e := range retryErrorCodes {
if resp.StatusCode == e {
return true
}
}
return false
return slices.Contains(retryErrorCodes, resp.StatusCode)
}
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {


@@ -13,6 +13,7 @@ import (
"net/url"
"path"
"regexp"
"slices"
"strconv"
"strings"
"time"
@@ -151,6 +152,19 @@ Owner is able to add custom keys. Metadata feature grabs all the keys including
Help: "Host of InternetArchive Frontend.\n\nLeave blank for default value.",
Default: "https://archive.org",
Advanced: true,
}, {
Name: "item_metadata",
Help: `Metadata to be set on the IA item; this is different from file-level metadata that can be set using --metadata-set.
Format is key=value and the 'x-archive-meta-' prefix is automatically added.`,
Default: []string{},
Hide: fs.OptionHideConfigurator,
Advanced: true,
}, {
Name: "item_derive",
Help: `Whether to trigger derive on the IA item or not. If set to false, the item will not be derived by IA upon upload.
The derive process produces a number of secondary files from an upload to make it more usable on the web.
Setting this to false is useful for uploading files that are already in a format that IA can display, or to reduce the burden on IA's infrastructure.`,
Default: true,
}, {
Name: "disable_checksum",
Help: `Don't ask the server to test against MD5 checksum calculated by rclone.
@@ -187,7 +201,7 @@ Only enable if you need to be guaranteed to be reflected after write operations.
const iaItemMaxSize int64 = 1099511627776
// metadata keys that are not writeable
var roMetadataKey = map[string]interface{}{
var roMetadataKey = map[string]any{
// do not add mtime here, it's a documented exception
"name": nil, "source": nil, "size": nil, "md5": nil,
"crc32": nil, "sha1": nil, "format": nil, "old_version": nil,
@@ -201,6 +215,8 @@ type Options struct {
Endpoint string `config:"endpoint"`
FrontEndpoint string `config:"front_endpoint"`
DisableChecksum bool `config:"disable_checksum"`
ItemMetadata []string `config:"item_metadata"`
ItemDerive bool `config:"item_derive"`
WaitArchive fs.Duration `config:"wait_archive"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -790,17 +806,23 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
"x-amz-filemeta-rclone-update-track": updateTracker,
// we add some more headers for intuitive actions
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-meta-mediatype": "data", // mark media type of the uploading file as "data"
"x-archive-queue-derive": "0", // skip derivation process (e.g. encoding to smaller files, OCR on PDFs)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
"x-amz-auto-make-bucket": "1", // create an item if does not exist, do nothing if already
"x-archive-auto-make-bucket": "1", // same as above in IAS3 original way
"x-archive-keep-old-version": "0", // do not keep old versions (a.k.a. trashes in other clouds)
"x-archive-cascade-delete": "1", // enable "cascate delete" (delete all derived files in addition to the file itself)
}
if size >= 0 {
headers["Content-Length"] = fmt.Sprintf("%d", size)
headers["x-archive-size-hint"] = fmt.Sprintf("%d", size)
}
// This is IA's ITEM metadata, not file metadata
headers, err = o.appendItemMetadataHeaders(headers, o.fs.opt)
if err != nil {
return err
}
var mdata fs.Metadata
mdata, err = fs.GetMetadataOptions(ctx, o.fs, src, options)
if err == nil && mdata != nil {
@@ -863,6 +885,51 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
func (o *Object) appendItemMetadataHeaders(headers map[string]string, options Options) (newHeaders map[string]string, err error) {
metadataCounter := make(map[string]int)
metadataValues := make(map[string][]string)
// First pass: count occurrences and collect values
for _, v := range options.ItemMetadata {
parts := strings.SplitN(v, "=", 2)
if len(parts) != 2 {
return newHeaders, errors.New("item metadata should be in the form key=value")
}
key, value := parts[0], parts[1]
metadataCounter[key]++
metadataValues[key] = append(metadataValues[key], value)
}
// Second pass: add headers with appropriate prefixes
for key, count := range metadataCounter {
if count == 1 {
// Only one occurrence, use x-archive-meta-
headers[fmt.Sprintf("x-archive-meta-%s", key)] = metadataValues[key][0]
} else {
// Multiple occurrences, use x-archive-meta01-, x-archive-meta02-, etc.
for i, value := range metadataValues[key] {
headers[fmt.Sprintf("x-archive-meta%02d-%s", i+1, key)] = value
}
}
}
if o.fs.opt.ItemDerive {
headers["x-archive-queue-derive"] = "1"
} else {
headers["x-archive-queue-derive"] = "0"
}
fs.Debugf(o, "Setting IA item derive: %t", o.fs.opt.ItemDerive)
for k, v := range headers {
if strings.HasPrefix(k, "x-archive-meta") {
fs.Debugf(o, "Setting IA item metadata: %s=%s", k, v)
}
}
return headers, nil
}
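A sketch of the resulting headers for hypothetical option values: with item_metadata set to ["title=My Item", "subject=maps", "subject=history"], the single-occurrence key gets the plain prefix and the repeated key gets numbered prefixes:
x-archive-meta-title: My Item
x-archive-meta01-subject: maps
x-archive-meta02-subject: history
x-archive-queue-derive is then set to "1" or "0" according to item_derive.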
// Remove an object
func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()
@@ -925,10 +992,8 @@ func (o *Object) Metadata(ctx context.Context) (m fs.Metadata, err error) {
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
if resp != nil {
for _, e := range retryErrorCodes {
if resp.StatusCode == e {
return true, err
}
if slices.Contains(retryErrorCodes, resp.StatusCode) {
return true, err
}
}
// Ok, not an awserr, check for generic failure conditions
@@ -1081,13 +1146,7 @@ func (f *Fs) waitFileUpload(ctx context.Context, reqPath, tracker string, newSiz
}
fileTrackers, _ := listOrString(iaFile.UpdateTrack)
trackerMatch := false
for _, v := range fileTrackers {
if v == tracker {
trackerMatch = true
break
}
}
trackerMatch := slices.Contains(fileTrackers, tracker)
if !trackerMatch {
continue
}


@@ -70,7 +70,7 @@ func (t *Rfc3339Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// MarshalJSON turns a Rfc3339Time into JSON
func (t *Rfc3339Time) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf("\"%s\"", t.String())), nil
return fmt.Appendf(nil, "\"%s\"", t.String()), nil
}
// LoginToken is struct representing the login token generated in the WebUI
@@ -165,25 +165,25 @@ type DeviceRegistrationResponse struct {
// CustomerInfo provides general information about the account. Required for finding the correct internal username.
type CustomerInfo struct {
Username string `json:"username"`
Email string `json:"email"`
Name string `json:"name"`
CountryCode string `json:"country_code"`
LanguageCode string `json:"language_code"`
CustomerGroupCode string `json:"customer_group_code"`
BrandCode string `json:"brand_code"`
AccountType string `json:"account_type"`
SubscriptionType string `json:"subscription_type"`
Usage int64 `json:"usage"`
Quota int64 `json:"quota"`
BusinessUsage int64 `json:"business_usage"`
BusinessQuota int64 `json:"business_quota"`
WriteLocked bool `json:"write_locked"`
ReadLocked bool `json:"read_locked"`
LockedCause interface{} `json:"locked_cause"`
WebHash string `json:"web_hash"`
AndroidHash string `json:"android_hash"`
IOSHash string `json:"ios_hash"`
Username string `json:"username"`
Email string `json:"email"`
Name string `json:"name"`
CountryCode string `json:"country_code"`
LanguageCode string `json:"language_code"`
CustomerGroupCode string `json:"customer_group_code"`
BrandCode string `json:"brand_code"`
AccountType string `json:"account_type"`
SubscriptionType string `json:"subscription_type"`
Usage int64 `json:"usage"`
Quota int64 `json:"quota"`
BusinessUsage int64 `json:"business_usage"`
BusinessQuota int64 `json:"business_quota"`
WriteLocked bool `json:"write_locked"`
ReadLocked bool `json:"read_locked"`
LockedCause any `json:"locked_cause"`
WebHash string `json:"web_hash"`
AndroidHash string `json:"android_hash"`
IOSHash string `json:"ios_hash"`
}
// TrashResponse is returned when emptying the Trash


@@ -31,7 +31,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
@@ -277,11 +277,9 @@ machines.`)
m.Set(configClientID, teliaseCloudClientID)
m.Set(configTokenURL, teliaseCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
ClientID: teliaseCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -292,11 +290,9 @@ machines.`)
m.Set(configClientID, telianoCloudClientID)
m.Set(configTokenURL, telianoCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
ClientID: telianoCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -307,11 +303,9 @@ machines.`)
m.Set(configClientID, tele2CloudClientID)
m.Set(configTokenURL, tele2CloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
ClientID: tele2CloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -322,11 +316,9 @@ machines.`)
m.Set(configClientID, onlimeCloudClientID)
m.Set(configTokenURL, onlimeCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
},
OAuth2Config: &oauthutil.Config{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
ClientID: onlimeCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -924,19 +916,17 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
}
baseClient := fshttp.NewClient(ctx)
oauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
},
oauthConfig := &oauthutil.Config{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
}
if ver == configVersion {
oauthConfig.ClientID = defaultClientID
// if custom endpoints are set use them else stick with defaults
if tokenURL, ok := m.Get(configTokenURL); ok {
oauthConfig.Endpoint.TokenURL = tokenURL
oauthConfig.TokenURL = tokenURL
// jottacloud is weird. we need to use the tokenURL as authURL
oauthConfig.Endpoint.AuthURL = tokenURL
oauthConfig.AuthURL = tokenURL
}
} else if ver == legacyConfigVersion {
clientID, ok := m.Get(configClientID)
@@ -950,8 +940,8 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
oauthConfig.Endpoint.TokenURL = legacyTokenURL
oauthConfig.Endpoint.AuthURL = legacyTokenURL
oauthConfig.TokenURL = legacyTokenURL
oauthConfig.AuthURL = legacyTokenURL
// add the request filter to fix token refresh
if do, ok := baseClient.Transport.(interface {
@@ -1274,7 +1264,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
Parameters: url.Values{},
}
opts.Parameters.Set("mode", "liststream")
list := walk.NewListRHelper(callback)
list := list.NewHelper(callback)
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -1555,7 +1545,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
if err == nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime


@@ -59,7 +59,7 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
//"utime" - read-only
//"content-type" - read-only
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, "text/html", metadata)
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, false, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()


@@ -193,7 +193,7 @@ func (o *Object) set(e *entity) {
// Call linkbox with the query in opts and return result
//
// This will be checked for error and an error will be returned if Status != 1
func getUnmarshaledResponse(ctx context.Context, f *Fs, opts *rest.Opts, result interface{}) error {
func getUnmarshaledResponse(ctx context.Context, f *Fs, opts *rest.Opts, result any) error {
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, opts, nil, &result)
return f.shouldRetry(ctx, resp, err)


@@ -5,18 +5,18 @@ package local
import (
"context"
"fmt"
"syscall"
"unsafe"
"github.com/rclone/rclone/fs"
"golang.org/x/sys/windows"
)
var getFreeDiskSpace = syscall.NewLazyDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW")
var getFreeDiskSpace = windows.NewLazySystemDLL("kernel32.dll").NewProc("GetDiskFreeSpaceExW")
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var available, total, free int64
root, e := syscall.UTF16PtrFromString(f.root)
root, e := windows.UTF16PtrFromString(f.root)
if e != nil {
return nil, fmt.Errorf("failed to read disk usage: %w", e)
}
@@ -26,7 +26,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
uintptr(unsafe.Pointer(&total)), // lpTotalNumberOfBytes
uintptr(unsafe.Pointer(&free)), // lpTotalNumberOfFreeBytes
)
if e1 != syscall.Errno(0) {
if e1 != windows.Errno(0) {
return nil, fmt.Errorf("failed to read disk usage: %w", e1)
}
usage := &fs.Usage{


@@ -0,0 +1,93 @@
//go:build darwin && cgo
// Package local provides a filesystem interface
package local
import (
"context"
"fmt"
"path/filepath"
"runtime"
"github.com/go-darwin/apfs"
"github.com/rclone/rclone/fs"
)
// Copy src to this remote using server-side copy operations.
//
// # This is stored with the remote path given
//
// # It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if runtime.GOOS != "darwin" || f.opt.NoClone {
return nil, fs.ErrorCantCopy
}
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't clone - not same remote type")
return nil, fs.ErrorCantCopy
}
if f.opt.TranslateSymlinks && srcObj.translatedLink { // in --links mode, use cloning only for regular files
return nil, fs.ErrorCantCopy
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, fmt.Errorf("copy: failed to read metadata: %w", err)
}
// Create destination
dstObj := f.newObject(remote)
err = dstObj.mkdirAll()
if err != nil {
return nil, err
}
srcPath := srcObj.path
if f.opt.FollowSymlinks { // in --copy-links mode, find the real file being pointed to and pass that in instead
srcPath, err = filepath.EvalSymlinks(srcPath)
if err != nil {
return nil, err
}
}
err = Clone(srcPath, f.localPath(remote))
if err != nil {
return nil, err
}
// Set metadata if --metadata is in use
if meta != nil {
err = dstObj.writeMetadata(meta)
if err != nil {
return nil, fmt.Errorf("copy: failed to set metadata: %w", err)
}
}
return f.NewObject(ctx, remote)
}
// Clone uses APFS cloning if possible, otherwise falls back to copying (with full metadata preservation)
// note that this is closely related to unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW) but not 100% identical
// https://opensource.apple.com/source/copyfile/copyfile-173.40.2/copyfile.c.auto.html
func Clone(src, dst string) error {
state := apfs.CopyFileStateAlloc()
defer func() {
if err := apfs.CopyFileStateFree(state); err != nil {
fs.Errorf(dst, "free state error: %v", err)
}
}()
cloned, err := apfs.CopyFile(src, dst, state, apfs.COPYFILE_CLONE)
fs.Debugf(dst, "isCloned: %v, error: %v", cloned, err)
return err
}
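A usage sketch with hypothetical paths: on an APFS volume the destination initially shares storage with the source, copy-on-write style, until either file is modified.
err := Clone("/Volumes/Data/src.bin", "/Volumes/Data/dst.bin")
// the debug log reports isCloned: true when APFS performed a true clone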
// Check the interfaces are satisfied
var (
_ fs.Copier = &Fs{}
)

backend/local/lchmod.go (new file, +16 lines)

@@ -0,0 +1,16 @@
//go:build windows || plan9 || js || linux
package local
import "os"
const haveLChmod = false
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// Can't do this safely on this OS - chmoding a symlink always
// changes the destination.
return nil
}


@@ -0,0 +1,41 @@
//go:build !windows && !plan9 && !js && !linux
package local
import (
"os"
"syscall"
"golang.org/x/sys/unix"
)
const haveLChmod = true
// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
//
// Borrowed from the syscall source since it isn't public.
func syscallMode(i os.FileMode) (o uint32) {
o |= uint32(i.Perm())
if i&os.ModeSetuid != 0 {
o |= syscall.S_ISUID
}
if i&os.ModeSetgid != 0 {
o |= syscall.S_ISGID
}
if i&os.ModeSticky != 0 {
o |= syscall.S_ISVTX
}
return o
}
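For example, a setuid rwxr-xr-x mode maps as follows (S_ISUID is 0o4000 on these platforms):
m := syscallMode(os.FileMode(0o755) | os.ModeSetuid)
// m == 0o4755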
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
// and returns ENOTSUP if you try, so we don't support this on linux
if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
return &os.PathError{Op: "lChmod", Path: name, Err: e}
}
return nil
}


@@ -1,4 +1,4 @@
//go:build windows || plan9 || js
//go:build plan9 || js
package local

Some files were not shown because too many files have changed in this diff