mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits

587 Commits

Author SHA1 Message Date
Nick Craig-Wood
be2c44f5af rc: config/unlock: rename parameter to configPassword accept old as well
We accidentally added a non `camelCase` parameter to the rc
(`config_password`) - this fixes it (to `configPassword`) but accepts
the old name too as it has been in a release.
2025-11-20 16:09:25 +00:00
Nick Craig-Wood
1db0f51be4 rc: correct names of parameters in job/list output
These were accidentally committed as snake_case whereas we use
camelCase elsewhere.

This corrects the issue before the first release in v1.72.0
2025-11-20 15:47:51 +00:00
hunshcn
6440052fbd s3: fix single file copying behavior with low permission - Fixes #8975 2025-11-18 17:01:07 +00:00
Nick Craig-Wood
4afb59bc93 docs: onedrive: note how to backup up any user's data 2025-11-18 16:21:06 +00:00
Nick Craig-Wood
0343670375 Add Dominik Sander to contributors 2025-11-18 16:21:06 +00:00
Nick Craig-Wood
5b2b372ba9 Add jijamik to contributors 2025-11-18 16:21:06 +00:00
Dominik Sander
08c35ae741 box: allow to configure with config file contents
Especially when using rclone via rc it is helpful to configure the box
backend using the contents of the config file instead of having to
upload the file to the server that is running rclone.
2025-11-18 16:09:06 +00:00
Oleg Kunitsyn
ecea0cd6f9 http: add basic metadata and provide it via serve
Co-authored-by: dougal <147946567+roucc@users.noreply.github.com>
2025-11-17 16:52:30 +00:00
jijamik
80e6389a50 ftp: fix transfers from servers that return 250 ok messages 2025-11-14 21:01:25 +00:00
dougal
a3ccf4d8a0 b2: allow individual old versions to be deleted with --b2-versions - fixes #1626 2025-11-14 17:04:45 +00:00
Nick Craig-Wood
31df39d356 build: fix tls: failed to verify certificate: x509: negative serial number
Before Go 1.23, x509.ParseCertificate accepted certificates with
negative serial numbers. Rejecting these certificates caused a small
number of users to see this error.

From Go 1.23, debug flags can be added to go.mod, so this change adds a
debug flag to ensure negative serial numbers are still allowed, since
this is a spec violation, not a security issue.

See: https://forum.rclone.org/t/ssl-validation-broken-between-v1-69-1-latest-version/
2025-11-14 12:51:17 +00:00
Nick Craig-Wood
03d3811f7f Add Sean Turner to contributors 2025-11-14 12:51:17 +00:00
Sean Turner
83b83f7768 s3: add support for --upload-header If-Match and If-None-Match
The If-Match and If-None-Match headers were being dropped rather than
applied to the PutObject request to S3. These headers make requests
conditional, which allows AWS S3 bucket policies to prevent object
overwriting.
2025-11-13 13:50:47 +00:00
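
For readers unfamiliar with conditional writes, the sketch below shows the HTTP semantics these headers carry: a PUT with `If-None-Match: *` only succeeds if no object exists at the key yet. It is illustrative only, the URL is a placeholder and request signing is omitted, and it says nothing about how rclone wires the flag named in the commit internally.

```go
// Illustrative sketch of conditional PUT semantics. bucketURL is a
// placeholder and no request signing is done, so this is not a working
// S3 client, just a demonstration of the header's meaning.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	const bucketURL = "https://example-bucket.s3.amazonaws.com/object-key" // placeholder
	req, err := http.NewRequest(http.MethodPut, bucketURL, bytes.NewReader([]byte("data")))
	if err != nil {
		panic(err)
	}
	// Conditional write: the server rejects the PUT with 412 Precondition
	// Failed if an object already exists at this key.
	req.Header.Set("If-None-Match", "*")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```
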
n4n5
71138082ea fix: comment typos 2025-11-13 13:47:40 +00:00
Nick Craig-Wood
cf94824426 dropbox: fix error moving just created objects - fixes #8881
The bisync tests have been failing as Dropbox is failing to move just
created objects. This seems to be caused by an eventual consistency
problem so this attempts to fix it by retrying the specific error.
2025-11-12 15:54:01 +00:00
hunshcn
16971ab6b9 s3: add --s3-use-data-integrity-protections to fix BadDigest error in Alibaba, Tencent
Since aws/aws-sdk-go-v2#2960, aws-sdk-go-v2 has changed its default integrity
behavior. This breaks some S3 providers (e.g. Tencent, Alibaba).

https://github.com/aws/aws-sdk-go-v2/discussions/2960

This introduces `use_data_integrity_protections` option to disable it.

It defaults to false and is set to true for AWS.

Fixes #8432
Fixes #8483
2025-11-12 15:15:13 +00:00
Nick Craig-Wood
9f75af38e3 rc: make sure fatal errors don't crash rclone - fixes #8955
Before this change, if any code called fs.Fatal(f) then it would stop
rclone as designed. However this is not appropriate when using the RC
API - we want the error returned to the user.

This change turns the fs.Fatal(f) call into a panic which is caught by
the RC API handler and returned to the user as a 500 error.
2025-11-12 12:22:04 +00:00
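
A minimal sketch of the pattern described above, assuming nothing about rclone's actual rc server internals: a panic raised inside a handler is recovered at the HTTP layer and reported as a 500 instead of terminating the process.

```go
// Sketch of a recover-to-500 middleware. The route and panic message are
// placeholders, not rclone's real rc handler.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func recoverMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if p := recover(); p != nil {
				// Report the panic as a 500 instead of letting it kill the process.
				http.Error(w, fmt.Sprintf("internal error: %v", p), http.StatusInternalServerError)
			}
		}()
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/rc/fail", func(w http.ResponseWriter, r *http.Request) {
		panic("fatal error converted to panic") // stands in for the former fs.Fatal call
	})
	log.Fatal(http.ListenAndServe("localhost:8080", recoverMiddleware(mux)))
}
```
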
Nick Craig-Wood
b5e4d39b05 pacer: factor call stack searching into its own package 2025-11-12 12:22:04 +00:00
Nick Craig-Wood
4d19afdbbf rc: add osVersion, osKernel and osArch to core/version
This makes it return the same info as `rclone version`
2025-11-12 11:16:48 +00:00
Nick Craig-Wood
2ebfedce85 build: update all dependencies 2025-11-12 10:36:30 +00:00
dependabot[bot]
1a4b85b6e7 build(deps): bump golangci/golangci-lint-action from 8 to 9
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 8 to 9.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v8...v9)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '9'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 17:10:10 +01:00
Nick Craig-Wood
5052b80298 webdav: fix out of memory with sharepoint-ntlm when uploading large file
Fixes #7469
Fixes #8959
See: https://forum.rclone.org/t/huge-memory-usage-10gb-when-upload-a-single-large-file-16gb-in-webdav/43312/
2025-11-10 16:57:18 +00:00
Nick Craig-Wood
fada870ff0 testserver: fix owncloud test server startup 2025-11-10 16:57:18 +00:00
Nick Craig-Wood
38f456c527 Add aliaj1 to contributors 2025-11-10 16:57:18 +00:00
aliaj1
e6d82ac6ee ulozto: Fix downloads returning HTML error page
The uloz.to backend was failing to download files, instead returning
an HTML page with a "Slow download" message. This was caused by
recent changes in the uloz.to API.

This commit fixes the issue by making the following changes to the
download process:

1.  The `hash` received from the download link API is now appended as a
    query parameter to the download URL.
2.  The download is now performed using the authenticated `rest` client
    to ensure premium access is recognized.
3.  The `DeviceID` is now generated dynamically for each download request
    to avoid potential rate-limiting of a static ID.
2025-11-10 15:56:06 +00:00
Nick Craig-Wood
4c74ded85a docs: adjust spectra logic example endpoint name 2025-11-10 13:47:33 +00:00
kapitainsky
43848f5c42 docs: update version introduced to v1.70 in doi docs
Fixes #8948
2025-11-08 21:33:38 +00:00
Nick Craig-Wood
fb895f69a1 testserver: fix HDFS server after run.bash adjustments 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
b204090325 testserver: remind developers about allocating a port 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
1821d86911 testserver: make run.bash variables less likely to collide with scripts 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
7ce67347fb testserver: fix seafile servers messing up _connect string 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
0228bbff39 testserver: make sure TestWebdavInfiniteScale uses an assigned port 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
6890bd7738 testserver: make sure we don't overwrite the NAME variable set
This fixes some oddities stopping and starting servers
2025-11-05 17:56:28 +00:00
Nick Craig-Wood
bc5d1dfaf3 Add n4n5 to contributors 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
c33aeb705f Add Alex to contributors 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
12cf8e71df Add Copilot to contributors 2025-11-05 17:56:28 +00:00
albertony
ec5ddb68a8 docs: update contributing docs regarding backend documentation 2025-11-05 14:06:09 +01:00
n4n5
8335596207 rc: add jobs stats 2025-11-05 12:36:39 +00:00
albertony
4f56ab2341 docs: fix alignment of some of the icons in the storage system dropdown 2025-11-04 23:00:46 +01:00
albertony
8b5b7ecfd9 docs: run markdownlint on _index.md 2025-11-04 23:00:46 +01:00
albertony
2aa2cfc70e docs: fix markdownlint issues and other styling improvements in backend command docs 2025-11-04 23:00:46 +01:00
albertony
7265b2331f docs: fix markdownlint issue md046/code-block-style in backend command docs 2025-11-04 23:00:46 +01:00
albertony
0dd56ff2a3 docs: fix missing punctuation in backend commands short description 2025-11-04 23:00:46 +01:00
albertony
2443cb284e docs: fix markdownlint issues in backend command generated output 2025-11-04 23:00:46 +01:00
albertony
0f3aa17fb6 build: improve backend docs autogenerated marker line
Replace custom rem hugo shortcode template with HTML comment. HTML comments are now
allowed in Hugo without enabling unsafe HTML parsing.

Improve the text in the comment: remove unnecessary quoting, and avoid the impression that
make backenddocs has to be run and the results committed, since we have a lint check which
will then report an error, because we want to prevent manual changes in autogenerated sections.

Disable the markdownlint rule line-length on the autogenerated marker line.

Make the autogenerated marker detection a bit more robust.

See #8942 for more details.
2025-11-04 21:56:01 +01:00
Alex
8f74e7d331 backend/compress: add zstd compression
Added support for reading and writing zstd-compressed archives in seekable format
using "github.com/klauspost/compress/zstd" and
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".

Bumped Go version from 1.24.0 to 1.24.4 due to requirements of
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".
2025-11-04 14:50:56 +00:00
Copilot
ee92673e1b sftp: fix zombie SSH processes with --sftp-ssh - Fixes #8929
Before this fix using --sftp-ssh with the sftp backend could leave
zombie processes.

This patch fixes the problem that sshClientExternal.session was never
assigned, so Wait() always returned nil without waiting for the SSH
process to exit. This caused zombie processes because the process was
never reaped.

It also ensures that Wait() is only called once on each process.

I gave this issue to Copilot to fix as an experiment. It went off in
the wrong direction to start with and fixed something which wasn't the
problem but still needed fixing. With a bit of a nudge it fixed the
correct problem too.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2025-11-04 12:09:47 +00:00
Nick Craig-Wood
55655efabf testserver: fix tests failing due to stopped servers
Before this fix there were various issues with the test server
framework, most noticeably servers stopping when they shouldn't,
causing timeouts. This was caused by the reference counting in the Go
code not being engineered to work across multiple processes, so it was
not working properly at all.

This fix moves the reference counting logic to the start scripts and
in turn removes that logic from the Go code. This means that the
reference counting is now global and works correctly over multiple
processes.
2025-11-04 11:45:15 +00:00
dougal
700e6e11fd docs: add new integration tester site link 2025-11-03 17:15:53 +00:00
Nick Craig-Wood
edb47076b5 docs: update the method for running integration tests 2025-11-03 16:52:33 +00:00
Nick Craig-Wood
e5fd97b8d2 bisync: fix failing tests
In this commit

d240d044c3 check: improved reporting of differences in sizes and contents

We adjusted the sense of operations.CheckIdenticalDownload to return
true if files are identical as is implied by the name, but we forgot
to invert the logic in the bisync DownloadCheckFn which caused lots of
tests to fail.
2025-11-03 16:52:33 +00:00
Nick Craig-Wood
bc57a31859 Add SublimePeace to contributors 2025-11-03 16:52:33 +00:00
dougal
4adb48fbbc b2: fix "expected a FileSseMode but found: ''"
94deb6bd6f b2: Add Server-Side encryption support

From the commit above, without setting SSE, rclone would send invalid
SSE requests with empty strings. This is because omitempty only works
with struct pointers, not structs.
2025-11-03 16:42:40 +00:00
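
The underlying encoding/json behaviour is easy to demonstrate: omitempty skips a nil pointer but never a zero-valued struct. The type names below are illustrative, not B2's real API types.

```go
// Demonstrates why omitempty needs a struct pointer: a zero-valued struct
// field is still marshalled, while a nil pointer is omitted.
package main

import (
	"encoding/json"
	"fmt"
)

type SSE struct {
	Mode string `json:"mode"`
}

type byValue struct {
	SSE SSE `json:"serverSideEncryption,omitempty"` // never omitted: structs have no "empty" value
}

type byPointer struct {
	SSE *SSE `json:"serverSideEncryption,omitempty"` // omitted when nil
}

func main() {
	v, _ := json.Marshal(byValue{})
	p, _ := json.Marshal(byPointer{})
	fmt.Println(string(v)) // {"serverSideEncryption":{"mode":""}}
	fmt.Println(string(p)) // {}
}
```
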
SublimePeace
c41d0f7d3a docs: s3: clarify multipart uploads memory usage
Clarified phrasing to avoid confusion. Fixed a typo.

Fixes #8525
2025-11-03 16:35:33 +00:00
Nick Craig-Wood
d34ba258b0 test_all: fix detection of running servers
Before this change stopping servers was unreliable, especially the
non-docker-based ones. This caused timeouts and connection errors in the
tests.
2025-11-03 14:44:39 +00:00
Nick Craig-Wood
05d54a95b8 accounting: add AccountReadN for use in cluster 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
f16b39165b fs: add NonDefaultRC for discovering options in use
This enables us to send rc messages with the config in use.
2025-11-03 14:44:39 +00:00
Nick Craig-Wood
86edb26fd5 fs: move tests into correct files 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
203e1bdbf9 rc: add NewJobFromBytes for reading jobs from non HTTP transactions 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
a522c056fe rc: add job/batch for sending batches of rc commands to run concurrently 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
31adc7d89f Add Ted Robertson to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
c559ab7c58 Add Joseph Brownlee to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
80610ef774 Add fries1234 to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
a6c943a1ad Add Fawzib Rojas to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
53e0dbb5cb Add Riaz Arbi to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
3a0000526b Add Lukas Krejci to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
1fa6941e26 Add Adam Dinwoodie to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
9bb7ad31e6 Add dulanting to contributors 2025-11-03 14:43:56 +00:00
Ted Robertson
da8c6847ad docs: add AppArmor restrictions to rclone mount 2025-11-01 19:28:14 +00:00
albertony
d240d044c3 check: improved reporting of differences in sizes and contents
fixes rclone check --download not showing differing files
2025-11-01 19:23:01 +00:00
iTrooz
1056ace80f mega: implement 2FA login 2025-11-01 19:03:49 +00:00
albertony
a06c1c0cb7 docs: change to light code block style to better match overall theme 2025-11-01 18:55:11 +01:00
albertony
7672c3d586 docs: fix various markdownlint issues 2025-11-01 18:54:19 +01:00
albertony
f361cdf1cb build: restrict the markdown languages to use for code blocks 2025-11-01 15:52:41 +01:00
albertony
26d3c71bab docs: fix various markdownlint issues 2025-11-01 15:33:38 +01:00
albertony
c76396f03c docs: fix markdownlint issue md013/line-length 2025-11-01 15:33:38 +01:00
albertony
059ad47336 docs: change syntax hightlighting for command examples from sh to console 2025-11-01 15:33:38 +01:00
Joseph Brownlee
becc068d36 docs: Clarify remote naming convention
Co-authored-by: dougal <147946567+roucc@users.noreply.github.com>
Co-authored-by: dougal <dougal.craigwood@gmail.com>
2025-10-31 15:42:38 +00:00
fries1234
94deb6bd6f b2: Add Server-Side encryption support
This commit adds SSE-C (Server-Side Encryption - Customer) support to
the B2 native backend. The server uses a customer provided AES-256 key
to encrypt the files when you upload them to the bucket, and then it
discards your key from the server's RAM after you're done uploading.

The option names and descriptions are based on the S3 backend
implementation, as the way S3 and B2 do SSE-C is quite similar.

Fixes #6585
2025-10-31 15:33:31 +00:00
Fawzib Rojas
cc09978b79 Added rclone archive command to create and read archive files
Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2025-10-30 16:20:48 +00:00
Fawzib Rojas
409dc75328 accounting: add io.Seeker/io.ReaderAt support to accounting.Account
This is a pass-through implementation which will fail if the
underlying reader does not have the interface.
2025-10-30 16:20:48 +00:00
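
A minimal sketch of the pass-through idea, with illustrative names rather than the real accounting.Account: Seek and ReadAt are forwarded when the wrapped reader supports them and return an error otherwise.

```go
// Sketch of a pass-through Seeker/ReaderAt wrapper. accountedReader is an
// illustrative name, not rclone's accounting.Account.
package main

import (
	"errors"
	"io"
	"strings"
)

type accountedReader struct {
	in io.Reader
}

func (a *accountedReader) Read(p []byte) (int, error) { return a.in.Read(p) }

// Seek forwards to the underlying reader, or fails if it cannot seek.
func (a *accountedReader) Seek(offset int64, whence int) (int64, error) {
	s, ok := a.in.(io.Seeker)
	if !ok {
		return 0, errors.New("underlying reader does not support Seek")
	}
	return s.Seek(offset, whence)
}

// ReadAt forwards to the underlying reader, or fails if it lacks ReadAt.
func (a *accountedReader) ReadAt(p []byte, off int64) (int, error) {
	r, ok := a.in.(io.ReaderAt)
	if !ok {
		return 0, errors.New("underlying reader does not support ReadAt")
	}
	return r.ReadAt(p, off)
}

func main() {
	r := &accountedReader{in: strings.NewReader("hello")}
	buf := make([]byte, 3)
	n, _ := r.ReadAt(buf, 2) // strings.Reader implements io.ReaderAt
	_ = n
}
```
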
Nick Craig-Wood
fb30c5f8dd operations: add ReadAt method to ReOpen 2025-10-30 16:20:48 +00:00
Nick Craig-Wood
203df6cc58 fstest: add ResetRun to allow the remote to be reset in tests 2025-10-30 16:20:48 +00:00
Riaz Arbi
459e10d599 gcs: fix --gcs-storage-class to work with server side copy for objects 2025-10-30 15:20:16 +00:00
Lukas Krejci
1ba4fd1d83 ulozto: implement the about functionality 2025-10-30 15:06:37 +00:00
Adam Dinwoodie
77553b8dd5 local: add --skip-specials to ignore special files
Give users a way to explicitly acknowledge that pipes, sockets and block
devices are to be ignored without warnings.

This follows the precedent set in commit 6152bab28 (local: add
--skip-links to suppress symlink warnings, 2017-07-21) for ignoring
warnings about symlinks.
2025-10-29 17:00:25 +00:00
Andrew Ruthven
5420dbbe38 swift: Report disk usage in segment containers
Large objects are split and stored in a _segments container in Swift.
These should be included when reporting on the space used.

Fixes #8857
2025-10-29 16:55:53 +00:00
dulanting
87b71dd6b9 refactor: use strings.Builder to improve performance 2025-10-29 16:48:34 +00:00
Nick Craig-Wood
a0bcdc2638 Archive backend to read archives on cloud storage.
Initial support with Zip and Squashfs archives.

Fixes #8633
See #2815
2025-10-28 11:05:41 +00:00
Nick Craig-Wood
e42fa9f92d vfs: remove unecessary import in tests to fix import cycles 2025-10-28 11:05:41 +00:00
Nick Craig-Wood
4586104dc7 Add Lakshmi-Surekha to contributors 2025-10-28 11:05:35 +00:00
Nick Craig-Wood
c4c360a285 Add Andrew Gunnerson to contributors 2025-10-28 11:05:35 +00:00
Nick Craig-Wood
ce4860b9b6 Add divinity76 to contributors 2025-10-28 11:05:35 +00:00
Lakshmi-Surekha
ed87f82d21 build: enable support for aix/ppc64
* Adds "aix/ppc64" to the cross-compile target list.
* Includes AIX in the build tag of "metadata_other.go".
* Excludes AIX from the main ncdu build tags.
* Marks AIX as an unsupported platform for ncdu.
* Excludes AIX from the fallback redirect implementation.
* Excludes AIX from unix build tags to avoid undefined unix.WNOHANG.
2025-10-27 13:34:58 +00:00
Andrew Gunnerson
0a82929b94 rc: fix name of "queue" JSON key in docs for vfs/cache
Signed-off-by: Andrew Gunnerson <accounts+github@chiller3.com>
2025-10-27 13:28:24 +00:00
divinity76
1e8ee3b813 cmount: windows: improve error message on missing winfsp 2025-10-27 13:22:04 +00:00
Nick Craig-Wood
eaab3f5271 docs: add the Provider to the options examples in the backend docs 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
25b05f1210 Add Aneesh Agrawal to contributors 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
2dc1b07863 Add viocha to contributors 2025-10-26 10:25:12 +00:00
Nick Craig-Wood
49acacec2e Add reddaisyy to contributors 2025-10-26 10:25:12 +00:00
Aneesh Agrawal
70d2fe6568 fs: remove unnecessary Seek call on log file
We were seeing a (non-fatal) error in our logs:
```
Failed to seek log file to end: seek /proc/1/fd/1: illegal seek
```

Because we open the log file with O_APPEND,
we don't need to manually seek to the end.
As https://pkg.go.dev/os#File.Seek also confirms
that the behavior of `Seek` is not specified
if the file has been opened with O_APPEND,
remove the `Seek` call.
2025-10-25 19:38:57 +01:00
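
The behaviour relied on here is simple to show: a file opened with O_APPEND always writes at the end, so no explicit Seek is needed (and Seek is unspecified for such files, per the os.File.Seek docs). The file name below is a placeholder.

```go
// Sketch of appending to a log file without seeking. The file name is a
// placeholder, not rclone's actual log path handling.
package main

import (
	"log"
	"os"
)

func main() {
	f, err := os.OpenFile("rclone.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o640)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// No f.Seek(0, io.SeekEnd) required: O_APPEND guarantees writes go to the end.
	log.SetOutput(f)
	log.Println("appended without seeking")
}
```
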
dougal
f28c83c6de s3: make it easier to add new S3 providers
Before this change, you had to modify a fragile data-structure
containing all providers. This often led to things being out of order,
duplicates and conflicts whilst merging. As well as the changes for
one provider being in different places across the file.

After this change, new providers are defined in an easy to edit YAML file,
one per provider.

The config output has been tested before and after for all providers
and any changes are cosmetic only.
2025-10-25 19:37:29 +01:00
dependabot[bot]
2cf44e584c build(deps): bump actions/upload-artifact from 4 to 5
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-25 12:09:16 +02:00
dependabot[bot]
bba9027817 build(deps): bump actions/download-artifact from 5 to 6
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-25 12:09:06 +02:00
dougal
51859af8d9 ftp: fix SOCK proxy support - fixes #8892 (#8918) 2025-10-24 14:50:13 +01:00
viocha
4f60f8915d webdav: Add Access-Control-Max-Age header for CORS preflight caching - fixes #5078 2025-10-24 10:19:22 +01:00
hunshcn
6663eb346f webdav: use SpaceSepList to parse bearer token command 2025-10-23 19:56:37 +01:00
reddaisyy
1d0e1ea0b5 refactor: use strings.Builder to improve performance 2025-10-23 16:40:30 +01:00
Nick Craig-Wood
71631621c4 docs: re-arrange sponsors page 2025-10-23 14:50:51 +01:00
Nick Craig-Wood
31e904d84c docs: add Spectra Logic as a sponsor 2025-10-23 14:50:51 +01:00
Nick Craig-Wood
30c9843e3d Add Oleksandr Redko to contributors 2025-10-23 14:50:51 +01:00
Oleksandr Redko
c8a834f0e8 build: enable all govet checks (except fieldalignment and shadow) and fix issues. 2025-10-22 18:37:58 +01:00
Nick Craig-Wood
b272c50c4c march: fix --no-traverse being very slow - fixes #8860
Before this change --no-traverse was calling NewObject on directories
(where it would always fail) as well as files. This was very
noticeable when doing syncs with --max-age which were only
transferring a small number of objects. This should have been very
quick, but the NewObject calls for each directory slowed the sync down
a lot.

This change replaces the check to see if the source entry is an
Object, which got missed out from this commit:

88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
2025-10-22 14:14:52 +01:00
Nick Craig-Wood
b8700e8042 Add vastonus to contributors 2025-10-22 14:14:52 +01:00
kingston125
73193b0565 s3: add new FileLu S5 endpoints
Add US, EU, AP, and ME endpoints
2025-10-22 12:25:05 +01:00
vastonus
c4eef3065f build: remove obsolete build tag 2025-10-21 18:56:06 +01:00
Nick Craig-Wood
ba2a642961 azurefiles: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
979c6a573d dropbox: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
bbb866018e webdav: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
7706f02294 pcloud: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
6df7913181 box: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
c079495d1f onedrive: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
3bf1ac5b07 drive: add ListP interface - #4788 2025-10-21 18:40:23 +01:00
Nick Craig-Wood
091caa34c6 Add hunshcn to contributors 2025-10-21 18:40:23 +01:00
hunshcn
d507e9be39 webdav: optimize bearer token fetching with singleflight 2025-10-21 11:14:37 +01:00
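
A minimal sketch of the singleflight pattern named in the title, using golang.org/x/sync/singleflight; the token-fetching function is a placeholder, not rclone's bearer token command handling. Concurrent callers asking for the same key share a single invocation.

```go
// Sketch of deduplicating token fetches with singleflight. fetchToken is a
// placeholder for running the bearer token command.
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

func fetchToken() (string, error) {
	time.Sleep(100 * time.Millisecond) // stands in for the expensive token command
	return "token-abc", nil
}

// getToken ensures only one fetch runs at a time per key; other callers wait
// for and share its result.
func getToken() (string, error) {
	v, err, _ := group.Do("bearer-token", func() (any, error) {
		return fetchToken()
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			t, _ := getToken()
			fmt.Println(t)
		}()
	}
	wg.Wait()
}
```
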
Nick Craig-Wood
40b3251e41 Changelog updates from Version v1.71.2 2025-10-20 16:56:47 +01:00
albertony
484d955ea8 lib/http: cleanup indentation and other whitespace in http serve template 2025-10-20 11:53:55 +01:00
albertony
8fa9f255a0 docs: improve formatting of http serve template parameters 2025-10-20 11:53:55 +01:00
Nick Craig-Wood
e7f11af1ca build: stop markdown linter leaving behind docker containers 2025-10-20 11:51:23 +01:00
Nick Craig-Wood
0b5c4cc442 Add Marco Ferretti to contributors 2025-10-20 11:51:23 +01:00
Marco Ferretti
178ddafdc7 s3: add cubbit as provider 2025-10-20 11:01:34 +01:00
dougal
ad316ec6e3 s3: add servercore as a provider 2025-10-17 16:35:06 +01:00
Nick Craig-Wood
61b022dfc3 docs: update sponsors 2025-10-17 12:04:51 +01:00
Nick Craig-Wood
1903b4c1a2 docs: update sponsor images 2025-10-15 16:33:10 +01:00
Nick Craig-Wood
f7cbcf556f docs: update privacy policy with a section on user data 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
3581e628c0 Add Dulani Woods to contributors 2025-10-14 16:24:07 +01:00
Nick Craig-Wood
62c41bf449 Add spiffytech to contributors 2025-10-14 16:24:07 +01:00
Dulani Woods
c5864e113b gcs: add region us-east5 - fixes #8863 2025-10-14 14:13:56 +01:00
albertony
39259a5bd1 jottacloud: refactor service list from map to slice to get predefined order 2025-10-11 20:57:19 +02:00
albertony
2e376eb3b9 jottacloud: added support for traditional oauth authentication also for the main service
This renames whitelabel authentication to traditional authentication and adds support for
the main Jottacloud service here as well, since it can be used as an alternative to
authentication based on a personal login token for those who prefer it. The documentation
is adjusted correspondingly, and the authentication section is restructured a bit more,
since some of the sections that were under standard authentication in reality also
apply to traditional authentication.
2025-10-11 20:57:19 +02:00
albertony
de8e9d4693 oauthutil: improved debug logs from token refresh 2025-10-10 20:10:21 +02:00
spiffytech
710cf49bc6 backend: add S3 provider for Hetzner object storage #8183 2025-10-10 18:20:43 +01:00
albertony
8dacac60ea jottacloud: improved token refresh handling
The oauthutil.Renew was initialized early in NewFs, before the first request to the
service where a token is needed. When the token is already expired at the time NewFs is
called, the Renew operation would be triggered immediately, only to abort before actually
performing a token refresh, for the reason described in the debug message:

    Token expired but no uploads in progress - doing nothing

Then later in NewFs, a request to the customer endpoint was made, and since it requires
a valid token it would perform a token refresh after all.

This was not a big problem, but a bit unnecessary, and the debug log messages made it
confusing to understand what rclone was actually doing regarding token refreshing.

If, from the debugger, we forced the Renew operation to perform an actual token refresh,
even with no uploads in progress, it would fail because it actually needs the username,
which is retrieved from the customer endpoint:

    jottacloud root '': Token refresh failed: read metadata failed: error 400: org.springframework.security.core.userdetails.UsernameNotFoundException: Username not found in url! (Bad Request)

We don't think this can happen in any real situation, but it is better to make sure it never can.
2025-10-10 18:59:19 +02:00
dougal
3a80d4d4b4 s3: provider reordering
+ fixing some typos
2025-10-10 16:30:03 +01:00
dougal
a531f987a8 index: add missing providers 2025-10-10 16:30:03 +01:00
dougal
e906b8d0c4 docs: add missing ` 2025-10-10 16:30:03 +01:00
dougal
a5932ef91a s3: add rabata as a provider 2025-10-10 16:30:03 +01:00
Nick Craig-Wood
3afa563eaf mega: fix 402 payment required errors - fixes #8758
The underlying library now supports hashcash which should fix this
problem.
2025-10-09 11:58:49 +01:00
Nick Craig-Wood
9d9654b31f Add Andrew Ruthven to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
cfe257f13d Add Microscotch to contributors 2025-10-09 11:58:49 +01:00
Nick Craig-Wood
0375efbd35 Add iTrooz to contributors 2025-10-09 11:58:49 +01:00
Andrew Ruthven
cad1954213 build: Bump SwiftAIO container to a newer one
The bouncestorage image hasn't been updated for 4 years and has this
message at the top of the docs:

  This repository is outdated; please use dockerswiftaio/docker-swift instead.

However, dockerswiftaio/docker-swift hasn't been updated for 2 years.
Switch to openstackswift/saio instead, which is getting regular updates.

This requires some minor changes to one test, and how we start the
container.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
604e37caa5 build: Retry stopping the test server
On my system there needs to be a slight pause between stopping and
checking to see if SwiftAIO has stopped. Without the pause the tests fail for
a non-obvious reason.

Instead of using a magic sleep, re-use the retry logic that is used for
starting the test server.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
b249d384b9 build: Increase attempts to connect to test server
On the system I'm testing Swift on it can take ~90 retries for SwiftAIO to
be ready. Extend the retry attempts.
2025-10-06 16:55:48 +01:00
Andrew Ruthven
04e91838db swift: If storage_policy isn't set, use the root containers policy
Ensure that if we need to create a segments container it uses the same
storage policy as the root container.

Fixes #8858
2025-10-06 16:55:48 +01:00
Microscotch
94829aaec5 proton: automated 2FA login with OTP secret key
add OTP secret key to config to generate 2FA code
2025-10-06 16:18:38 +01:00
iTrooz
f574e3395c serve s3: fix log output to remove the EXTRA messages
As shown in

81e56a30c8/log.go (L74)

it seems the desired behaviour for merging arguments is that of Println,
which is to put a space between each arg.
2025-10-06 15:17:21 +01:00
albertony
2bc155a96a docs/jottacloud: update description of invalid_grant error according to changes 2025-10-05 11:22:27 +02:00
albertony
adc8ea3427 jottacloud: add support for MediaMarkt Cloud as a whitelabel service
This was requested in issue #8852, after authentication was already fixed for existing
whitelabels.
2025-10-05 00:48:01 +02:00
kingston125
068eea025c s3: add FileLu S5 provider 2025-10-04 15:48:01 +01:00
iTrooz
4510aa679a docs: fix variants of --user-from-header 2025-10-04 08:10:49 +02:00
dougal
79281354c7 vfs: fix chunker integration test 2025-10-03 17:10:24 +01:00
Nick Craig-Wood
f57a178719 test_all: give TestZoho: extra time as it has been timing out 2025-10-03 16:03:29 +01:00
Nick Craig-Wood
44f2e2ed39 test_all: give TestCompressDrive: extra time as it has been timing out 2025-10-03 16:02:07 +01:00
Nick Craig-Wood
13e1752d94 rclone config string: reduce quoting with Human rendering for strings #8859 2025-10-03 15:54:15 +01:00
Nick Craig-Wood
bb82c0e43b Add juejinyuxitu to contributors 2025-10-03 15:54:15 +01:00
albertony
1af7151e73 docs/jottacloud: update documentation with new whitelabel services and changed configuration flow 2025-10-02 19:16:03 +02:00
albertony
fd63478ed6 jottacloud: abort attempts to run unsupported rclone authorize command 2025-10-02 19:16:03 +02:00
albertony
5133b05c74 jottacloud: minor adjustment of texts in config ui 2025-10-02 19:16:03 +02:00
albertony
6ba96ede4b jottacloud: add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service 2025-10-02 19:16:03 +02:00
albertony
2896973964 jottacloud: fix authentication for whitelabel services from Elkjøp subsidiaries
This adds support for them in the whitelabel authentication type, relying on OpenID
Connect, the same as Telia, Tele2 etc already use.

Until recently the Elkjøp subsidiaries still supported only the legacy authentication
type, but that seems to have changed. They no longer support legacy authentication, which
made existing rclone versions incompatible with them.

With this the legacy authentication has no known uses; however, the implementation
is still kept for now.

Fixes #8852
2025-10-02 19:16:03 +02:00
albertony
be123d85ff jottacloud: refactor config handling of whitelabel services to use openid provider configuration 2025-10-02 19:16:03 +02:00
albertony
b1b9562ab7 jottacloud: remove nil error object from error message 2025-10-02 19:16:03 +02:00
albertony
5146b66569 jottacloud: fix legacy authentication
This fixes the issue where configuration would fail after supplying the password:

    Reveal failed: input too short when revealing password - is it obscured?
2025-10-02 19:16:03 +02:00
albertony
8898372d5a docs: add remote setup page to main docs dropdown 2025-10-02 18:46:16 +02:00
albertony
091fe9e453 docs: update remote setup page 2025-10-02 18:46:16 +02:00
albertony
8fdb68e41a docs: add link from authorize command docs to remote setup docs 2025-10-02 18:46:16 +02:00
albertony
c124aa2ed3 docs: lowercase internet and web browser instead of Internet browser 2025-10-02 18:46:16 +02:00
albertony
54e8bb89f7 docs: use the term backend name instead of fs name for authorize command 2025-10-02 18:46:16 +02:00
Nick Craig-Wood
50c1b594ab add rclone config string for making connection strings #8859 2025-10-02 17:30:08 +01:00
Nick Craig-Wood
72437a9ca2 config: add more human readable configmap.Simple output
Before this, String() quoted every part of the config map even if it
wasn't necessary.

The new Human() method removes the quoting and adds the special case
for "true" values.
2025-10-02 17:30:08 +01:00
dougal
8ed55c61e1 serve http: download folders as zip
Now folders can be downloaded as a zip. You can also use --disable-zip
to turn this off.
2025-09-26 15:18:02 +01:00
dougal
bd598c1ceb s3: reorder providers to be in alphabetical order 2025-09-26 15:14:45 +01:00
juejinyuxitu
7e30665102 refactor: use strings.FieldsFuncSeq to reduce memory allocations
Signed-off-by: juejinyuxitu <juejinyuxitu@outlook.com>
2025-09-26 15:12:53 +01:00
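
For context, strings.FieldsFuncSeq (added in Go 1.24) yields fields through an iterator rather than allocating a []string, which is where the memory saving comes from. A small, self-contained illustration with an arbitrary separator function:

```go
// Sketch of iterating fields lazily with strings.FieldsFuncSeq (Go 1.24)
// instead of allocating a slice with strings.FieldsFunc.
package main

import (
	"fmt"
	"strings"
	"unicode"
)

func main() {
	s := "alpha, beta;gamma  delta"
	// Ranging over the iterator visits each field without building a []string.
	for field := range strings.FieldsFuncSeq(s, func(r rune) bool {
		return unicode.IsSpace(r) || r == ',' || r == ';'
	}) {
		fmt.Println(field)
	}
}
```
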
Nick Craig-Wood
d44957a09c accounting: add SetMaxCompletedTransfers method to fix bisync race #8815
Before this change bisync adjusted the global MaxCompletedTransfers
variable which caused races.

This adds a SetMaxCompletedTransfers method and uses it in bisync.

The MaxCompletedTransfers global becomes the default. This can be
changed externally if rclone is in use as a library, and the commit
history indicates that MaxCompletedTransfers was added for exactly
this purpose so we try not to break it here.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
37524e2dea accounting: add RemoveDoneTransfers method to fix bisync race #8815
Before this change bisync was adjusting MaxCompletedTransfers in order
to clear the done transfers from the stats.

This wasn't working (because it was only clearing one transfer) and
was part of a race adjusting MaxCompletedTransfers.

This fixes the problem by introducing a new method RemoveDoneTransfers
to clear the done transfers explicitly and calling it in bisync.
2025-09-26 14:54:47 +01:00
Nick Craig-Wood
2f6a6c8233 bisync: fix race when CaptureOutput is used concurrently #8815
Before this change CaptureOutput could trip the race detector when
used concurrently, in particular if goroutines using the logging
outlast the return from `fun()`.

This fixes the problem with a mutex.
2025-09-26 14:54:47 +01:00
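
A minimal sketch of the shape of such a fix, not bisync's actual CaptureOutput: when goroutines may keep logging after the captured function returns, the capture buffer has to be guarded by a mutex.

```go
// Sketch of a mutex-guarded capture buffer. The capture type is illustrative,
// not rclone's real implementation.
package main

import (
	"bytes"
	"fmt"
	"sync"
)

type capture struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

// Write serializes concurrent writers so the race detector stays quiet.
func (c *capture) Write(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.buf.Write(p)
}

func (c *capture) String() string {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.buf.String()
}

func main() {
	c := &capture{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Fprintf(c, "goroutine %d\n", i) // safe: Write is serialized
		}(i)
	}
	wg.Wait()
	fmt.Print(c.String())
}
```
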
Nick Craig-Wood
4ad40b6554 build: update all dependencies 2025-09-26 14:53:36 +01:00
Nick Craig-Wood
4f33d64f25 Makefile: remove deprecated go mod usage 2025-09-26 14:53:36 +01:00
Vikas Bhansali
519623d9f1 azurefiles: Fix server side copy not waiting for completion - fixes #8848 2025-09-26 12:41:42 +01:00
Nick Craig-Wood
913278327b Changelog updates from Version v1.71.1 2025-09-24 17:34:26 +01:00
Nick Craig-Wood
a9b05e4c7a test_all: fix branch name in test report 2025-09-24 15:35:09 +01:00
Nick Craig-Wood
5d6d79e7d4 pacer: fix deadlock with --max-connections
If the pacer was used recursively and --max-connections was in use
then it could deadlock if all the connections were in use at the time
of recursive call (likely).

This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.

This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.

The recursive detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. The
benchmark shows stack inspection takes about 55 ns per stack level, so
it is relatively cheap.
2025-09-22 17:39:27 +01:00
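
A minimal sketch of stack-inspection-based recursion detection using runtime.Callers; the pacer's real helper (factored out in b5e4d39b05 above) will differ in detail.

```go
// Sketch of detecting a recursive call by walking the call stack.
package main

import (
	"fmt"
	"runtime"
	"strings"
)

// inCallStack reports whether a function whose name contains needle already
// appears further up the current goroutine's call stack.
func inCallStack(needle string) bool {
	pcs := make([]uintptr, 64)
	// Skip runtime.Callers, inCallStack and the immediate caller itself.
	n := runtime.Callers(3, pcs)
	frames := runtime.CallersFrames(pcs[:n])
	for {
		frame, more := frames.Next()
		if strings.Contains(frame.Function, needle) {
			return true
		}
		if !more {
			break
		}
	}
	return false
}

func call(recursive bool) {
	if inCallStack("main.call") {
		fmt.Println("recursive call detected - skipping the connection limit check")
		return
	}
	if recursive {
		call(true)
	}
}

func main() {
	call(true)
}
```
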
Nick Craig-Wood
11de074cbf Revert "azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors"
This reverts commit 0c1902cc6037d81eaf95e931172879517a25d529.

This turns out not to be sufficient so we need a better approach
2025-09-22 17:39:27 +01:00
Nick Craig-Wood
e9ab177a32 Add Youfu Zhang to contributors 2025-09-22 17:39:27 +01:00
Nick Craig-Wood
f3f4fba98d Add Matt LaPaglia to contributors 2025-09-22 17:39:27 +01:00
Sudipto Baral
03fccdd67b smb: optimize smb mount performance by avoiding stat checks during initialization
add IsPathDir function and tests for trailing slash optimization
2025-09-22 15:33:44 +01:00
Youfu Zhang
231083647e pikpak: fix unnecessary retries by using URL expire parameter - fixes #8601
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.

The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.

Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
2025-09-19 12:46:26 +09:00
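
A minimal sketch of the validity check described above; the type and field names are assumptions for illustration, not PikPak's real API types. The "expire" query parameter is read as a Unix timestamp, with the Expire field only used as a fallback.

```go
// Sketch of preferring a URL's expire query parameter over a separate Expire
// field. The link type and field names are illustrative assumptions.
package main

import (
	"fmt"
	"net/url"
	"strconv"
	"time"
)

type link struct {
	URL    string
	Expire time.Time
}

// valid trusts the URL's expire parameter when it parses as a Unix timestamp,
// otherwise it falls back to the Expire field.
func (l link) valid() bool {
	if u, err := url.Parse(l.URL); err == nil {
		if ts, err := strconv.ParseInt(u.Query().Get("expire"), 10, 64); err == nil {
			return time.Now().Before(time.Unix(ts, 0))
		}
	}
	return time.Now().Before(l.Expire)
}

func main() {
	l := link{URL: fmt.Sprintf("https://example.com/dl?expire=%d", time.Now().Add(time.Hour).Unix())}
	fmt.Println(l.valid()) // true
}
```
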
dougal
0e203a7546 serve http: fix: logging url on start 2025-09-18 14:49:58 +01:00
Matt LaPaglia
a7dd787569 docs: fix typo 2025-09-16 14:27:10 +02:00
dougal
689555033e b2: fix 1TB+ uploads
Before this change the minimum chunk size defaulted to 96M, which
allowed files of at most just under 1TB to be uploaded, due to
the 10000-part rule for b2.

Now the calculated chunk size is used, so the chunk size can be up to 5GB,
giving a maximum file size of 50TB.

Fixes #8460
2025-09-15 13:05:20 +01:00
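
The arithmetic behind the limit is worth spelling out: with 10,000 parts, a fixed 96 MiB chunk caps uploads just below 1 TiB, while a 5 GiB chunk raises the ceiling to roughly 48.8 TiB. A tiny worked example:

```go
// Worked example of the 10,000-part limit described in the commit.
package main

import "fmt"

func main() {
	const (
		maxParts = 10000
		mib      = int64(1) << 20
		gib      = int64(1) << 30
		tib      = int64(1) << 40
	)
	fmt.Printf("96 MiB chunks: max %.2f TiB\n", float64(96*mib*maxParts)/float64(tib)) // ~0.92 TiB
	fmt.Printf("5 GiB chunks:  max %.2f TiB\n", float64(5*gib*maxParts)/float64(tib))  // ~48.83 TiB
}
```
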
Nick Craig-Wood
4fc4898287 march: fix deadlock when using --fast-list on syncs - fixes #8811
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.

This fixes the problem by shortening the locking window.
2025-09-15 12:55:29 +01:00
Nick Craig-Wood
b003169088 build: slices.Contains, added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
babd112665 build: use strings.CutPrefix introduced in go1.20 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
71b9b4ad7a build: use sequence Split introduced in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
4368863fcb build: use "for i := range n", added in go1.22 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
04d49bf0ea build: modernize benchmark usage 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
d7aa37d263 build: in tests use t.Context, added in go1.24 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
379dffa61c build: replace interface{} by the 'any' type added in go1.18 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
5fd4ece31f build: use the built-in min or max functions added in go1.21 2025-09-15 12:45:57 +01:00
Nick Craig-Wood
fc3f95190b Add russcoss to contributors 2025-09-15 12:45:57 +01:00
russcoss
d6f5652b65 build: remove x := x made unnecessary by the new semantics of loops in go1.22
Signed-off-by: russcoss <russcoss@outlook.com>
2025-09-14 15:58:20 +01:00
Nick Craig-Wood
b5cbb7520d lib/pool: fix unreliable TestPoolMaxBufferMemory test
This turned out to be a problem in the tests. The tests used to do

1. allocate
2. increment
3. free
4. decrement

But if one goroutine had just completed 2 and another had just
completed 3 then this can cause the test to register too many
allocations.

This was fixed by doing the test in this order instead:

1. allocate
2. increment
3. decrement
4. free

The 4 operations are atomic.

Fixes #8813
2025-09-12 10:39:32 +01:00
Nick Craig-Wood
a170dfa55b Update S-Pegg1 email 2025-09-12 10:39:32 +01:00
Nick Craig-Wood
1449c5b5ba Add Jean-Christophe Cura to contributors 2025-09-12 10:39:32 +01:00
dougal
35fe609722 pool: fix flaky unreliability test 2025-09-11 18:09:50 +01:00
dougal
cce399515f copyurl: reworked code, added concurrency and tests
- Added Tests
- Fixed file name handling
- Added concurrent downloads
- Limited downloads to --transfers
- Fixes #8127
2025-09-11 13:56:14 +01:00
S-Pegg1
8c5af2f51c copyurl: Added --url to read urls from csv file - #8127 2025-09-11 13:56:14 +01:00
dougal
c639d3656e docs: HDFS: erasure coding limitation #8808 2025-09-10 19:26:55 +01:00
nielash
d9fbbba5c3 fstest: fix slice bounds out of range error when using -remotes local
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.

It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.

This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
2025-09-09 12:09:42 -04:00
nielash
fd87560388 local: fix time zones on tests
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
2025-09-09 12:09:42 -04:00
dougal
d87720a787 s3: added SpectraLogic as a provider 2025-09-09 16:40:10 +01:00
nielash
d541caa52b local: fix rmdir "Access is denied" on windows - fixes #8363
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).

However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.

An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295

This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."

A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash

It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
2025-09-09 11:25:09 -04:00
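
A minimal sketch of the chmod-and-retry workaround described above, not the exact rclone code; the directory path is a placeholder.

```go
// Sketch of retrying a directory removal after clearing the read-only bit.
package main

import (
	"fmt"
	"os"
)

func removeDir(path string) error {
	err := os.Remove(path)
	if err == nil {
		return nil
	}
	// Clear the read-only attribute (on Windows, Chmod maps to it) and retry once.
	if chmodErr := os.Chmod(path, 0o777); chmodErr != nil {
		return err // report the original failure
	}
	return os.Remove(path) // still fails if the directory is not empty
}

func main() {
	if err := removeDir("some-empty-dir"); err != nil {
		fmt.Println("remove failed:", err)
	}
}
```
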
nielash
fd1665ae93 bisync: fix error handling for renamed conflicts
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.

On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
2025-09-06 12:43:23 -04:00
Jean-Christophe Cura
457d80e8a9 docs: pcloud: update root_folder_id instructions 2025-09-05 20:50:00 +01:00
Nick Craig-Wood
c5a3e86df8 operations: fix partial name collisions for non --inplace copies
In this commit:

c63f1865f3 operations: copy: generate stable partial suffix

We made the partial suffix for non inplace copies stable. This was a
hash based off the file fingerprint.

However, given a directory of files which have the same fingerprint
the partial suffix collides. On some backends (eg the local backend)
the fingerprint is just the size and modification time so files with
different contents can collide.

The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully which
probably explains why this bug hasn't been reported.

This fixes the problem by adding the file name to the hash.

It also makes sure the hash is always represented as 8 hex bytes for
consistency.
2025-09-05 16:09:46 +01:00
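
A minimal sketch of the idea, with an illustrative hash choice rather than rclone's actual one: mix the remote name into the fingerprint hash and always render the suffix as 8 hex digits.

```go
// Sketch of deriving a stable 8-hex-digit partial suffix from name plus
// fingerprint. The hash function and separator are illustrative choices.
package main

import (
	"fmt"
	"hash/crc32"
)

func partialSuffix(remote, fingerprint string) string {
	sum := crc32.ChecksumIEEE([]byte(remote + "\x00" + fingerprint))
	return fmt.Sprintf("%08x", sum) // fixed-width 8 hex digits
}

func main() {
	// Two files with identical size/modtime fingerprints no longer collide,
	// because the name is part of the hash input.
	fmt.Println(partialSuffix("a.bin", "100,2025-09-05"))
	fmt.Println(partialSuffix("b.bin", "100,2025-09-05"))
}
```
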
Ed Craig-Wood
4026e8db20 drive: docs: update making your own client ID instructions
update instructions with the most recent changes to google cloud console
2025-09-05 15:30:52 +01:00
dougal
c9ce686231 swift: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
b085598cbc memory: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
bb47dccdeb oraceobjectstorage: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
7a279d2789 B2: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
9bd5df658a azureblob: add ListP interface - #4788 2025-09-05 15:29:37 +01:00
dougal
d512e4d566 googlecloudstorage: add ListP interface - Fixes #8763 2025-09-05 15:29:37 +01:00
dependabot[bot]
3dd68c824a build: bump actions/github-script from 7 to 8
Bumps [actions/github-script](https://github.com/actions/github-script) from 7 to 8.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](https://github.com/actions/github-script/compare/v7...v8)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:14:32 +02:00
dependabot[bot]
fbe73c993b build: bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 08:12:38 +02:00
nielash
d915f75edf bisync: fix chunker integration tests
Before this change, TestChunkerS3: tests were failing because our use of
obj.Remove (for "modtime_write_test") created an unexpected extra transfer.

This is because chunker calls operations.Move for removes, which (per its
function comment) is supposed to be only accounted as a check. But because S3
can Copy but not Move, the move falls back to copy and ends up getting counted
as a transfer anyway.
99e8a63df2/fs/operations/operations.go (L506)
99e8a63df2/fs/operations/copy.go (L381)

This is probably a bug that should get a more proper fix in operations. But in
the meantime, we can get around it by doing our "modtime_write_test" with its
own unique stats group.
2025-09-04 14:38:10 -04:00
nielash
26b629f42f bisync: fix koofr integration tests
Before this change, koofr failed certain bisync tests because it can't set mod
time without deleting and re-uploading. This caused the "nothing to transfer" log
to not get printed where expected (as it is only printed when there are 0
transfers, but koofr requires extra transfers to set modtime.)

This change fixes the issue by ignoring the absence of the "nothing to transfer"
log line on backends that return `fs.ErrorCantSetModTimeWithoutDelete` for
`obj.SetModTime`.
2025-09-04 14:38:10 -04:00
Nick Craig-Wood
ceaac2194c internetarchive: fix server side copy files with spaces
In this commit we broke server side copy for files with spaces

4c5764204d internetarchive: fix server side copy files with &

This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.

Fixes #8754
2025-09-04 10:37:27 +01:00
Nick Craig-Wood
1f14b6aa35 lib/rest: add URLPathEscapeAll to URL escape as many chars as possible 2025-09-04 10:37:27 +01:00
Nick Craig-Wood
dd75af6a18 Add alternate email for dougal to contributors 2025-09-04 10:37:27 +01:00
dougal
99e8a63df2 test speed: add command to test a specified remotes speed
Runs a speed test that tries to stay within a given time budget, uploading
randomly created files to the remote and then downloading them again.

Fixes #3198
2025-09-03 12:37:52 +01:00
Nick Craig-Wood
0019e18ac3 docs: add link to MEGA S4 from MEGA page 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
218c3bf6e9 Add Robin Rolf to contributors 2025-09-02 17:22:32 +01:00
Nick Craig-Wood
8f9702583d Add anon-pradip to contributors 2025-09-02 17:22:32 +01:00
Robin Rolf
e6578fb5a1 s3: Add Intercolo provider 2025-09-02 16:34:43 +01:00
albertony
fa1d7da272 gendocs: refactor and add logging of skipped command docs 2025-09-02 14:06:31 +02:00
albertony
813708c24d gendocs: ignore missing rclone_mount.md, rclone_nfsmount.md, rclone_serve_nfs.md on windows 2025-09-02 14:06:31 +02:00
nielash
fee4716343 bin: add bisync.md generator
This change adds a make_bisync_docs.go step to dynamically update the list of
ignored and failed tests in bisync.md.
2025-09-01 14:43:40 -04:00
nielash
6e9a675b3f fstest: refactor to decouple package from implementation 2025-09-01 14:43:40 -04:00
nielash
7f5a444350 gendocs: ignore missing rclone_mount.md on macOS 2025-09-01 14:43:40 -04:00
nielash
d2916ac5c7 bisync: ignore expected "nothing to transfer" differences on tests
The "There was nothing to transfer" log is only printed when the number of
transfers is exactly 0. However, there are a variety of reasons why the transfer
count would be expected to differ between backends. For example, if either side
lacks hashes, the sync may in fact need to transfer, where it would otherwise
skip based on hash or just update modtime. Transfer stats will also differ in
the "src and dst identical but can't set mod time without deleting and re-
uploading" scenario (because the re-upload is a transfer), and where --download-hash
is needed (because calculating the hash requires downloading the file, which is
a transfer).

Before this change, these expected differences would result in erroneous test
failures. This change fixes the issue by ignoring the absence of the "nothing to
transfer" log where it is expected.

Note that this issue did not occur before
9e200531b1
because the number of transfers was not getting reset between test steps,
sometimes resulting in an artificially inflated transfers count.
2025-09-01 14:05:00 -04:00
nielash
3369a15285 bisync: fix TestBisyncConcurrent ignoring -case
Before this change, TestBisyncConcurrent would still run the "basic" test case
if a non-blank -case arg was used to specify a case other than "basic". This
change fixes it by skipping in this scenario.
2025-09-01 14:05:00 -04:00
nielash
58aee30de7 bisync: make number of parallel tests configurable
Example usage:
go test ./cmd/bisync -remote local -race -pcount 10
2025-09-01 14:05:00 -04:00
anon-pradip
ef919241a6 docs: clarify subcommand description in rclone usage 2025-09-01 17:09:51 +01:00
albertony
d5386bb9a7 docs: fix description of regex syntax of name transform 2025-09-01 16:40:14 +01:00
albertony
bf46ea5611 docs: add some more details about supported regex syntax 2025-09-01 16:40:14 +01:00
nielash
b8a379c9c9 makefile: fix lib/transform docs not getting updated
As of
4280ec75cc
the lib/transform docs are generated with //go:generate and embedded with
//go:embed.

Before this change, however, they were not getting automatically updated with
subsequent changes (like
fe62a2bb4e)
because `go generate ./lib/transform` was not being run as part of the release
making process.

This change fixes that by running it in `make commanddocs`.
2025-09-01 16:39:20 +01:00
Nick Craig-Wood
8c37a9c2ef lib/pool: fix flaky test which was causing timeouts
This puts a limit on the number of allocation failures in a row which
stops the test timing out as the exponential backoffs get very large.
2025-09-01 16:25:31 +01:00
Nick Craig-Wood
963a72ce01 Add dougal to contributors 2025-09-01 16:25:31 +01:00
dougal
a4962e21d1 vfs: fix SIGHUP killing serve instead of flushing directory caches
Before, rclone serve would crash when sent a SIGHUP, which contradicts
the documentation saying it should flush the directory caches.

Moved signal handling from the mount into the vfs layer, which now
handles SIGHUP on all uses of the VFS including mount and serve.

Fixes #8607
2025-09-01 13:15:11 +01:00
nielash
9e200531b1 bisync: use unique stats groups on tests 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
04683f2032 fstest: stop errors in test cleanup changing the global stats
This was causing the concurrent bisync tests to fail every now and again.
2025-08-30 17:46:33 +01:00
Nick Craig-Wood
b41f7994da Add Motte to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
13a5ffe391 Add Claudius Ellsel to contributors 2025-08-30 17:46:33 +01:00
Nick Craig-Wood
85deea82e4 build: add local markdown linting to make check 2025-08-28 16:56:40 +01:00
Motte
89a8ea7a91 lsf: add support for unix and unixnano time formats 2025-08-28 16:28:49 +01:00
albertony
c8912eb6a0 docs: remove broken links from rc to commands 2025-08-28 11:52:18 +02:00
albertony
01674949a1 hashsum: changed output format when listing algorithms 2025-08-27 23:36:28 +02:00
Claudius Ellsel
98e1d3ee73 docs: add example of how to add date as suffix 2025-08-27 22:01:28 +02:00
Nick Craig-Wood
50d7a80331 box: fix about after change in API return - fixes #8776 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
bc3e8e1abd Add skbeh to contributors 2025-08-26 18:03:09 +01:00
Nick Craig-Wood
30e80d0716 Add Tilman Vogel to contributors 2025-08-26 18:03:09 +01:00
albertony
f288920696 docs: fix incorrectly escaped windows path separators 2025-08-26 14:29:33 +02:00
albertony
fa2bbd705c build: restore error handling in gendocs 2025-08-26 14:28:05 +02:00
skbeh
43a794860f combine: propagate SlowHash feature 2025-08-26 12:39:32 +01:00
albertony
adfe6b3bad docs/oracleobjectstorage: add introduction before external links and remove broken link 2025-08-26 12:04:00 +02:00
albertony
091ccb649c docs: fix markdown lint issues in backend docs 2025-08-26 12:04:00 +02:00
albertony
2e02d49578 docs: fix markdown lint issues in command docs 2025-08-26 12:04:00 +02:00
albertony
514535ad46 docs: update markdown code block json indent size 2 2025-08-26 12:04:00 +02:00
Tilman Vogel
b010591c96 mount: do not log successful unmount as an error - fixes #8766 2025-08-23 16:30:33 +01:00
Nick Craig-Wood
1aaee9edce Start v1.72.0-DEV development 2025-08-22 17:42:25 +01:00
Nick Craig-Wood
3f0e9f5fca Version v1.71.0 2025-08-22 16:03:16 +01:00
Nick Craig-Wood
cfd0d28742 fs: tls: add --client-pass support for encrypted --client-key files
This also widens the supported types

- Unencrypted PKCS#1 ("BEGIN RSA PRIVATE KEY")
- Unencrypted PKCS#8 ("BEGIN PRIVATE KEY")
- Encrypted PKCS#8 ("BEGIN ENCRYPTED PRIVATE KEY")
- Legacy PEM encryption (e.g., DEK-Info headers), which are automatically detected.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
e7a2b322ec ftp: make TLS config default to global TLS config - Fixes #6671
This allows --ca-cert, --client-cert, --no-check-certificate etc to be
used.

This also allows `override.ca_cert = XXX` to be used in the config
file.
2025-08-22 12:19:29 +01:00
Nick Craig-Wood
d3a0805a2b fshttp: return *Transport rather than http.RoundTripper from NewTransport
This allows further customization, reading the existing config and is
the Go recommended way "accept interfaces, return structs".
2025-08-22 12:19:29 +01:00
nielash
d4edf8ac18 bisync: release from beta
As of v1.71, bisync is officially out of beta.

Some history:

- bisync was born in 2018 as https://github.com/cjnaz/rclonesync-V2
by @cjnaz, written in python.
- In 2021, @ivandeex ported it to go with @cjnaz's support.
https://github.com/rclone/rclone/pull/5164
- It was introduced as an "experimental" feature in v1.58.
6210e22ab5
- In 2023, bisync needed a new maintainer, and @nielash volunteered.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636
- Later in 2023, bisync received a major overhaul and was relabeled "beta"
(from "experimental"). https://github.com/rclone/rclone/pull/7410
- In 2024, integration tests were introduced for bisync (which previously had
only unit tests). https://github.com/rclone/rclone/pull/7693
- As of August 2025, bisync is stable and integration tests are passing on all
of the "flagship" backends.

Development doesn't stop here, of course. But bisync has come a long way since
its "experimental" days, and the "beta" tag is no longer needed.
2025-08-22 12:13:59 +01:00
nielash
87d14b000a bisync: fix markdown formatting issues flagged by linter in docs 2025-08-22 12:13:59 +01:00
nielash
12bded980b bisync: fix --no-slow-hash settings on path2
Before this change, if path2 had slow hashes, and --no-slow-hash or --slow-hash-sync-only
was in use, bisync was erroneously setting path1's hashtype to 'none' instead of
path2's. This change fixes the issue.

See https://forum.rclone.org/t/hashtype-mismatch-with-slow-hash-sync-only-in-onedrive-local-bisync/52138/2?u=nielash
2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6e0e76af9d Add cui to contributors 2025-08-22 12:13:59 +01:00
Nick Craig-Wood
6f9b2f7b9b docs: add code of conduct 2025-08-22 11:42:51 +01:00
cui
f61d79396d lib/mmap: convert to using unsafe.Slice to avoid deprecated reflect.SliceHeader 2025-08-22 00:35:50 +01:00
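
The replacement pattern is a one-liner in the standard library: unsafe.Slice builds a slice from a pointer and a length, replacing the hand-built reflect.SliceHeader. A small illustration, where the backing array stands in for an mmap'd region:

```go
// Sketch of building a []byte from a pointer and length with unsafe.Slice.
package main

import (
	"fmt"
	"unsafe"
)

func main() {
	backing := [8]byte{'m', 'm', 'a', 'p', 'd', 'a', 't', 'a'}
	ptr := unsafe.Pointer(&backing[0]) // stands in for the address an mmap call returns

	// One call replaces the old reflect.SliceHeader{Data: ..., Len: ..., Cap: ...} dance.
	s := unsafe.Slice((*byte)(ptr), len(backing))
	fmt.Println(string(s)) // mmapdata
}
```
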
dependabot[bot]
9b22e38450 build: bump golangci/golangci-lint-action from 6 to 8
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-22 00:14:01 +01:00
albertony
9e4fe18830 build: update golangci-lint configuration 2025-08-22 00:14:01 +01:00
albertony
ae5cc1ab37 build: ignore revive lint issue var-naming: avoid meaningless package names 2025-08-22 00:14:01 +01:00
albertony
d4be38ec02 build: fix lint issue: should omit type error from declaration 2025-08-22 00:14:01 +01:00
albertony
115cff3007 Revert "build: downgrade linter to use go1.24 until it is fixed for go1.25"
This reverts commit 8f84f91666.
2025-08-22 00:14:01 +01:00
albertony
70b862f026 build: migrate golangci-lint configuration to v2 format 2025-08-22 00:14:01 +01:00
Nick Craig-Wood
321cf23e9c s3: add --s3-use-arn-region flag - fixes #8686 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
7e8d4bd915 Add Binbin Qian to contributors 2025-08-22 00:02:41 +01:00
Nick Craig-Wood
06f45e0ac0 Add Lucas Bremgartner to contributors 2025-08-22 00:02:41 +01:00
Binbin Qian
4af2f01abc docs: add tips about outdated certificates 2025-08-21 08:21:02 +02:00
Lucas Bremgartner
dd3fff6eae FAQ: specify the availability of SSL_CERT_* env vars
SSL_CERT_FILE and SSL_CERT_DIR env vars are only available on Unix systems other than macOS.

Addressing comment https://github.com/rclone/rclone/pull/1977#issuecomment-3201961570
2025-08-20 12:34:04 +01:00
wiserain
ca6631746a pikpak: add file name integrity check during upload
This commit introduces a new validation step to ensure data integrity 
during file uploads.

- The API's returned file name (new.File.Name) is now verified 
  against the requested file name (leaf) immediately after 
  the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error, 
  and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes 
  might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded 
  without client-side awareness.
2025-08-19 22:00:23 +09:00
nielash
e5fe0b1476 bisync: skip TestBisyncConcurrent on non-local
See discussion on
https://github.com/rclone/rclone/pull/8708#discussion_r2280308808
2025-08-18 17:57:14 -04:00
Nick Craig-Wood
4c5764204d internetarchive: fix server side copy files with &
Before this change, server side copy of files with & gave the error:

    Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
    header has bad character

This fix switches to using url.QueryEscape which escapes everything
from url.PathEscape which doesn't escape &.

Fixes #8754
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
d70f40229e Revert "s3: set useAlreadyExists to false for Alibaba OSS"
This reverts commit 64ed9b175f.

This fails the integration tests with

s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
    	Error Trace:	backend/s3/s3_internal_test.go:439
    	Error:      	Should be true
    	Test:       	TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
    	Messages:   	Need to set UseAlreadyExists quirk
2025-08-18 19:37:30 +01:00
Nick Craig-Wood
05b13b47b5 Add huangnauh to contributors 2025-08-18 19:37:30 +01:00
Sudipto Baral
ecd52aa809 smb: improve multithreaded upload performance using multiple connections
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.

However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.

This change creates separate connections for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.

Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
2025-08-18 16:29:18 +01:00
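A rough sketch of the pattern described above, not the backend's real code: getConn is a hypothetical helper that hands each goroutine its own pooled connection, so writers no longer contend for a single one.

```go
package sketch

import (
	"context"
	"io"

	"golang.org/x/sync/errgroup"
)

// uploadChunks writes each chunk at its offset using one connection per
// goroutine. chunks and offsets are assumed to be the same length.
func uploadChunks(ctx context.Context, chunks [][]byte, offsets []int64,
	getConn func(ctx context.Context) (io.WriterAt, func(), error)) error {
	g, gCtx := errgroup.WithContext(ctx)
	for i := range chunks {
		chunk, off := chunks[i], offsets[i]
		g.Go(func() error {
			w, release, err := getConn(gCtx) // own connection for this goroutine
			if err != nil {
				return err
			}
			defer release() // hand the connection back to the pool
			_, err = w.WriteAt(chunk, off)
			return err
		})
	}
	return g.Wait()
}
```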
nielash
269abb1aee bisync: fix data races on tests 2025-08-17 20:16:46 -04:00
nielash
d91cbb2626 bisync: remove unused parameters 2025-08-17 20:16:46 -04:00
nielash
9073d17313 bisync: deglobalize to fix concurrent runs via rc - fixes #8675
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)

This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
2025-08-17 20:16:46 -04:00
huangnauh
cc20d93f47 mount: fix identification of symlinks in directory listings 2025-08-17 12:57:35 +01:00
Nick Craig-Wood
cb1507fa96 s3: fix Content-Type: aws-chunked causing upload errors with --metadata
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.

When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` this produces this error.

    aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied

This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.

Fixes #8724
2025-08-16 17:11:54 +01:00
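A hedged sketch of the kind of cleanup involved, assuming the metadata value is a comma-separated list such as "aws-chunked,application/octet-stream"; the helper name is made up for illustration.

```go
package sketch

import "strings"

// stripAWSChunked drops the "aws-chunked" token from a Content-Type value
// while keeping anything else that is present.
func stripAWSChunked(contentType string) string {
	var kept []string
	for _, part := range strings.Split(contentType, ",") {
		if p := strings.TrimSpace(part); p != "" && !strings.EqualFold(p, "aws-chunked") {
			kept = append(kept, p)
		}
	}
	return strings.Join(kept, ",")
}
```

With that assumption, stripAWSChunked("aws-chunked,application/octet-stream") returns "application/octet-stream".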
Nick Craig-Wood
b0b3b04b3b config: fix problem reading pasted tokens over 4095 bytes
Before this change we were reading input from stdin using the terminal
in the default line mode which has a limit of 4095 characters.

The typical culprit was onedrive tokens (which are very long) giving the error

    Couldn't decode response: invalid character 'e' looking for beginning of value

This change swaps over to use the github.com/peterh/liner read line
library which does not have that limitation and also enables more
sensible cursor editing.

Fixes #8688 #8323 #5835
2025-08-16 16:44:35 +01:00
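A minimal sketch of reading a line with github.com/peterh/liner, which is not subject to the terminal's 4095-byte canonical line limit; this is only the general shape of the call, not rclone's config prompt code.

```go
package sketch

import "github.com/peterh/liner"

// readToken prompts for a (possibly very long) pasted value.
func readToken(prompt string) (string, error) {
	l := liner.NewLiner()
	defer l.Close() // restore the terminal state
	return l.Prompt(prompt)
}
```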
Nick Craig-Wood
8d878d0a5f config: fix test failure on local machine with a config file
This uses a temporary config file instead.
2025-08-16 16:44:00 +01:00
Nick Craig-Wood
8d353039a6 log: add log rotation to --log-file - fixes #2259 2025-08-16 16:38:23 +01:00
Nick Craig-Wood
4b777db20b accounting: Fix stats (speed=0 and eta=nil) when starting jobs via rc
Before this change we used the current context to start the average
loop. This means that if the context came from the rc the average loop
would be cancelled at the end of the rc request leading the speed not
being measured.

This uses the background context for the accounting loop so it doesn't
get cancelled when its parent gets cancelled.
2025-08-16 16:33:38 +01:00
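A sketch of the general pattern, with startAverageLoop as a hypothetical name: the loop is tied to context.Background() and stopped explicitly, rather than inheriting the short-lived rc request context.

```go
package sketch

import (
	"context"
	"time"
)

// startAverageLoop runs update once a second until the returned stop
// function is called; it deliberately ignores any caller context.
func startAverageLoop(update func()) (stop func()) {
	ctx, cancel := context.WithCancel(context.Background()) // not the request ctx
	go func() {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				update() // recompute speed and ETA
			}
		}
	}()
	return cancel
}
```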
Nick Craig-Wood
16ad0c2aef docs: update overview table for oracle object storage 2025-08-16 16:00:14 +01:00
Nick Craig-Wood
e46dec2a94 Add praveen-solanki-oracle to contributors 2025-08-16 16:00:14 +01:00
praveen-solanki-oracle
2b54b63cb3 oracleobjectstorage: add read only metadata support - Fixes #8705 2025-08-16 15:55:53 +01:00
Nick Craig-Wood
f2eb5f35f6 doc: sync doesn't sync symlinks in dest without --links - Fixes #8749 2025-08-16 09:22:31 +01:00
Nick Craig-Wood
d9a36ef45c s3: sort providers in docs 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
eade7710e7 s3: add docs for Exaba Object Storage 2025-08-15 17:38:31 +01:00
Nick Craig-Wood
e6470d998c azureblob: fix double accounting for multipart uploads - fixes #8718
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.

This fixes the problem by adjusting the accounting delay.
2025-08-14 16:59:34 +01:00
Nick Craig-Wood
0c0fb93111 pool: fix deadlock with --max-buffer-memory
Before this change we used an overcomplicated method of memory
reservations in the pool.RW which caused deadlocks.

This changes it to use a much simpler reservation system where we
actually reserve the memory and store it in the pool.RW. This allows
us to use the semaphore.Weighted to count the actually memory in use
(rather than the memory in use and in the cache). This in turn allows
accurate use of the semaphore by users wanting memory.
2025-08-14 16:14:59 +01:00
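A simplified sketch of such a reservation scheme using golang.org/x/sync/semaphore; the real pool also caches and reuses buffers, which is omitted here, and the 1 GiB limit is just an example value.

```go
package sketch

import (
	"context"

	"golang.org/x/sync/semaphore"
)

var memSem = semaphore.NewWeighted(1 << 30) // example: 1 GiB of buffer memory

// getBuffer reserves size bytes before allocating, so the semaphore always
// reflects memory that is actually held.
func getBuffer(ctx context.Context, size int64) ([]byte, error) {
	if err := memSem.Acquire(ctx, size); err != nil {
		return nil, err
	}
	return make([]byte, size), nil
}

// putBuffer returns the reservation when the buffer is no longer needed.
func putBuffer(buf []byte) {
	memSem.Release(int64(len(buf)))
}
```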
Nick Craig-Wood
3f60764bd4 azureblob: fix deadlock with --max-connections with InvalidBlockOrBlob errors
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involved recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.

This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
2025-08-14 16:14:59 +01:00
Nick Craig-Wood
8f84f91666 build: downgrade linter to use go1.24 until it is fixed for go1.25 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
2c91772bf1 build: update all dependencies 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
c3f721755d build: update to go1.25 and make go1.24 the minimum required version 2025-08-13 17:54:45 +01:00
Nick Craig-Wood
8a952583a5 Add Timothy Jacobs to contributors 2025-08-13 17:54:40 +01:00
nielash
fc5bd21e28 bisync: fix time.Local data race on tests - fixes #8272
Before this change, the bisync tests were directly setting the time.Local
variable to UTC.

The reason for overriding the time zone on the tests is to make them
deterministic regardless of where in the world the user happens to be. There are
some goldenized strings which have the time zone hard-coded and would result in a
miscompare failure outside of that time zone.

However, mutating the time.Local variable is not the right way to do this, as OP
correctly pointed out on #8272.

Setting the TZ environment variable from within the code was also not an ideal
solution because, while it worked on unix, it did not work on Windows. See
fbac94a799/src/time/zoneinfo.go (L79-L80)

This change fixes the issue by defining a new bisync.LogTZ setting for use when
printing timestamps in /cmd/bisync/resolve.go. We override this on the tests
instead of time.Local.
2025-08-13 11:58:35 -04:00
nielash
be73a10a97 googlecloudstorage: fix rateLimitExceeded error on bisync tests
Additional to googlecloudstorage's general rate limiting, it apparently has a
separate limit for updating the same object more than once per second:

googleapi: Error 429: The object rclone-test-
demilaf1fexu/015108so/check_access/path2/modtime_write_test exceeded the rate
limit for object mutation operations (create, update, and delete). Please reduce
your request rate. See https://cloud.google.com/storage/docs/gcs429.,
rateLimitExceeded

We were encountering this in the part of the bisync tests where we create an
object, verify that we can edit its modtime, then remove it. We were not
encountering it elsewhere because it only concerns manipulations of the same
object -- not the rate of API calls in general. For the same reason, the standard
pacer is not an effective solution for enforcing this (unless, of course, we
want to slow the entire test down by setting a 1s MinSleep across the board.)

While ideally this would be handled in the backend, this gets around it by
sleeping for 1s in the relevant part of the bisync tests.
2025-08-13 11:58:35 -04:00
Timothy Jacobs
7edf8eb233 accounting: populate transfer snapshot with "what" value 2025-08-13 16:25:38 +01:00
dependabot[bot]
99144dcbba build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 19:39:49 +02:00
dependabot[bot]
8f90f830bd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 17:49:55 +02:00
nielash
456108f29e googlecloudstorage: enable bisync integration tests
These were habitually failing at some point and ignored for that reason, but
seem to be passing now. It is possible that in the interim, the underlying issue
was resolved by another commit. If there is still an issue lurking, the nightly
tests will surely reveal it (and give us a log to look at.)
2025-08-09 18:12:17 -04:00
nielash
f7968aad1c fstest: fix parsing of commas in -remotes
Connection string remotes like "TestGoogleCloudStorage,directory_markers:" use
commas. Before this change, these could not be passed with the -remotes flag,
which expected commas to be used only as separators.

After this change, CSV parsing is used so that commas will be properly
recognized inside a terminal-escaped and quoted value, like:

-remotes local,\"TestGoogleCloudStorage,directory_markers:\"
2025-08-09 18:12:17 -04:00
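A small sketch of the CSV-based parsing, assuming the whole -remotes value arrives as one string after shell unescaping:

```go
package sketch

import (
	"encoding/csv"
	"strings"
)

// parseRemotes splits a -remotes value, honouring quotes so that quoted
// connection strings may contain commas.
func parseRemotes(s string) ([]string, error) {
	return csv.NewReader(strings.NewReader(s)).Read()
}
```

Under that assumption, parseRemotes(`local,"TestGoogleCloudStorage,directory_markers:"`) yields ["local" "TestGoogleCloudStorage,directory_markers:"].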
nielash
2a587d21c4 azurefiles: fix hash getting erased when modtime is set
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.

The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers

This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.

Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
2025-08-09 18:12:17 -04:00
nielash
4b0df05907 bisync: disable --sftp-copy-is-hardlink on sftp tests
Before this change, TestSFTPOpenssh integration tests would fail due to setting
copy_is_hardlink=true in /fstest/testserver/init.d/TestSFTPOpenssh.

For example, if a file was server-side copied from path1 to path2 and then the
bisync tests set the path2 modtime, the path1 modtime would also unexpectedly
mutate.

Hardlinks are not the same as copies. The bisync tests assume that they can
modify a file on one side without affecting a file on the other. This change
essentially sets --sftp-copy-is-hardlink to the default of false for the bisync
tests.
2025-08-09 18:12:17 -04:00
Anagh Kumar Baranwal
a92af34825 local: fix --copy-links on Windows when listing Junction points 2025-08-10 00:33:34 +05:30
Nick Craig-Wood
8ffde402f6 operations: fix too many connections open when using --max-memory
Before this change we opened the connection before allocating memory.
This sometimes meant a long wait for memory and too many connections
open.

Now we allocate the memory first before opening the connection.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
117d8d9fdb pool: fix deadlock with --max-memory and multipart transfers
Because multipart transfers can need more than one buffer to complete,
if transfers was set very high, it was possible for lots of multipart
transfers to start, grab fewer buffers than chunk size, then deadlock
because no more memory was available.

This fixes the problem by introducing a reservation system which the
multipart transfer uses to ensure it can reserve all the memory for
one chunk before starting.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
5050f42b8b pool: unify memory between multipart and asyncreader to use one pool
Before this the multipart code and asyncreader used separate pools
which is inefficient on memory use.
2025-08-07 12:45:44 +01:00
Nick Craig-Wood
fcbcdea067 docs: update links to rcloneui 2025-08-05 16:25:58 +01:00
Nick Craig-Wood
d4e68bf66b docs: add MEGA S4 as a gold sponsor
This also tidies the menu cards.
2025-08-01 12:40:29 +01:00
Nick Craig-Wood
743d160fdd about: fix potential overflow of about in various backends
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.

This patch causes it to clip to the max int64 value instead.
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
dc95f36bc1 box: fix about: cannot unmarshal number 1.0e+18 into Go struct field
Before this change rclone about was failing with

    cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64

As Box increased Enterprise accounts user.space_amount from 30PB to
1e+18 or 888.178PB returning it as a floating point number, not an integer.

This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
2025-07-31 11:38:51 +01:00
Nick Craig-Wood
d3e3af377a oauthutil: fix nil pointer crash when started with expired token 2025-07-31 11:38:51 +01:00
n4n5
db4812fbfa rc: listremotes should send an empty array instead of nil 2025-07-25 15:37:25 +01:00
n4n5
ff9cbab5fa config: add error if RCLONE_CONFIG_PASS was supplied but didn't decrypt config 2025-07-25 11:24:18 +01:00
n4n5
30d8ab5f2f rc: add config/unlock to unlock the config file 2025-07-25 11:19:07 +01:00
Anagh Kumar Baranwal
d71a4195d6 ftp: allow insecure TLS ciphers - fixes #8701
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2025-07-25 10:30:18 +01:00
zjx20
64ed9b175f s3: set useAlreadyExists to false for Alibaba OSS 2025-07-24 23:22:16 +01:00
Nick Craig-Wood
2b10340e4e docs: update sponsors page 2025-07-24 15:19:15 +01:00
Nick Craig-Wood
3c596f8d11 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
6a9c221841 fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:09:51 +01:00
Nick Craig-Wood
c49b24ff90 tests: cloudinary: remove test ignore after merging fix from #8707 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
edbbfd1e86 Add Antonin Goude to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
0e0af7499c Add Yu Xin to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
eb4fe3ef4c Add houance to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
70eb0f21d9 Add Florent Vennetier to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
12378bae27 Add n4n5 to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
3c08c4df3a Add Albin Parou to contributors 2025-07-23 13:12:55 +01:00
Nick Craig-Wood
897509ae10 Add liubingrun to contributors 2025-07-23 13:12:55 +01:00
nielash
0eb7ee2e16 sync: fix testLoggerVsLsf when backend only reads modtime
There are some backends (like PikPak) that advertise a precision of
fs.ModTimeNotSupported but do actually return a modtime when asked. In the case
of PikPak, it is because the modtime can be read but not written, and is not
considered reliable enough to use for syncing.

Before this change, testLoggerVsLsf got confused in this scenario (expected a
blank modtime but got non-blank). Adding to the confusion, it only reaches this
code if the backend happens to support md5 hashes, and the fsrc and fdst have
the same precision.

This change fixes the issue by setting the modtime string on both sides to
"none" in this scenario. Note that we can't use "" (blank) because
(operations.ListFormat).AddModTime would replace that with "2006-01-02 15:04:05".
2025-07-23 12:49:52 +01:00
nielash
c1ebfb7e04 sync: fix testLoggerVsLsf checking wrong fs
Before this change, two tests (TestServerSideCopyOverSelf and
TestServerSideMoveOverSelf) were checking the wrong Fs in the call to
testLoggerVsLsf. This fixes it by making sure we are testing the same two Fs's
we synced.
2025-07-23 12:49:52 +01:00
Nick Craig-Wood
3d62058693 docs: fix make opengraph tags absolute as not all sites understand relative 2025-07-22 18:00:33 +01:00
albertony
122890799f docs: update contributing guide regarding markdown documentation 2025-07-21 20:23:16 +02:00
albertony
65078d5846 build: add markdown linting to workflow 2025-07-21 20:23:16 +02:00
albertony
92f304902d build: add markdownlint configuration 2025-07-21 20:23:16 +02:00
albertony
45477a6c7d docs: minor format cleanup install.md 2025-07-21 20:23:16 +02:00
albertony
79b549b5a4 docs: fix markdownlint issue md049/emphasis-style 2025-07-21 20:23:16 +02:00
albertony
318880b4ad docs: fix markdownlint issue md036/no-emphasis-as-heading 2025-07-21 20:23:16 +02:00
albertony
75521dcf6e docs: fix markdownlint issue md033/no-inline-html 2025-07-21 20:23:16 +02:00
albertony
8bf20dd545 docs: fix markdownlint issue md025/single-title 2025-07-21 20:23:16 +02:00
albertony
744bce1246 docs: fix markdownlint issue md041/first-line-heading 2025-07-21 20:23:16 +02:00
albertony
c817fc5c57 docs: fix markdownlint issue md001/heading-increment 2025-07-21 20:23:16 +02:00
albertony
0bb4d0a985 docs: fix markdownlint issue md003/heading-style 2025-07-21 20:23:16 +02:00
albertony
a8605abd34 docs: fix markdownlint issue md034/no-bare-urls 2025-07-21 20:23:16 +02:00
albertony
953fb4490b docs: fix markdownlint issue md010/no-hard-tabs 2025-07-21 20:23:16 +02:00
albertony
b17c3d18af docs: fix markdownlint issue md013/line-length 2025-07-21 20:23:16 +02:00
albertony
b45580fa19 docs: fix markdownlint issue md038/no-space-in-code 2025-07-21 20:23:16 +02:00
albertony
1c26f40078 docs: fix markdownlint issue md040/fenced-code-language 2025-07-21 20:23:16 +02:00
albertony
667ad093eb docs: fix markdownlint issue md046/code-block-style 2025-07-21 20:23:16 +02:00
albertony
2c369aedf5 docs: fix markdownlint issue md037/no-space-in-emphasis 2025-07-21 20:23:16 +02:00
albertony
7a0d5ab0b4 docs: fix markdownlint issue md059/descriptive-link-text 2025-07-21 20:23:16 +02:00
albertony
75582b804b docs: fix markdownlint issues md007/ul-indent md004/ul-style 2025-07-21 20:23:16 +02:00
albertony
73452551c6 docs: fix markdownlint issue md012/no-multiple-blanks 2025-07-21 20:23:16 +02:00
albertony
cb3cf5068b docs: fix markdownlint issue md058/blanks-around-tables 2025-07-21 20:23:16 +02:00
albertony
428f518771 docs: fix markdownlint issue md022/blanks-around-headings 2025-07-21 20:23:16 +02:00
albertony
0411a41e11 docs: fix markdownlint issue md031/blanks-around-fences 2025-07-21 20:23:16 +02:00
albertony
07b37bcd12 docs: fix markdownlint issue md032/blanks-around-lists 2025-07-21 20:23:16 +02:00
albertony
0506826ff5 docs: fix markdownlint issue md009/no-trailing-spaces 2025-07-21 20:23:16 +02:00
albertony
4fcd36a5ab docs: fix markdownlint issue md014/commands-show-output 2025-07-21 20:23:16 +02:00
albertony
b2f43f39ba docs: fix markdownlint issues md007/ul-indent md004/ul-style (bin/update-authors.py) 2025-07-21 20:23:16 +02:00
albertony
074d73d12b docs: fix markdownlint issues md007/ul-indent md004/ul-style (authors.md) 2025-07-21 20:23:16 +02:00
Nick Craig-Wood
6457bcf51e docs: add opengraph tags for website social media previews 2025-07-21 17:48:23 +01:00
Nick Craig-Wood
8d12519f3d mount: note that bucket based remotes can use directory markers 2025-07-21 17:48:23 +01:00
wiserain
8a7c401366 pikpak: add docs for methods to clarify name collision handling and restrictions 2025-07-21 17:43:15 +01:00
wiserain
0aae8f346f pikpak: enhance Copy method to handle name collisions and improve error management 2025-07-21 17:43:15 +01:00
wiserain
e991328967 pikpak: enhance Move for better handling of error and name collision 2025-07-21 17:43:15 +01:00
Yu Xin
614d02a673 accounting: fix incorrect stats with --transfers=1 - fixes #8670 2025-07-21 16:54:19 +01:00
houance
018ebdded5 rc: fix operations/check ignoring oneWay parameter
Change the param parsing from "oneway" to "oneWay" as a bool value, as the docs
say "oneWay -  check one way only, source files must exist on remote"
2025-07-21 16:41:08 +01:00
Florent Vennetier
fc08983d71 s3: add OVHcloud Object Storage provider
Co-Authored-By: Antonin Goude <antonin.goude@ovhcloud.com>
2025-07-21 16:34:53 +01:00
n4n5
7b61084891 docs: rc: fix description of how to read local config 2025-07-21 15:42:37 +01:00
albertony
d1ac6c2fe1 build: limit check for edits of autogenerated files to only commits in a pull request 2025-07-17 16:20:38 +02:00
albertony
da9c99272c build: extend check for edits of autogenerated files to all commits in a pull request 2025-07-17 16:20:38 +02:00
Sudipto Baral
9c7594d78f smb: refresh Kerberos credentials when ccache file changes
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.

Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
2025-07-17 14:34:44 +01:00
Albin Parou
70226cc653 s3: fix multipart upload and server side copy when using bucket policy SSE-C
When uploading or moving data within an s3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadPartCopy` and
`CompleteMultipartUpload`. But currently rclone doesn't forward those
headers to `CompleteMultipartUpload`.

This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
2025-07-17 14:29:31 +01:00
liubingrun
c20e4bd99c backend/s3: Fix memory leak by cloning strings #8683
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.

Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
2025-07-17 12:31:52 +01:00
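The principle behind the fix, sketched with strings.Clone (Go 1.18+); the field name is illustrative only.

```go
package sketch

import "strings"

// keepKey copies a small substring taken from a large decoded response so
// the response's backing array can be garbage collected.
func keepKey(largeResponseField string) string {
	return strings.Clone(largeResponseField)
}
```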
Nick Craig-Wood
ccfe153e9b purge: exit with a fatal error if filters are set on rclone purge
Fixes #8491
2025-07-17 11:17:08 +01:00
Nick Craig-Wood
c9730bcaaf docs: Add Backblaze as a Platinum sponsor 2025-07-17 11:17:08 +01:00
Nick Craig-Wood
03dd7486c1 Add Sam Pegg to contributors 2025-07-17 11:17:08 +01:00
raider13209
6249009fdf googlephotos: added warning for Google Photos compatibility - fixes #8672 2025-07-17 10:48:12 +01:00
Nick Craig-Wood
8e2d76459f test: remove flakey TestChunkerChunk50bYandex: test 2025-07-16 16:39:57 +01:00
albertony
5e539c6a72 docs: Consolidate entries for Josh Soref in contributors 2025-07-13 14:05:45 +02:00
albertony
8866112400 docs: remove dead link to example of writing a plugin 2025-07-13 13:51:38 +02:00
Nick Craig-Wood
bfdd5e2c22 filescom: document that hashes need to be enabled - fixes #8674 2025-07-11 14:15:59 +01:00
Nick Craig-Wood
f3f16cd2b9 Add Sudipto Baral to contributors 2025-07-11 14:15:59 +01:00
albertony
d84ea2ec52 docs: fix incorrect json syntax in sample output 2025-07-11 13:49:27 +02:00
albertony
b259241c07 docs: ignore author email piyushgarg80
This should merge the two duplicates:
- piyushgarg <piyushgarg80@gmail.com>
- Piyush <piyushgarg80>
2025-07-11 13:49:06 +02:00
albertony
a8ab0730a7 docs: fix header level for --dump option section 2025-07-10 12:36:10 +02:00
albertony
cef207cf94 docs: use stringArray as parameter type 2025-07-10 12:36:10 +02:00
albertony
e728ea32d1 docs: use consistent markdown heading syntax 2025-07-10 12:36:10 +02:00
Nick Craig-Wood
ccdee0420f imagekit: remove server side Copy method as it was downloading and uploading
The Copy method was downloading the file and uploading it again rather
than server side copying it.

It looks from the docs that the upload process can read a URL so this
might be possible, but the removed code is incorrect.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
8a51e11d23 imagekit: don't low level retry uploads
Low level retrying uploads can lead to partial or empty files being
uploaded as the io.Reader has been read in the first attempt.
2025-07-10 11:29:27 +01:00
Nick Craig-Wood
9083f1ff15 imagekit: return correct error when attempting to upload zero length files
Imagekit doesn't support empty files so return correct error for
integration tests to process properly.
2025-07-10 11:29:27 +01:00
Sudipto Baral
2964b1a169 smb: add --smb-kerberos-ccache option to set kerberos ccache per smb backend 2025-07-10 10:17:42 +01:00
Nick Craig-Wood
b6767820de test: fix smb kerberos integration tests
Thanks @sudiptob2 for the tip!
2025-07-09 18:05:29 +01:00
Nick Craig-Wood
821e7fce45 Changelog updates from Version v1.70.3 2025-07-09 16:26:56 +01:00
albertony
b7c6268d3e config: make parsing of duration options consistent
All user visible Durations should be fs.Duration rather than time.Duration. The suffix is then optional and defaults to s. Additional suffixes d, w, M and y are supported, in addition to ms, s, m and h - which are the only ones supported by time.Duration. Absolute times can also be specified, and will be interpreted as duration relative to now.
2025-07-08 12:08:14 +02:00
albertony
521d6b88d4 docs: cleanup usage 2025-07-08 11:28:28 +02:00
albertony
cf767b0856 docs: break long lines 2025-07-08 11:28:28 +02:00
albertony
25f7809822 docs: add option value type to header where missing 2025-07-08 11:28:28 +02:00
albertony
74c0b1ea3b docs: mention that identifiers in option values are case insensitive 2025-07-08 11:28:28 +02:00
albertony
f4dcb1e9cf docs: rewrite dump option examples 2025-07-08 11:28:28 +02:00
albertony
90f1d023ff docs: use markdown inline code format for dump option headers that are real examples 2025-07-08 11:28:28 +02:00
albertony
e9c5f2d4e8 docs: change spelling from server side to server-side 2025-07-08 11:28:28 +02:00
albertony
1249e9b5ac docs: cleanup header casing 2025-07-08 11:28:28 +02:00
albertony
d47bc5f6c4 docs: rename OSX to macOS 2025-07-08 11:28:28 +02:00
albertony
efb1794135 docs: fix list and code block issue 2025-07-08 11:28:28 +02:00
albertony
71b98a03a9 docs: consistent markdown list format 2025-07-08 11:28:28 +02:00
albertony
8e625c6593 docs: split section with general description of options with that documenting actual main options 2025-07-08 11:28:28 +02:00
albertony
6b2cd7c631 docs: improve description of option types 2025-07-08 11:28:28 +02:00
albertony
aa4aead63c docs: use space instead of equal sign to separate option and value in headers 2025-07-08 11:28:28 +02:00
albertony
c491d12cd0 docs: use comma to separate short and long option format in headers 2025-07-08 11:28:28 +02:00
albertony
9e4d703a56 docs: remove use of uncommon parameter types 2025-07-08 11:28:28 +02:00
albertony
fc0c0a7771 docs: remove use of parameter type FILE 2025-07-08 11:28:28 +02:00
albertony
d5cc0d83b0 docs: remove use of parameter type DIR 2025-07-08 11:28:28 +02:00
albertony
52762dc866 docs: remove use of parameter type CONFIG_FILE 2025-07-08 11:28:28 +02:00
albertony
3c092cfc17 docs: change use of parameter type N and NUMBER to int consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
7f3f1af541 docs: change use of parameter type TIME to Duration consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
f885c481f0 docs: change use of parameter type BANDWIDTH_SPEC to BwTimetable consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
865d4b2bda docs: change use of parameter type SIZE to SizeSuffix consistent with flags and cli help 2025-07-08 11:28:28 +02:00
albertony
3cb1e65eb6 docs: cleanup markdown header format 2025-07-08 11:28:28 +02:00
albertony
f667346718 docs: explain separated list parameters 2025-07-08 11:28:28 +02:00
Nick Craig-Wood
c6e1f59415 azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-08 07:50:51 +01:00
Nick Craig-Wood
f353c92852 test: remove and ignore failing integration tests
- remove non docker based Swift tests as they are too slow
- ignore TestChunkerChunk50b test which always fails
2025-07-08 07:48:54 +01:00
albertony
1e88c6a18b docs: explain the json log format in more detail 2025-07-07 10:21:13 +02:00
albertony
7242aed1c3 check: fix difference report (was reporting error counts) 2025-07-07 08:16:55 +01:00
albertony
81e63785fe serve sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
c7937f53d4 serve sftp: extract function refactoring for handling hashsum commands 2025-07-07 09:11:29 +02:00
albertony
58fa1c975f sftp: add support for more hashes (crc32, sha256, blake3, xxh3, xxh128) 2025-07-07 09:11:29 +02:00
albertony
da49fc1b6d local: configurable supported hashes 2025-07-07 09:11:29 +02:00
albertony
df9c921dd5 hash: add support for BLAKE3, XXH3, XXH128 2025-07-07 09:11:29 +02:00
Nick Craig-Wood
d9c227eff6 vfs: make integration TestDirEntryModTimeInvalidation test more reliable
Before this change it was not taking the Precision of the remote into account.
2025-07-06 14:35:16 +01:00
Nick Craig-Wood
524c285d88 smb: skip non integration tests when doing integration tests 2025-07-06 13:39:54 +01:00
Nick Craig-Wood
4107246335 seafile: fix integration test errors by adding dot to encoding
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.

This adds EncodeDot to the encoding which encodes "." and ".." names.
2025-07-05 21:27:10 +01:00
Nick Craig-Wood
87a65ec6a5 linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-05 09:26:43 +01:00
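A sketch of the HEAD-first probe under the assumptions above (hypothetical helper, not the backend's code): stop at the first response, read Location, and fall back to the original URL when there is no redirect.

```go
package sketch

import (
	"context"
	"net/http"
)

// resolveUploadURL probes uploadURL with HEAD, without following redirects,
// and returns the Location target if the server answers with a 3xx.
func resolveUploadURL(ctx context.Context, client *http.Client, uploadURL string) (string, error) {
	noRedirect := *client
	noRedirect.CheckRedirect = func(*http.Request, []*http.Request) error {
		return http.ErrUseLastResponse // keep the 302 instead of following it
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodHead, uploadURL, nil)
	if err != nil {
		return "", err
	}
	resp, err := noRedirect.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if loc := resp.Header.Get("Location"); resp.StatusCode/100 == 3 && loc != "" {
		return loc, nil // PUT the body here instead
	}
	return uploadURL, nil
}
```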
Nick Craig-Wood
c6d0b61982 build: remove integration tests which are too slow
This removes

- TestCompressSwift: - never finishes - too slow - we have TestCompressS3 instead
- TestCryptSwift: - never finishes - too slow - we have TestCryptS3 instead
- TestChunkerChunk50bBox: - often times out - covered by other tests
2025-07-05 09:24:00 +01:00
Nick Craig-Wood
88e30eecbf march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-04 14:52:28 +01:00
wiserain
f904378c4d pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-04 15:27:29 +09:00
wiserain
24eb8dcde0 pikpak: rewrite upload to bypass AWS S3 manager - fixes #8629
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.

* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
  including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
  validation).
* Adjusted default `upload_concurrency` from 5 to 4.
2025-07-04 11:25:12 +09:00
Nick Craig-Wood
a97425d9cb test: fix TestSMBKerberos password expiring errors
ERROR(runtime): uncaught exception - kinit for rclone@RCLONE.LOCAL failed (Password has expired)
2025-07-03 19:31:45 +01:00
Nick Craig-Wood
c51878f9a9 Add Vikas Bhansali to contributors 2025-07-03 19:31:45 +01:00
Nick Craig-Wood
92f0a73ac6 Add Ross Smith II to contributors 2025-07-03 19:31:45 +01:00
Vikas Bhansali
163c149f3f azureblob,azurefiles: add support for client assertion based authentication 2025-07-03 09:57:07 +01:00
WeidiDeng
224ca0ae8e webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime got changed to the wrong object by accident

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-03 09:42:15 +01:00
Ross Smith II
5bf6cd1f4f build: set default shell to bash in build.yml
Per https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#defaultsrunshell
2025-07-02 20:06:57 +01:00
Nick Craig-Wood
555739eec5 docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:37:28 +01:00
Nick Craig-Wood
c036ce90fe Add Davide Bizzarri to contributors 2025-07-02 15:37:28 +01:00
Davide Bizzarri
6163ae7cc7 fix: b2 versionAt read metadata 2025-07-01 17:36:07 +01:00
Nick Craig-Wood
cd950e30cb test: make TestWebdavInfiniteScale startup more reliable
This adds a _connect_delay=5s which allows the server to startup
properly. It also makes sure it stores its config in /tmp rather than
the current working directory.
2025-07-01 17:13:41 +01:00
Nick Craig-Wood
89dfae96ad test_all: add _connect_delay for slow starting servers 2025-07-01 17:13:11 +01:00
Nick Craig-Wood
c0a2d730a6 docs: update link for filescom 2025-06-30 11:09:37 +01:00
Nick Craig-Wood
592407230b test_all: make TestWebdav InfiniteScale integration tests run 2025-06-28 10:57:27 +01:00
Nick Craig-Wood
a7c3ddb482 test_all: make SMB with Kerberos integration tests run properly 2025-06-28 10:56:41 +01:00
Nick Craig-Wood
7a1813c531 test_all: allow an env parameter to set environment variables 2025-06-28 10:55:48 +01:00
Nick Craig-Wood
16e3d1becd Changelog updates from Version v1.70.2 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
c0f6b910ae Add Ali Zein Yousuf to contributors 2025-06-27 14:35:34 +01:00
Nick Craig-Wood
e3bf8dc122 Add $@M@RTH_ to contributors 2025-06-27 14:35:34 +01:00
Ali Zein Yousuf
086a835131 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:22:10 +01:00
$@M@RTH_
d0668de192 s3: add Zata provider 2025-06-26 17:13:19 +01:00
Nick Craig-Wood
4df974ccc4 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
it's Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
2025-06-25 21:19:17 +01:00
Nick Craig-Wood
a50c903a82 docs: Remove Warp as a sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
97a8092c14 docs: add files.com as a Gold sponsor 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
526565b810 docs: add links to SecureBuild docker image 2025-06-25 16:37:09 +01:00
Nick Craig-Wood
64804b81bd Add curlwget to contributors 2025-06-25 16:37:09 +01:00
nielash
e10f516a5e convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
2025-06-25 11:19:50 +01:00
nielash
fe62a2bb4e transform: add truncate_keep_extension and truncate_bytes
This change adds a truncate_bytes mode which counts the number of bytes, as
opposed to the number of UTF-8 characters. This can be useful for ensuring that a
crypt-encoded filename will not exceed the underlying backend's length limits
(see https://forum.rclone.org/t/any-clear-file-name-length-when-using-crypt/36930 ).

This change also adds support for _keep_extension when using truncate and
truncate_bytes.
2025-06-25 11:19:50 +01:00
nielash
d6ecb949ca convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run.)
2025-06-25 11:19:50 +01:00
nielash
a845a96538 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-25 11:19:50 +01:00
curlwget
92f30fda8d docs: fix some function names in comments
Signed-off-by: curlwget <curlwget@icloud.com>
2025-06-24 15:04:45 +01:00
Nick Craig-Wood
559ef2eba8 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-23 17:43:52 +01:00
Nick Craig-Wood
17b25d7ce2 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go which was enabled by the `go 1.22` in `go.mod`
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
2025-06-23 16:39:14 +01:00
Nick Craig-Wood
fe3253eefd Add Marvin Rösch to contributors 2025-06-23 16:39:14 +01:00
dependabot[bot]
c38ca6b2d1 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-20 18:27:36 +01:00
Marvin Rösch
5aa9811084 copy,copyto,move,moveto: implement logger flags to store result of sync
This enables the logger flags (`--combined`, `--missing-on-src`
etc.) for the `rclone copy` and `move` commands (as well as their
`copyto` and `moveto` variants) akin to `rclone sync`. Warnings for
unsupported/wonky flag combinations are also printed, e.g. when the
destination is not traversed but `--dest-after` is specified.

- fs/operations: add reusable methods for operation logging
- cmd/sync: use reusable methods for implementing logging in sync command
- cmd: implement logging for copy/copyto/move/moveto commands
- fs/operations/operationsflags: warn about logs in conjunction with --no-traverse
- cmd: add logger docs to copy and move commands

Fixes #8115
2025-06-20 16:55:00 +01:00
Nick Craig-Wood
3cae373064 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-20 15:26:57 +01:00
Nick Craig-Wood
b6b8526fb4 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-20 12:14:46 +01:00
Nick Craig-Wood
6f86143176 Add necaran to contributors 2025-06-20 12:14:46 +01:00
necaran
beffef2882 mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints: https://github.com/meganz/webclient/issues/103
are no longer supported by default since Go 1.22: https://tip.golang.org/doc/go1.22#minor_library_changes
This therefore assigns the cipher suites explicitly to include the one Mega needs.
2025-06-19 18:05:00 +01:00
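A sketch of assigning cipher suites explicitly; exactly which suite Mega's endpoints require is an assumption here, shown as one of the RSA key-exchange suites dropped from Go 1.22's defaults.

```go
package sketch

import "crypto/tls"

// megaCipherSuites returns Go's current recommended suites plus one legacy
// RSA key-exchange suite (assumed, for illustration, to be what Mega needs).
func megaCipherSuites() []uint16 {
	ids := make([]uint16, 0, len(tls.CipherSuites())+1)
	for _, cs := range tls.CipherSuites() {
		ids = append(ids, cs.ID)
	}
	return append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
}

// usage sketch: &tls.Config{CipherSuites: megaCipherSuites()}
```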
Nick Craig-Wood
96f5bbcdd7 Changelog updates from Version v1.70.1 2025-06-19 14:25:38 +01:00
Nick Craig-Wood
27ce78bee4 Add jinjingroad to contributors 2025-06-19 14:25:29 +01:00
Ed Craig-Wood
898a59062b docs: DOI grammar error 2025-06-19 08:05:38 +02:00
albertony
c5f55243e1 docs: lib/transform: cleanup formatting 2025-06-19 08:04:46 +02:00
albertony
62a9727ab5 lib/transform: avoid empty charmap entry 2025-06-19 08:04:46 +02:00
jinjingroad
16f1e08b73 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 08:02:51 +02:00
Nick Craig-Wood
4280ec75cc convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-18 14:28:14 +01:00
Ed Craig-Wood
b064cc2116 docs: client-credentials is not supported by all backends 2025-06-18 14:06:57 +01:00
Nick Craig-Wood
f8b50f8d8f Start v1.71.0-DEV development 2025-06-18 11:31:52 +01:00
Nick Craig-Wood
9d464e8e9a Version v1.70.0 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
92fea7eb1b ftp: add --ftp-http-proxy to connect via HTTP CONNECT proxy 2025-06-17 17:53:11 +01:00
Nick Craig-Wood
f226d12a2f pcloud: fix "Access denied. You do not have permissions to perform this operation" on large uploads
The API we use for OpenWriterAt seems to have been disabled at pcloud

    PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1

gives

    {
            "result": 2003,
            "error": "Access denied. You do not have permissions to perform this operation."
    }

So disable OpenWriterAt and hence multipart uploads for the moment.
2025-06-17 12:46:35 +01:00
nielash
359260c49d operations: fix TransformFile when can't server-side copy/move 2025-06-16 17:40:19 +01:00
Nick Craig-Wood
125c8a98bb fstest: fix -verbose flag after logging revamp 2025-06-16 17:39:37 +01:00
Nick Craig-Wood
81fccd9c39 googlecloudstorage: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
1dc3421c7f s3: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
073184132e azureblob: fix directory marker after // changes in #5858
Before this change we were creating the directory markers with double
slashes on.
2025-06-16 17:33:40 +01:00
Nick Craig-Wood
476ff65fd7 tests: ignore some more habitually failing tests 2025-06-13 16:25:42 +01:00
Nick Craig-Wood
2847412433 googlephotos: fix typo in error message - Fixes #8600 2025-06-13 14:59:08 +01:00
Nick Craig-Wood
5c81132da0 s3: MEGA S4 support 2025-06-13 11:47:21 +01:00
Nick Craig-Wood
6e1c7b9239 Add Ser-Bul to contributors 2025-06-13 11:47:21 +01:00
nielash
e469c8974c chunker: fix double-transform
Before this change, chunker could double-transform a file under certain
conditions, when --name-transform was in use. This change fixes the issue by
ensuring that --name-transform is disabled during internal file moves.
2025-06-12 18:31:01 +01:00
Ser-Bul
629b427443 docs: mailru: added note about permissions level choice for the apps password 2025-06-12 17:35:42 +01:00
Nick Craig-Wood
108504963c tests: ignore habitually failing tests and backends
This ignores:

- cmd/bisync where it always fails
- cmd/gitannex where it always fails
- sharefile - citrix have refused to give us a testing account
- duplicated sia backend
- iclouddrive - token expiring every 30 days makes it too difficult

It would be nice to fix up these things at some point, but for the
integration test results to be useful they need less noise in them.
2025-06-12 16:24:14 +01:00
Nick Craig-Wood
6aa09fb1d6 docs: link to asciinema rather than including the js 2025-06-12 15:10:56 +01:00
Nick Craig-Wood
bfa6852334 docs: target="_blank" must have rel="noopener" 2025-06-12 15:10:56 +01:00
nielash
63d55d4a39 sync: fix testLoggerVsLsf when dst is local
Before this change, the testLoggerVsLsf function would get confused if given
r.Flocal when expecting r.Fremote. This change makes it agnostic.
2025-06-12 11:11:51 +01:00
kingston125
578ee49550 docs: fix FileLu docs
* Reorder providers alphabetically: moved FileLu above Files.com
* Added FileLu storage to docs.md
2025-06-11 16:25:30 +01:00
Nick Craig-Wood
dda6a863e9 build: update all dependencies
This updates all direct and indirect dependencies

It also stops the linter complaining about deprecated azidentity APIs.
2025-06-09 14:19:53 +01:00
Nick Craig-Wood
99358cee88 onedrive: fix crash if no metadata was updated
Before this change, rclone would crash if no metadata was updated.
This could happen if the --onedrive-metadata-permissions read was
supplied but metadata to write was supplied.

Fixes #8586
2025-06-06 17:40:25 +01:00
Nick Craig-Wood
768a4236e6 Add kingston125 to contributors 2025-06-06 17:40:25 +01:00
Nick Craig-Wood
ffbf002ba8 Add Flora Thiebaut to contributors 2025-06-06 17:40:25 +01:00
kingston125
4a1b5b864c Add FileLu cloud storage backend 2025-06-06 15:15:07 +01:00
Flora Thiebaut
3b3096c940 doi: add new doi backend
Add a new backend to support mounting datasets published with a digital
object identifier (DOI).
2025-06-05 16:40:54 +01:00
Nick Craig-Wood
51fd697c7a build: fix check_autogenerated_edits.py flagging up files that didn't exist
Before this change new backend docs would have their changes flagged
which is undesirable for the first revision.
2025-06-05 16:37:01 +01:00
Nick Craig-Wood
210acb42cd docs: rc: add more info on how to discover _config and _filter parameters #8584 2025-06-05 10:44:33 +01:00
Nick Craig-Wood
6c36615efe s3: add Exaba provider 2025-06-04 17:42:48 +01:00
nielash
d4e2717081 convmv: add convmv command
convmv supports advanced path name transformations for converting and renaming
files and directories by applying prefixes, suffixes, and other alterations.

For example:

rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT

See help doc for complete details.
2025-06-04 17:24:07 +01:00
nielash
013c563293 lib/transform: add transform library and --name-transform flag
lib/transform adds the transform library, supporting advanced path name
transformations for converting and renaming files and directories by applying
prefixes, suffixes, and other alterations.

It also adds the --name-transform flag for use with sync, copy, and move.

Multiple transformations can be used in sequence, applied in the order they are
specified on the command line.

By default --name-transform will only apply to file names. This means only the leaf
file name will be transformed. However some of the transforms would be better
applied to the whole path or just directories. To choose which part of the
file path is affected some tags can be added to the --name-transform:

file	Only transform the leaf name of files (DEFAULT)
dir	Only transform the name of directories - these may appear anywhere in the path
all	Transform the entire path for files and directories

Example syntax:
--name-transform file,prefix=ABC
--name-transform dir,prefix=DEF
2025-06-04 17:24:07 +01:00
nielash
41a407dcc9 march: split src and dst
splits m.key into separate functions for src and dst to prepare for
lib/transform which will want to do transforms on the src side only.

Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2025-06-04 17:24:07 +01:00
Nick Craig-Wood
cf1f5a7af6 Add ahxxm to contributors 2025-06-04 17:24:07 +01:00
Nick Craig-Wood
597872e5d7 Add Nathanael Demacon to contributors 2025-06-04 17:24:07 +01:00
ahxxm
e2d6872745 b2: use file id from listing when not presented in headers - fixes #8113 2025-06-04 16:23:58 +01:00
Nathanael Demacon
ddebca8d42 fs: fix goroutine leak and improve stats accounting process
This fixes the goroutine leak in the stats accounting

- don't start stats average loop when initializing `StatsInfo`
- stop the loop instead of pausing
- use a context instead of a channel
- move `period` variable in `averageValues` struct

Fixes #8570
2025-06-04 14:43:19 +01:00
Nick Craig-Wood
5173ca0454 march: fix syncing with a duplicate file and directory
As part of the out of memory syncing code, in this commit

0148bd4668 march: Implement callback based syncing

we changed the syncing method to use a sorted stream of directory
entries.

Unfortunately as part of this change the sort order of files and
directories became undefined.

This meant that if there existed both a file `foo` and a directory
`foo` in the same directory (as is common on object storage systems)
then these could be matched up incorrectly.

They could be matched up correctly like this

- `foo` (directory) - `foo` (directory)
- `foo` (file)      - `foo` (file)

Or incorrectly like this (one of many possibilities)

- no match          - `foo` (file)
- `foo` (directory) - `foo` (directory)
- `foo` (file)      - no match

Just depending on how the input listings were ordered.

This in turn made container based syncing with a duplicated file and
directory name erratic, deleting files when it shouldn't.

This patch ensures that directories always sync before files by adding
a suffix to the sort key depending on whether the entry was a file or
directory.
2025-06-04 10:54:31 +01:00
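The sort-key idea in miniature; names are illustrative rather than the march code.

```go
package sketch

// sortKey gives entries that share a name a deterministic order,
// with the directory sorting before the file.
func sortKey(name string, isDir bool) string {
	if isDir {
		return name + "\x00"
	}
	return name + "\x01"
}
```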
Nick Craig-Wood
ccac9813f3 Add PrathameshLakawade to contributors 2025-06-04 10:54:31 +01:00
Nick Craig-Wood
9133fd03df Add Oleksiy Stashok to contributors 2025-06-04 10:54:31 +01:00
PrathameshLakawade
2e891f4ff8 docs: fix page_facing_up typo next to Lyve Cloud in README.md 2025-06-04 08:25:17 +02:00
PrathameshLakawade
3c66d9ccb1 backend/s3: require custom endpoint for Lyve Cloud v2 support
Lyve Cloud v2 no longer provides a shared S3 endpoint like v1 did. Instead, each customer receives
a unique, reseller-specific endpoint. To reflect this change, the S3 backend now requires users to
manually enter their endpoint when selecting Lyve Cloud as a provider.
Previously, users selected from a list of hardcoded Lyve Cloud v1 endpoints. This was not compatible
with Lyve Cloud v2 accounts and could cause confusion or misconfiguration.

This change:
- Removes outdated pre-defined endpoint selection for Lyve Cloud
- Requires users to provide their own endpoint
- Adds a format example to guide correct usage

Before: Users selected a fixed endpoint from a list (v1 only)
After:  Users must input their own endpoint (v2-compatible)
2025-06-03 16:19:41 +01:00
Oleksiy Stashok
badf16cc34 backend: skip hash calculation when the hashType is None - fixes #8518
When hashType is None, the `local` backend still runs expensive logic that reads the entire file content only to produce an empty string.
2025-06-03 15:40:50 +01:00
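A rough Go illustration of the short-circuit; the `hashType`/`hashNone` names are invented here and are not the `fs/hash` package's real identifiers:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

type hashType int

const (
	hashNone hashType = iota
	hashMD5
)

// fileHash short-circuits when no hash was requested, so the file is
// never read just to produce an empty string.
func fileHash(path string, t hashType) (string, error) {
	if t == hashNone {
		return "", nil // skip the expensive read entirely
	}
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	sum, err := fileHash("/tmp/example", hashNone)
	fmt.Printf("hash: %q err: %v\n", sum, err) // hash: "" err: <nil> - no read performed
}
```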
Nick Craig-Wood
0ee7cd80f2 azureblob: fix multipart server side copies of 0 sized files
Before this fix multipart server side copies would fail.

This problem was due to an incorrect calculation of the number of
parts to transfer - it calculated 1 part to transfer rather than 0.
2025-06-02 17:22:37 +01:00
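Illustrative only (not the actual azureblob code): the fix amounts to using a ceiling division that yields zero parts for a zero-length source.

```go
package main

import "fmt"

// numParts returns how many chunks are needed to copy size bytes.
// A naive (size/chunkSize)+1 style formula yields 1 for size==0, which
// is the kind of off-by-one the fix addresses; the ceiling division
// below correctly yields 0 parts for an empty file.
func numParts(size, chunkSize int64) int64 {
	return (size + chunkSize - 1) / chunkSize
}

func main() {
	const chunk = 8 * 1024 * 1024
	fmt.Println(numParts(0, chunk))       // 0
	fmt.Println(numParts(1, chunk))       // 1
	fmt.Println(numParts(chunk+1, chunk)) // 2
}
```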
Nick Craig-Wood
aeb43c6a4c Add Jeremy Daer to contributors 2025-06-02 17:22:37 +01:00
Nick Craig-Wood
12322a2141 Add wbulot to contributors 2025-06-02 17:22:37 +01:00
Jeremy Daer
4fd5a3d0a2 s3: add Pure Storage FlashBlade provider support (#8575)
Pure Storage FlashBlade is an enterprise object storage platform that
provides S3-compatible APIs. This change adds FlashBlade as a new
provider option in the S3 backend.

Before this change, FlashBlade users had to use the "Other" provider
with manual configuration of various compatibility flags. This often
resulted in suboptimal performance due to conservative default settings.

After this change, users can select the "FlashBlade" S3 provider and
get an optimal configuration:

- ListObjectsV2 enabled for better performance
- AWS-compatible multipart ETags for reliable transfers
- Proper handling of "AlreadyOwnedByYou" bucket creation responses
- Path-style URLs by default (virtual-host style with DNS setup)
- Unsigned payloads to ensure compatibility with all rclone features

FlashBlade supports modern S3 features including trailer checksum
algorithms (SHA256, CRC32, CRC32C), object versioning, and lifecycle
management.

Provider settings were verified by testing against a FlashBlade//E
system running Purity//FB 4.5.7.

Documentation and test configurations are included.

Integration test results:
```
go test -v -fast-list -remote TestS3FlashBlade:
PASS
ok  	github.com/rclone/rclone/backend/s3	232.444s
```
2025-05-30 12:35:13 +01:00
wbulot
3594330177 backend/gofile: update to use new direct upload endpoint
Update the Gofile backend to use the new direct upload endpoint based on the latest API changes.
The previous implementation used dynamic server selection, but Gofile has simplified their API
to use a single upload endpoint at https://upload.gofile.io/uploadfile.

This change:
- Removes server selection logic and related code
- Simplifies the Fs struct by removing server-related fields
- Updates the upload process to use the direct upload URL
2025-05-27 14:28:25 +01:00
Nick Craig-Wood
15510c66d4 log: add --windows-event-log-level to support Windows Event Log
This provides JSON logs in the Windows Event Log.
2025-05-23 11:27:49 +01:00
Nick Craig-Wood
dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog
This removes logrus which is not developed any more and replaces it
with the new log/slog from the Go standard library.

It implements its own slog Handler which is backwards compatible with
all of rclone's previous logging modes.
2025-05-23 11:27:49 +01:00
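A minimal sketch of a custom `log/slog` Handler in the spirit described above; this is not rclone's handler, just an illustration of the interface it implements:

```go
package main

import (
	"context"
	"fmt"
	"log/slog"
	"os"
)

// textHandler is a minimal slog.Handler that formats records in a plain
// "TIME LEVEL message key=value" style, roughly the shape of classic
// rclone log output (illustrative only).
type textHandler struct {
	level slog.Level
	attrs []slog.Attr
}

func (h *textHandler) Enabled(_ context.Context, l slog.Level) bool {
	return l >= h.level
}

func (h *textHandler) Handle(_ context.Context, r slog.Record) error {
	line := fmt.Sprintf("%s %s %s", r.Time.Format("2006/01/02 15:04:05"), r.Level, r.Message)
	for _, a := range h.attrs {
		line += fmt.Sprintf(" %s=%v", a.Key, a.Value)
	}
	r.Attrs(func(a slog.Attr) bool {
		line += fmt.Sprintf(" %s=%v", a.Key, a.Value)
		return true
	})
	_, err := fmt.Fprintln(os.Stderr, line)
	return err
}

func (h *textHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	nh := *h
	nh.attrs = append(append([]slog.Attr{}, h.attrs...), attrs...)
	return &nh
}

func (h *textHandler) WithGroup(string) slog.Handler { return h }

func main() {
	slog.SetDefault(slog.New(&textHandler{level: slog.LevelInfo}))
	slog.Info("copied", "objects", 3)
}
```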
Nick Craig-Wood
36b89960e3 Add fhuber to contributors 2025-05-23 11:27:49 +01:00
fhuber
a3f3fc61ee cmd serve s3: fix ListObjectsV2 response
Add a trailing slash to the s3 ListObjectsV2 response because some clients expect a trailing forward slash to distinguish whether the returned object is a directory.

Fixes #8464
2025-05-22 22:27:38 +01:00
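A trivial Go sketch of the behaviour (illustrative, not the serve s3 code): directory keys in the response gain a trailing slash if they do not already have one.

```go
package main

import (
	"fmt"
	"strings"
)

// directoryKey ensures a directory entry in a ListObjectsV2 response
// carries a trailing slash, which is how many S3 clients distinguish
// directories from ordinary objects.
func directoryKey(name string) string {
	if !strings.HasSuffix(name, "/") {
		name += "/"
	}
	return name
}

func main() {
	fmt.Println(directoryKey("photos"))  // photos/
	fmt.Println(directoryKey("photos/")) // photos/
}
```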
Nick Craig-Wood
b8fde4fc46 Changelog updates from Version v1.69.3 2025-05-22 09:55:00 +01:00
Nick Craig-Wood
c37fe733df onedrive: re-add --onedrive-upload-cutoff flag
This was removed as part of #1716 to fix rclone uploads taking double
the space.

7f744033d8 onedrive: Removed upload cutoff and always do session uploads

As far as I can see, two revisions are still being created for single
part uploads, so the default for this flag is set to -1 (off).

However it may be useful for experimentation.

See: #8545
2025-05-15 15:25:10 +01:00
Nick Craig-Wood
b31659904f onedrive: fix "The upload session was not found" errors
Before this change, perhaps on heavily loaded SharePoint servers,
uploads would sometimes fail with the error:

{"error":{"code":"itemNotFound","message":"The upload session was not found"}}

This retries the upload after a 5 second delay up to --low-level-retries times.

Fixes #8545
2025-05-15 15:25:10 +01:00
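A hedged Go sketch of the retry pattern the commit describes; the helper below is invented for illustration and is not the onedrive backend's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

var errSessionNotFound = errors.New(`{"error":{"code":"itemNotFound","message":"The upload session was not found"}}`)

// uploadWithRetry retries the upload after a fixed delay when the
// transient "upload session was not found" error comes back, up to
// maxTries attempts (mirroring --low-level-retries).
func uploadWithRetry(upload func() error, maxTries int, delay time.Duration) error {
	var err error
	for try := 1; try <= maxTries; try++ {
		err = upload()
		if err == nil || !strings.Contains(err.Error(), "The upload session was not found") {
			return err
		}
		fmt.Printf("upload session lost, retrying in %v (attempt %d/%d)\n", delay, try, maxTries)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// The commit describes a 5 second delay; shortened here so the example runs quickly.
	err := uploadWithRetry(func() error {
		calls++
		if calls < 3 {
			return errSessionNotFound
		}
		return nil
	}, 10, 5*time.Millisecond)
	fmt.Println("result:", err, "after", calls, "calls")
}
```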
Nick Craig-Wood
ebcf51336e Add Germán Casares to contributors 2025-05-15 15:25:10 +01:00
Nick Craig-Wood
a334bba643 Add Jeff Geerling to contributors 2025-05-15 15:25:10 +01:00
Germán Casares
d4fd93e7f3 googlephotos: update read only and read write scopes to meet Google's requirements.
As part of changes to the Google Photos APIs the scopes rclone used
for accessing Google photos have been removed.

This commit replaces the scopes with updated ones.

These aren't as powerful as the old scopes - this means rclone will
only be able to download photos it uploaded from March 31, 2025.

To use these new scopes do `rclone reconnect yourgooglephotosremote:`

Fixes #8434

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2025-05-12 16:43:23 +01:00
albertony
6644bdba0f build: update github.com/ebitengine/purego to v0.8.3 to fix mac_amd64 build
Fixes #8552
2025-05-12 09:08:15 +02:00
albertony
68a65e878f docs: add hint about config touch and config file not found 2025-05-09 08:30:34 +01:00
Jeff Geerling
7606ad8294 docs: add FAQ for dismissing 'rclone.conf not found'
See: https://forum.rclone.org/t/notice-about-missing-rclone-conf-is-annoying/51116
2025-05-09 08:23:31 +02:00
Nick Craig-Wood
32847e88b4 docs: document how to keep an out of tree backend 2025-05-08 17:16:28 +01:00
Nick Craig-Wood
2e879586bd Add Clément Wehrung to contributors 2025-05-08 17:16:28 +01:00
Clément Wehrung
9d55b2411f iclouddrive: fix panic and files potentially downloaded twice
- Fix SIGSEGV - fixes #8211
- Avoid files potentially being downloaded twice
2025-05-07 18:00:33 +01:00
Nick Craig-Wood
fe880c0fac docs: move --max-connections documentation to the correct place 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
b160089be7 Add Ben Boeckel to contributors 2025-05-06 15:23:55 +01:00
Nick Craig-Wood
c2254164f8 Add Tho Neyugn to contributors 2025-05-06 15:23:55 +01:00
Ben Boeckel
e57b94c4ac docs: fix typo in s3/storj docs 2025-05-04 18:57:47 +02:00
661 changed files with 117295 additions and 53668 deletions

View File

@@ -23,15 +23,18 @@ jobs:
build:
if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
timeout-minutes: 60
defaults:
run:
shell: bash
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -42,14 +45,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -58,14 +61,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -75,14 +78,14 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.24.0-rc.1'
go: '>=1.25.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.23
- job_name: go1.24
os: ubuntu-latest
go: '1.23'
go: '1.24'
quicktest: true
racequicktest: true
@@ -92,18 +95,17 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ matrix.go }}
check-latest: true
- name: Set environment variables
shell: bash
run: |
echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
@@ -112,7 +114,6 @@ jobs:
if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
- name: Install Libraries on Linux
shell: bash
run: |
sudo modprobe fuse
sudo chmod 666 /dev/fuse
@@ -122,7 +123,6 @@ jobs:
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
shell: bash
run: |
# https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
# https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008
@@ -151,7 +151,6 @@ jobs:
if: matrix.os == 'windows-latest'
- name: Print Go version and environment
shell: bash
run: |
printf "Using go at: $(which go)\n"
printf "Go version: $(go version)\n"
@@ -163,29 +162,24 @@ jobs:
env
- name: Build rclone
shell: bash
run: |
make
- name: Rclone version
shell: bash
run: |
rclone version
- name: Run tests
shell: bash
run: |
make quicktest
if: matrix.quicktest
- name: Race test
shell: bash
run: |
make racequicktest
if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
@@ -193,14 +187,12 @@ jobs:
if: matrix.librclonetest
- name: Compile all architectures test
shell: bash
run: |
make
make compile_all
if: matrix.compile_all
- name: Deploy built binaries
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
make ci_beta
@@ -219,21 +211,20 @@ jobs:
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Install Go
id: setup-go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.23.0-rc.1'
go-version: '>=1.24.0-rc.1'
check-latest: true
cache: false
@@ -248,13 +239,13 @@ jobs:
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
env:
GOOS: "windows"
with:
@@ -262,7 +253,7 @@ jobs:
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
env:
GOOS: "darwin"
with:
@@ -270,7 +261,7 @@ jobs:
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
env:
GOOS: "freebsd"
with:
@@ -278,7 +269,7 @@ jobs:
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
env:
GOOS: "openbsd"
with:
@@ -291,8 +282,21 @@ jobs:
- name: Scan for vulnerabilities
run: govulncheck ./...
- name: Check Markdown format
uses: DavidAnson/markdownlint-cli2-action@v20
with:
globs: |
CONTRIBUTING.md
MAINTAINERS.md
README.md
RELEASE.md
CODE_OF_CONDUCT.md
librclone\README.md
backend\s3\README.md
docs/content/{_index,authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
- name: Scan edits of autogenerated files
run: bin/check_autogenerated_edits.py
run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
if: github.event_name == 'pull_request'
android:
@@ -303,18 +307,17 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: '>=1.24.0-rc.1'
go-version: '>=1.25.0-rc.1'
- name: Set global environment variables
shell: bash
run: |
echo "VERSION=$(make version)" >> $GITHUB_ENV
@@ -333,7 +336,6 @@ jobs:
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -347,7 +349,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -360,7 +361,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -373,7 +373,6 @@ jobs:
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

View File

@@ -52,7 +52,7 @@ jobs:
df -h .
- name: Checkout Repository
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -92,7 +92,7 @@ jobs:
# There's no way around this, because "ImageOS" is only available to
# processes, but the setup-go action uses it in its key.
id: imageos
uses: actions/github-script@v7
uses: actions/github-script@v8
with:
result-encoding: string
script: |
@@ -183,7 +183,7 @@ jobs:
touch "/tmp/digests/${digest#sha256:}"
- name: Upload Image Digest
uses: actions/upload-artifact@v4
uses: actions/upload-artifact@v5
with:
name: digests-${{ env.PLATFORM }}
path: /tmp/digests/*
@@ -198,7 +198,7 @@ jobs:
steps:
- name: Download Image Digests
uses: actions/download-artifact@v4
uses: actions/download-artifact@v6
with:
path: /tmp/digests
pattern: digests-*

View File

@@ -30,7 +30,7 @@ jobs:
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
- name: Build and publish docker plugin

View File

@@ -1,144 +1,151 @@
# golangci-lint configuration options
version: "2"
linters:
# Configure the linter set. To avoid unexpected results the implicit default
# set is ignored and all the ones to use are explicitly enabled.
default: none
enable:
# Default
- errcheck
- goimports
- revive
- ineffassign
- govet
- unconvert
- ineffassign
- staticcheck
- gosimple
- stylecheck
- unused
- misspell
# Additional
- gocritic
#- prealloc
#- maligned
disable-all: true
- misspell
#- prealloc # TODO
- revive
- unconvert
# Configure checks. Mostly using defaults but with some commented exceptions.
settings:
govet:
enable-all: true
disable:
- fieldalignment
- shadow
staticcheck:
# With staticcheck there is only one setting, so to extend the implicit
# default value it must be explicitly included.
checks:
# Default
- all
- -ST1000
- -ST1003
- -ST1016
- -ST1020
- -ST1021
- -ST1022
# Disable quickfix checks
- -QF*
gocritic:
# With gocritic there are different settings, but since enabled-checks
# and disabled-checks cannot both be set, for full customization the
# alternative is to disable all defaults and explicitly enable the ones
# to use.
disable-all: true
enabled-checks:
#- appendAssign # Skip default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Skip default
- caseOrder
- codegenComment
#- commentFormatting # Skip default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Skip default
- flagDeref
- flagName
#- ifElseChain # Skip default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Enable additional check that are not enabled by default
#- singleCaseSwitch # Skip default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: ${base-path}/bin/rules.go
revive:
# With revive there is in reality only one setting, and when at least one
# rule are specified then only these rules will be considered, defaults
# and all others are then implicitly disabled, so must explicitly enable
# all rules to be used.
rules:
- name: blank-imports
disabled: false
- name: context-as-argument
disabled: false
- name: context-keys-type
disabled: false
- name: dot-imports
disabled: false
#- name: empty-block # Skip default
# disabled: true
- name: error-naming
disabled: false
- name: error-return
disabled: false
- name: error-strings
disabled: false
- name: errorf
disabled: false
- name: exported
disabled: false
#- name: increment-decrement # Skip default
# disabled: true
- name: indent-error-flow
disabled: false
- name: package-comments
disabled: false
- name: range
disabled: false
- name: receiver-naming
disabled: false
#- name: redefines-builtin-id # Skip default
# disabled: true
#- name: superfluous-else # Skip default
# disabled: true
- name: time-naming
disabled: false
- name: unexported-return
disabled: false
#- name: unreachable-code # Skip default
# disabled: true
#- name: unused-parameter # Skip default
# disabled: true
- name: var-declaration
disabled: false
- name: var-naming
disabled: false
formatters:
enable:
- goimports
issues:
# Enable some lints excluded by default
exclude-use-default: false
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0
exclude-rules:
- linters:
- staticcheck
text: 'SA1019: "github.com/rclone/rclone/cmd/serve/httplib" is deprecated'
# don't disable the revive messages about comments on exported functions
include:
- EXC0012
- EXC0013
- EXC0014
- EXC0015
run:
# timeout for analysis, e.g. 30s, 5m, default is 1m
# Timeout for total work, e.g. 30s, 5m, 5m30s. Default is 0 (disabled).
timeout: 10m
linters-settings:
revive:
# setting rules seems to disable all the rules, so re-enable them here
rules:
- name: blank-imports
disabled: false
- name: context-as-argument
disabled: false
- name: context-keys-type
disabled: false
- name: dot-imports
disabled: false
- name: empty-block
disabled: true
- name: error-naming
disabled: false
- name: error-return
disabled: false
- name: error-strings
disabled: false
- name: errorf
disabled: false
- name: exported
disabled: false
- name: increment-decrement
disabled: true
- name: indent-error-flow
disabled: false
- name: package-comments
disabled: false
- name: range
disabled: false
- name: receiver-naming
disabled: false
- name: redefines-builtin-id
disabled: true
- name: superfluous-else
disabled: true
- name: time-naming
disabled: false
- name: unexported-return
disabled: false
- name: unreachable-code
disabled: true
- name: unused-parameter
disabled: true
- name: var-declaration
disabled: false
- name: var-naming
disabled: false
stylecheck:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"

72
.markdownlint.yml Normal file
View File

@@ -0,0 +1,72 @@
default: true
# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
style: atx
ul-style: # MD004
style: dash
hr-style: # MD035
style: ---
code-block-style: # MD046
style: fenced
code-fence-style: # MD048
style: backtick
emphasis-style: # MD049
style: asterisk
strong-style: # MD050
style: asterisk
# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
siblings_only: true
# Allow long lines in code blocks and tables.
line-length: # MD013
code_blocks: false
tables: false
# The Markdown files used to generated docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
level: 1
front_matter_title:
# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051
# Restrict the languages and language identifiers to use for code blocks.
# We only want those supported by both Hugo and GitHub. These are documented
# here:
# https://gohugo.io/content-management/syntax-highlighting/#languages
# https://docs.github.com//get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks#syntax-highlighting
# In addition, we only want to allow identifiers (aliases) that correspond to
# the same language in Hugo and GitHub, and preferably also VSCode and other
# commonly used tools, to avoid confusion. An example of this is that "shell"
# by some are considered an identifier for shell scripts, i.e. an alias for
# "sh", while others consider it an identifier for shell sessions, i.e. an
# alias for "console". Although Hugo and GitHub in this case are consistent and
# have chosen the former, using "sh" instead, and not allowing use of "shell",
# avoids the confusion entirely.
fenced-code-language: # MD040
allowed_languages:
- text
- console
- sh
- bat
- ini
- json
- yaml
- go
- python
- c++
- c#
- java
- powershell

80
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,80 @@
# Rclone Code of Conduct
Like the technical community as a whole, the Rclone team and community
is made up of a mixture of professionals and volunteers from all over
the world, working on every aspect of the mission - including
mentorship, teaching, and connecting people.
Diversity is one of our huge strengths, but it can also lead to
communication issues and unhappiness. To that end, we have a few
ground rules that we ask people to adhere to. This code applies
equally to founders, mentors and those seeking help and guidance.
This isn't an exhaustive list of things that you can't do. Rather,
take it in the spirit in which it's intended - a guide to make it
easier to enrich all of us and the technical communities in which we
participate.
This code of conduct applies to all spaces managed by the Rclone
project or Rclone Services Ltd. This includes the issue tracker, the
forum, the GitHub site, the wiki, any other online services or
in-person events. In addition, violations of this code outside these
spaces may affect a person's ability to participate within them.
- **Be friendly and patient.**
- **Be welcoming.** We strive to be a community that welcomes and
supports people of all backgrounds and identities. This includes,
but is not limited to members of any race, ethnicity, culture,
national origin, colour, immigration status, social and economic
class, educational level, sex, sexual orientation, gender identity
and expression, age, size, family status, political belief,
religion, and mental and physical ability.
- **Be considerate.** Your work will be used by other people, and you
in turn will depend on the work of others. Any decision you take
will affect users and colleagues, and you should take those
consequences into account when making decisions. Remember that we're
a world-wide community, so you might not be communicating in someone
else's primary language.
- **Be respectful.** Not all of us will agree all the time, but
disagreement is no excuse for poor behavior and poor manners. We
might all experience some frustration now and then, but we cannot
allow that frustration to turn into a personal attack. It's
important to remember that a community where people feel
uncomfortable or threatened is not a productive one. Members of the
Rclone community should be respectful when dealing with other
members as well as with people outside the Rclone community.
- **Be careful in the words that you choose.** We are a community of
professionals, and we conduct ourselves professionally. Be kind to
others. Do not insult or put down other participants. Harassment and
other exclusionary behavior aren't acceptable. This includes, but is
not limited to:
- Violent threats or language directed against another person.
- Discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally
identifying information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behavior.
- Repeated harassment of others. In general, if someone asks you to
stop, then stop.
- **When we disagree, try to understand why.** Disagreements, both
social and technical, happen all the time and Rclone is no
exception. It is important that we resolve disagreements and
differing views constructively. Remember that we're different. The
strength of Rclone comes from its varied community, people from a
wide range of backgrounds. Different people have different
perspectives on issues. Being unable to understand why someone holds
a viewpoint doesn't mean that they're wrong. Don't forget that it is
human to err and blaming each other doesn't get us anywhere.
Instead, focus on helping to resolve issues and learning from
mistakes.
If you believe someone is violating the code of conduct, we ask that
you report it by emailing [info@rclone.com](mailto:info@rclone.com).
Original text courtesy of the [Speak Up! project](http://web.archive.org/web/20141109123859/http://speakup.io/coc.html).
## Questions?
If you have questions, please feel free to [contact us](mailto:info@rclone.com).

View File

@@ -15,61 +15,81 @@ with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to obscure them
- A log of the command with the `-vv` flag (e.g. output from
`rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to
obscure them
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues)
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Then [install Git](https://git-scm.com/downloads) and set your public contribution
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
Next open your terminal, change directory to your preferred folder and initialise
your local rclone project:
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```console
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
# otherwise:
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
Note that most of the terminal commands in the rest of this guide must be
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
go version
```console
go version
```
Great, you can now compile and execute your own version of rclone:
go build
./rclone version
```console
go build
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature
git checkout -b my-new-feature
```console
git checkout -b my-new-feature
```
And get hacking.
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins)
and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the code you changed
When ready - test the affected functionality and run the unit tests for the
code you changed
cd folder/with/changed/files
go test -v
```console
cd folder/with/changed/files
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
This is typically enough if you made a simple bug fix, otherwise please read
the rclone [testing](#testing) section too.
Make sure you
@@ -79,14 +99,19 @@ Make sure you
When you are done with that push your changes to GitHub:
git push -u origin my-new-feature
```console
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
Your changes will then get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master)
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub
@@ -94,87 +119,118 @@ You may sometimes be asked to [base your changes on the latest master](#basing-y
Follow the guideline for [commit messages](#commit-messages) and then:
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```console
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using:
git commit --amend
```console
git commit --amend
```
If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you amend to commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
Note that you are about to rewrite the GitHub history of your branch. It is good
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by:
git push --force origin my-new-feature
```console
git push --force origin my-new-feature
```
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```console
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you rebase commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits ###
### Squashing your commits
To combine your commits into one commit:
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```console
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit:
git commit # To commit the undone commits as one
```console
git commit # To commit the undone commits as one
```
otherwise, you may roll back using:
git reflog # To check that HEAD{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```console
git reflog # To check that HEAD{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
If you squash commits that have been pushed to GitHub, then you will have to
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
Tip: You may like to use `git rebase -i master` if you are experienced or have a
more complex situation.
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions)
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests as get run in the CI which can be very helpful.
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then
you can run the same tests as get run in the CI which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding standards. These tests also check for easy mistakes to make (like forgetting to check an error return).
Using these tests ensures that the rclone codebase all uses the same coding
standards. These tests also check for easy mistakes to make (like forgetting
to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
```console
go test -v ./...
```
You can also use `make`, if supported by your platform
make quicktest
```console
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
The quicktest is [automatically run by GitHub](#github-continuous-integration)
when you push your branch to GitHub.
### Backend testing
@@ -190,41 +246,50 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined.
cd backend/drive
go test -v
```console
cd backend/drive
go test -v
```
You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system,
but they can be run against any of the remotes.
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
```console
cd fs/sync
go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list
cd fs/operations
go test -v -remote TestDrive:
cd fs/operations
go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backends drive
```console
go run ./fstest/test_all -backends drive
```
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make check
make test
```console
make check
make test
```
The commands may require some extra go packages which you can install with
make build_dep
```console
make build_dep
```
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
find the results at <https://integration.rclone.org>
## Code Organisation
@@ -232,46 +297,48 @@ Rclone code is organised into a small number of top level directories
with modules beneath.
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only - everything else is autogenerated
- command - these are auto-generated - edit the corresponding .go file
- content - adjust these docs only, except those marked autogenerated
or portions marked autogenerated where the corresponding .go file must be
edited instead, and everything else is autogenerated
- commands - these are auto-generated, edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
@@ -279,47 +346,109 @@ with modules beneath.
If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format and style is checked as part of the lint
operation that runs automatically on pull requests, to enforce standards and
consistency. This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool by David Anson, which can also be integrated into editors so you can
perform the same checks while writing. It generally follows Ciro Santilli's
[Markdown Style Guide](https://cirosantilli.com/markdown-style-guide), which
is good source if you want to know more.
HTML pages, served as website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
there is currently used a different algorithm for generating header anchors
than what GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means when
linking to a header with text `--config string` we therefore need to use the
link `#config-string` in our Markdown source, which will not work in GitHub's
preview where `#--config-string` would be the correct link.
Most of the documentation are written directly in text files with extension
`.md`, mainly within folder `docs/content`. Note that several of such files
are autogenerated (e.g. the command documentation, and `docs/content/flags.md`),
or contain autogenerated portions (e.g. the backend documentation under
`docs/content/commands`). These are marked with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
the source file in the `Help:` field:
- Start with the most important information about the option,
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
- Each example value have their own `Help:` field, but they are treated
a bit different than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of
countries, it looks better without an ending period/full stop character.
- Each example value have their own `Help:` field, but they are treated
a bit different than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of
countries, it looks better without an ending period/full stop character.
- You can run `make backenddocs` to verify the resulting Markdown.
- This will update the autogenerated sections of the backend docs Markdown
files under `docs/content`.
- It requires you to have [Python](https://www.python.org) installed.
- The `backenddocs` make target runs the Python script `bin/make_backend_docs.py`,
and you can also run this directly, optionally with the name of a backend
as argument to only update the docs for a specific backend.
- **Do not** commit the updated Markdown files. This operation is run as part of
the release process. Since any manual changes in the autogenerated sections
of the Markdown files will then be lost, we have a pull request check that
reports error for any changes within the autogenerated sections. Should you
have done manual changes outside of the autogenerated sections they must be
committed, of course.
- You can run `make serve` to verify the resulting website.
- This will build the website and serve it locally, so you can open it in
your web browser and verify that the end result looks OK. Check specifically
any added links, also in light of the note above regarding different algorithms
for generated header anchors.
- It requires you to have the [Hugo](https://gohugo.io) tool available.
- The `serve` make target depends on the `website` target, which runs the
`hugo` command from the `docs` directory to build the website, and then
it serves the website locally with an embedded web server using a command
`hugo server --logLevel info -w --disableFastRender --ignoreCache`, so you
can run similar Hugo commands directly as well.
The only documentation you need to edit are the `docs/content/*.md`
files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).
Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
line, without a period/full stop character at the end, as it will be
combined unmodified with other information (such as any default value).
If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
After your changes have been merged, you can verify them on
[tip.rclone.org](https://tip.rclone.org). This site is updated daily with the
current state of the master branch at 07:00 UTC. The changes will be on the main
[rclone.org](https://rclone.org) site once they have been included in a release.
## Making a release
@@ -350,13 +479,13 @@ change will get linked into the issue.
Here is an example of a short commit message:
```
```text
drive: add team drive support - fixes #885
```
And here is an example of a longer one:
```
```text
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
@@ -379,7 +508,9 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
go get github.com/ncw/new_dependency
```console
go get github.com/ncw/new_dependency
```
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
@@ -391,7 +522,9 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run
go get golang.org/x/crypto
```console
go get golang.org/x/crypto
```
Check in a single commit as above.
@@ -434,25 +567,38 @@ remote or an fs.
### Getting going
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- box is a good one to start from if you have a directory-based remote (and
shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
- HTTP based remotes are easiest to maintain if they use rclone's
[lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
make sure we can encode any path name and `rclone info` to help determine the
encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
### Guidelines for a speedy merge
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **Don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
if your backend is HTTP based - this adds features like `--dump bodies`,
`--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function
names, layout, structure. **Don't** move stuff around and **Don't** delete the
comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
- **Remember** we have >50 backends to maintain so keeping them as similar as
possible to each other is a high priority!
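As a rough sketch of the first two guidelines (nothing here is taken from a
real backend - the API root, paths and response handling are invented), an
HTTP-based backend would typically build its `lib/rest` client on top of
`fs/fshttp` like this:

```go
package remote

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

// newClient builds a rest.Client on rclone's HTTP client so that
// --dump bodies, --tpslimit and --user-agent work with no extra code.
// The API root here is a placeholder.
func newClient(ctx context.Context) *rest.Client {
	return rest.NewClient(fshttp.NewClient(ctx)).SetRoot("https://api.example.com/v1")
}

// getItem sketches a typical JSON call - a real backend would define the
// response type in api/types.go rather than decoding into a map.
func getItem(ctx context.Context, srv *rest.Client, id string) (map[string]any, error) {
	opts := rest.Opts{
		Method: "GET",
		Path:   "/items/" + id,
	}
	var result map[string]any
	_, err := srv.CallJSON(ctx, &opts, nil, &result)
	if err != nil {
		return nil, fmt.Errorf("getItem: %w", err)
	}
	return result, nil
}
```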
### Unit tests
@@ -463,19 +609,19 @@ remote or an fs.
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
- go install ./...
- test_all -backends remote
- Once you've done that then you can use the integration test framework from
the project root:
- `go run ./fstest/test_all -backends remote`
Or if you want to run the integration tests manually:
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
@@ -487,10 +633,13 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/remote.md` - main docs page (note the backend options are
automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your
reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -501,74 +650,55 @@ in the web browser and the links (internal and external) all work.
## Adding a new s3 provider
It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (eg `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in github
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
- Add the provider and a link to the section you wrote in `docs/contents/s3.md`
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
`Other` last.
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
Once you've written the code, test `rclone config` works to your
satisfaction, and check the integration tests work `go test -v -remote
NewS3Provider:`. You may need to adjust the quirks to get them to
pass. Some providers just can't pass the tests with control characters
in the names so if these fail and the provider doesn't support
`urlEncodeListings` in the quirks then ignore them. Note that the
`SetTier` test may also fail on non AWS providers.
For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone/rclone/commit/eb3082a1ebdb76d5625f14cedec3f5154a5e7b10).
[Please see the guide in the S3 backend directory](backend/s3/README.md).
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.
### Usage
- Naming
- Plugins names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
- Naming
  - Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
- `KIND` should be one of `backend`, `command` or `bundle`.
- Example: A plugin with backend support for PiFS would be called
`librcloneplugin_backend_pifs.so`.
- Loading
- Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
- Supported on rclone v1.50 or greater.
- All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
- If this variable doesn't exist, plugin support is disabled.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source
of rclone)
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
Check `rclone --version` and make sure that the plugin's rclone dependency and host Go version match.
Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.
Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.
[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
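For orientation, here is a minimal sketch of what a backend plugin's `main`
package could look like - the `pifs` backend name and description are invented
and `NewFs` is deliberately left as a stub. Built with
`go build -buildmode=plugin -o librcloneplugin_backend_pifs.so .` and dropped
into `$RCLONE_PLUGIN_PATH`, its `init` function runs when rclone loads the
plugin:

```go
// Plugins must use package main.
package main

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

// init registers the backend with rclone when the plugin is loaded.
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "pifs", // illustrative name only
		Description: "Example out-of-tree backend",
		NewFs:       NewFs,
	})
}

// NewFs would construct the backend - left unimplemented in this sketch.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	return nil, fs.ErrorNotImplemented
}
```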
## Keeping a backend or command out of tree
Rclone was designed to be modular so it is very easy to keep a backend
or a command out of the main rclone source tree.
So for example if you had a backend which accessed your proprietary
systems or a command which was specialised for your needs you could
add them out of tree.
This may be easier than using a plugin and is supported on all
platforms not just macOS and Linux.
This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).
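As a sketch of the approach (the import path of the out-of-tree backend is
hypothetical), the out-of-tree binary is just a small `main` package which
imports your backend alongside the standard ones:

```go
package main

import (
	_ "example.com/you/rclone-backend-ram" // your out-of-tree backend (hypothetical path)

	_ "github.com/rclone/rclone/backend/all" // the standard backends
	"github.com/rclone/rclone/cmd"
	_ "github.com/rclone/rclone/cmd/all" // the standard commands
)

func main() {
	cmd.Main()
}
```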


@@ -1,4 +1,4 @@
# Maintainers guide for rclone #
# Maintainers guide for rclone
Current active maintainers of rclone are:
@@ -24,80 +24,108 @@ Current active maintainers of rclone are:
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**
## This is a work in progress draft
This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.
## Triaging Tickets ##
## Triaging Tickets
When a ticket comes in it should be triaged. This means it should be classified by adding labels and placed into a milestone. Quite a lot of tickets need a bit of back and forth to determine whether it is a valid ticket so tickets may remain without labels or milestone for a while.
When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.
Rclone uses the labels like this:
* `bug` - a definitely verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
* `help` wanted - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released
* `question` - not a `bug` or `enhancement` - direct to the forum for next time
* `Remote: XXX` - which rclone backend this affects
* `thinking` - not decided on the course of action yet
- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
these get shown to new visitors to the project
- `help wanted` - mark these if you find a self-contained issue - these get
  shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.
When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
the moment
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.
## Closing Tickets ##
## Closing Tickets
Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.
## Pull requests ##
## Pull requests
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.
Sometimes pull requests need to be left open for a while - this especially true of contributions of new backends which take a long time to get right.
Sometimes pull requests need to be left open for a while - this is especially
true of contributions of new backends which take a long time to get right.
## Merges ##
## Merges
If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
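For example (branch name illustrative) a local merge might go:

```console
git checkout master
git merge --ff-only my-branch
# if the merge is refused, rebase the branch and try again
git rebase master my-branch
git checkout master
git merge --ff-only my-branch
```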
## Release cycle ##
## Release cycle
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.
High impact regressions should be fixed before the next release.
Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.
Towards the end of the release cycle try not to merge anything too big so let things settle down.
Towards the end of the release cycle try not to merge anything too big so let
things settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
## Mailing list
There is now an invite-only mailing list for rclone developers `rclone-dev` on google groups.
There is now an invite-only mailing list for rclone developers `rclone-dev` on
google groups.
## TODO ##
## TODO
I should probably make a dev@rclone.org to register with cloud providers.
I should probably make a <dev@rclone.org> to register with cloud providers.

MANUAL.html, MANUAL.md and MANUAL.txt regenerated (diffs suppressed as too large)
@@ -100,6 +100,7 @@ compiletest:
check: rclone
@echo "-- START CODE QUALITY REPORT -------------------------------"
@golangci-lint run $(LINTTAGS) ./...
@bin/markdown-lint
@echo "-- END CODE QUALITY REPORT ---------------------------------"
# Get the build dependencies
@@ -113,21 +114,21 @@ release_dep_linux:
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
@go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
# Update direct dependencies only
updatedirect:
GO111MODULE=on go get -d $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
GO111MODULE=on go mod tidy
go get $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
go mod tidy
# Update direct and indirect dependencies and test dependencies
update:
GO111MODULE=on go get -d -u -t ./...
GO111MODULE=on go mod tidy
go get -u -t ./...
go mod tidy
# Tidy the module dependencies
tidy:
GO111MODULE=on go mod tidy
go mod tidy
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
@@ -144,9 +145,11 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
go generate ./lib/transform
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
go run bin/make_bisync_docs.go ./docs/content/
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'
@@ -243,7 +246,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server --logLevel info -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

README.md
@@ -1,6 +1,6 @@
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
@@ -18,97 +18,111 @@
# Rclone
Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers.
Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
* OVH [:page_facing_up:](https://rclone.org/swift/)
* Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/)
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- Cubbit DS3 [:page_facing_up:](https://rclone.org/s3/#Cubbit)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Object Storage [:page_facing_up:](https://rclone.org/s3/#hetzner)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service (OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Rabata Cloud Storage [:page_facing_up:](https://rclone.org/s3/#Rabata)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- Servercore Object Storage [:page_facing_up:](https://rclone.org/s3/#servercore)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -116,50 +130,55 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers
* Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
* Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
* Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
* Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
* Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
* Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
* Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
* Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Archive: read archive files [:page_facing_up:](https://rclone.org/archive/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features
* MD5/SHA-1 hashes checked at all times for file integrity
* Timestamps preserved on files
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
over HTTP/WebDAV/FTP/SFTP/DLNA
## Installation & documentation
Please see the [rclone website](https://rclone.org/) for:
* [Installation](https://rclone.org/install/)
* [Documentation & configuration](https://rclone.org/docs/)
* [Changelog](https://rclone.org/changelog/)
* [FAQ](https://rclone.org/faq/)
* [Storage providers](https://rclone.org/overview/)
* [Forum](https://forum.rclone.org/)
* ...and more
- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more
## Downloads
* https://rclone.org/downloads/
- <https://rclone.org/downloads/>
License
-------
## License
This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).

@@ -4,52 +4,55 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release
* [gh the github cli](https://github.com/cli/cli) for uploading packages
* pandoc for making the html and man pages
- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages
## Making a release
* git checkout master # see below for stable branch
* git pull # IMPORTANT
* git status - make sure everything is checked in
* Check GitHub actions build for master is Green
* make test # see integration test server or run locally
* make tag
* edit docs/content/changelog.md # make sure to remove duplicate logs from point releases
* make tidy
* make doc
* git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0"
* make retag
* git push origin # without --follow-tags so it doesn't push the tag if it fails
* git push --follow-tags origin
* # Wait for the GitHub builds to complete then...
* make fetch_binaries
* make tarball
* make vendorball
* make sign_upload
* make check_sign
* make upload
* make upload_website
* make upload_github
* make startdev # make startstable for stable branch
* # announce with forum post, twitter post, patreon post
- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
## Update dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`
If the `make updatedirect` upgrades the version of go in the `go.mod`
go 1.22.0
```text
go 1.22.0
```
then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
@@ -57,7 +60,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.
```
```console
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
@@ -67,7 +70,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done
```
```console
git co go.mod go.sum
```
@@ -77,12 +80,12 @@ Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -99,7 +102,9 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:
go run github.com/icholy/gomajor@latest list -major
```console
go run github.com/icholy/gomajor@latest list -major
```
Expect API breakage when updating major versions.
@@ -107,7 +112,9 @@ Expect API breakage when updating major versions.
At some point after the release run
bin/tidy-beta v1.55
```console
bin/tidy-beta v1.55
```
where the version number is that of a couple ago to remove old beta binaries.
@@ -117,54 +124,64 @@ If rclone needs a point release due to some horrendous bug:
Set vars
* BASE_TAG=v1.XX # e.g. v1.52
* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then
this will be done already.
* git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
* make startstable
- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable
Now
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* make startstable
* Do the steps as above
* git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct
* git checkout ${BASE_TAG}-stable docs/content/changelog.md
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
## Sponsor logos
If updating the website note that the sponsor logos have been moved out of the main repository.
If updating the website note that the sponsor logos have been moved out of the
main repository.
You will need to checkout `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release
git co -b update-website
```console
git co -b update-website
```
If the branch already exists, double check there are no commits that need saving.
Now reset the branch to the last release
git reset --hard v1.64.0
```console
git reset --hard v1.64.0
```
Create the changes, check them in, test with `make serve` then
make upload_test_website
```console
make upload_test_website
```
Check out https://test.rclone.org and when happy
Check out <https://test.rclone.org> and when happy
make upload_website
```console
make upload_website
```
Cherry pick any changes back to master and the stable branch if it is active.
@@ -172,14 +189,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally:
```
```console
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```
To test the multiplatform build
```
```console
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
@@ -187,6 +204,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.
```
```console
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

@@ -1 +1 @@
v1.70.0
v1.72.0

@@ -4,6 +4,7 @@ package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/archive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
@@ -14,10 +15,12 @@ import (
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/doi"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filelu"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"

backend/archive/archive.go Normal file
@@ -0,0 +1,679 @@
//go:build !plan9
// Package archive implements a backend to access archive files in a remote
package archive
// FIXME factor common code between backends out - eg VFS initialization
// FIXME can we generalize the VFS handle caching and use it in zip backend
// Factor more stuff out if possible
// Odd stats which are probably coming from the VFS
// * tensorflow.sqfs: 0% /3.074Gi, 204.426Ki/s, 4h22m46s
// FIXME this will perform poorly for unpacking as the VFS Reader is bad
// at multiple streams - need cache mode setting?
import (
"context"
"errors"
"fmt"
"io"
"path"
"strings"
"sync"
"time"
// Import all the required archivers here
_ "github.com/rclone/rclone/backend/archive/squashfs"
_ "github.com/rclone/rclone/backend/archive/zip"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "archive",
Description: "Read archives",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{{
Name: "remote",
Help: `Remote to wrap to read archives from.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or "myremote:".
If this is left empty, then the archive backend will use the root as
the remote.
This means that you can use :archive:remote:path and it will be
equivalent to setting remote="remote:path".
`,
Required: false,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
}
// Fs represents an archive backend wrapping another remote
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
f fs.Fs // remote we are wrapping
wrapper fs.Fs // fs that wraps us
mu sync.Mutex // protects the below
archives map[string]*archive // the archives we have, by path
}
// A single open archive
type archive struct {
archiver archiver.Archiver // archiver responsible
remote string // path to the archive
prefix string // prefix to add on to listings
root string // root of the archive to remove from listings
mu sync.Mutex // protects the following variables
f fs.Fs // the archive Fs, may be nil
}
// If remote is an archive then return it otherwise return nil
func findArchive(remote string) *archive {
// FIXME use something faster than linear search?
for _, archiver := range archiver.Archivers {
if strings.HasSuffix(remote, archiver.Extension) {
return &archive{
archiver: archiver,
remote: remote,
prefix: remote,
root: "",
}
}
}
return nil
}
// Find an archive buried in remote
func subArchive(remote string) *archive {
archive := findArchive(remote)
if archive != nil {
return archive
}
parent := path.Dir(remote)
if parent == "/" || parent == "." {
return nil
}
return subArchive(parent)
}
// findArchive returns remote as an archive if it is one, caching it in f.archives, otherwise it returns nil
func (f *Fs) findArchive(remote string) (archive *archive) {
archive = findArchive(remote)
if archive != nil {
f.mu.Lock()
f.archives[remote] = archive
f.mu.Unlock()
}
return archive
}
// Instantiate archive if it hasn't been instantiated yet
//
// This is done lazily so that we can list a directory full of
// archives without opening them all.
func (a *archive) init(ctx context.Context, f fs.Fs) (fs.Fs, error) {
a.mu.Lock()
defer a.mu.Unlock()
if a.f != nil {
return a.f, nil
}
newFs, err := a.archiver.New(ctx, f, a.remote, a.prefix, a.root)
if err != nil && err != fs.ErrorIsFile {
return nil, fmt.Errorf("failed to create archive %q: %w", a.remote, err)
}
a.f = newFs
return a.f, nil
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs.Fs, err error) {
// defer log.Trace(nil, "name=%q, root=%q, m=%v", name, root, m)("f=%+v, err=%v", &outFs, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
remote := opt.Remote
origRoot := root
// If remote is empty, use the root instead
if remote == "" {
remote = root
root = ""
}
isDirectory := strings.HasSuffix(remote, "/")
remote = strings.TrimRight(remote, "/")
if remote == "" {
remote = "/"
}
if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point archive remote at itself - check the value of the upstreams setting")
}
_ = isDirectory
foundArchive := subArchive(remote)
if foundArchive != nil {
fs.Debugf(nil, "Found archiver for %q remote %q", foundArchive.archiver.Extension, foundArchive.remote)
// Archive path
foundArchive.root = strings.Trim(remote[len(foundArchive.remote):], "/")
// Path to the archive
archiveRemote := remote[:len(foundArchive.remote)]
// Remote is archive leaf name
foundArchive.remote = path.Base(archiveRemote)
foundArchive.prefix = ""
// Point remote to archive file
remote = archiveRemote
}
// Make sure to remove trailing . referring to the current dir
if path.Base(root) == "." {
root = strings.TrimSuffix(root, ".")
}
remotePath := fspath.JoinRootPath(remote, root)
wrappedFs, err := cache.Get(ctx, remotePath)
if err != fs.ErrorIsFile && err != nil {
return nil, fmt.Errorf("failed to make remote %q to wrap: %w", remote, err)
}
f := &Fs{
name: name,
//root: path.Join(remotePath, root),
root: origRoot,
opt: *opt,
f: wrappedFs,
archives: make(map[string]*archive),
}
cache.PinUntilFinalized(f.f, f)
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if foundArchive != nil {
fs.Debugf(f, "Root is an archive")
if err != fs.ErrorIsFile {
return nil, fmt.Errorf("expecting to find a file at %q", remote)
}
return foundArchive.init(ctx, f.f)
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
return f, err
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("archive root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.f.Rmdir(ctx, dir)
}
// Hashes returns the hash types supported by the wrapped remote
func (f *Fs) Hashes() hash.Set {
return f.f.Hashes()
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.f.Mkdir(ctx, dir)
}
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
do := f.f.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do(ctx, dir)
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
// FIXME
// o, ok := src.(*Object)
// if !ok {
// return nil, fs.ErrorCantCopy
// }
return do(ctx, src, remote)
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.f.Features().Move
if do == nil {
return nil, fs.ErrorCantMove
}
// FIXME
// o, ok := src.(*Object)
// if !ok {
// return nil, fs.ErrorCantMove
// }
return do(ctx, src, remote)
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
do := f.f.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
return do(ctx, srcFs.f, srcRemote, dstRemote)
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), ch <-chan time.Duration) {
do := f.f.Features().ChangeNotify
if do == nil {
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
notifyFunc(path, entryType)
}
do(ctx, wrappedNotifyFunc, ch)
}
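// Example (sketch, names are illustrative): a caller drives ChangeNotify by
// running it in a goroutine and sending poll intervals on the channel, with a
// zero Duration pausing polling as described above.
//
//	ch := make(chan time.Duration)
//	go f.ChangeNotify(ctx, func(path string, entryType fs.EntryType) {
//		fs.Debugf(nil, "changed %q (%v)", path, entryType)
//	}, ch)
//	ch <- time.Minute // poll every minute
//	ch <- 0           // pause polling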
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
do := f.f.Features().DirCacheFlush
if do != nil {
do()
}
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) {
var o fs.Object
var err error
if stream {
o, err = f.f.Features().PutStream(ctx, in, src, options...)
} else {
o, err = f.f.Put(ctx, in, src, options...)
}
if err != nil {
return nil, err
}
return o, nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, false, options...)
default:
return nil, err
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, true, options...)
default:
return nil, err
}
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
do := f.f.Features().About
if do == nil {
return nil, errors.New("not supported by underlying remote")
}
return do(ctx)
}
// Find the Fs for the directory
func (f *Fs) findFs(ctx context.Context, dir string) (subFs fs.Fs, err error) {
f.mu.Lock()
defer f.mu.Unlock()
subFs = f.f
// FIXME should do this with a better datastructure like a prefix tree
// FIXME want to find the longest first otherwise nesting won't work
dirSlash := dir + "/"
for archiverRemote, archive := range f.archives {
subRemote := archiverRemote + "/"
if strings.HasPrefix(dirSlash, subRemote) {
subFs, err = archive.init(ctx, f.f)
if err != nil {
return nil, err
}
break
}
}
return subFs, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
subFs, err := f.findFs(ctx, dir)
if err != nil {
return nil, err
}
entries, err = subFs.List(ctx, dir)
if err != nil {
return nil, err
}
for i, entry := range entries {
// Can only unarchive files
if o, ok := entry.(fs.Object); ok {
remote := o.Remote()
archive := f.findArchive(remote)
if archive != nil {
// Overwrite entry with directory
entries[i] = fs.NewDir(remote, o.ModTime(ctx))
}
}
}
return entries, nil
}
// NewObject finds the Object at remote, looking inside an archive if the path is within one
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
subFs, err := f.findFs(ctx, dir)
if err != nil {
return nil, err
}
o, err := subFs.NewObject(ctx, remote)
if err != nil {
return nil, err
}
return o, nil
}
// Precision returns the precision of the archivers, currently fixed at one second
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
if do := f.f.Features().Shutdown; do != nil {
return do(ctx)
}
return nil
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
do := f.f.Features().PublicLink
if do == nil {
return "", errors.New("PublicLink not supported")
}
return do(ctx, remote, expire, unlink)
}
// PutUnchecked in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
//
// May create duplicates or return errors if src already
// exists.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
do := f.f.Features().PutUnchecked
if do == nil {
return nil, errors.New("can't PutUnchecked")
}
o, err := do(ctx, in, src, options...)
if err != nil {
return nil, err
}
return o, nil
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
if len(dirs) == 0 {
return nil
}
do := f.f.Features().MergeDirs
if do == nil {
return errors.New("MergeDirs not supported")
}
return do(ctx, dirs)
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp(ctx context.Context) error {
do := f.f.Features().CleanUp
if do == nil {
return errors.New("not supported by underlying remote")
}
return do(ctx)
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
do := f.f.Features().OpenWriterAt
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx, remote, size)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
// You can also use options to hint at the desired chunk size
func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectInfo, options ...fs.OpenOption) (info fs.ChunkWriterInfo, writer fs.ChunkWriter, err error) {
do := f.f.Features().OpenChunkWriter
if do == nil {
return info, nil, fs.ErrorNotImplemented
}
return do(ctx, remote, src, options...)
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
do := f.f.Features().UserInfo
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx)
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
do := f.f.Features().Disconnect
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.OpenChunkWriter = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
// FIXME _ fs.FullObject = (*Object)(nil)
)


@@ -0,0 +1,221 @@
//go:build !plan9
package archive
import (
"bytes"
"context"
"fmt"
"os"
"os/exec"
"path"
"path/filepath"
"strconv"
"strings"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// FIXME need to test Open with seek
// run - run a shell command
func run(t *testing.T, args ...string) {
cmd := exec.Command(args[0], args[1:]...)
fs.Debugf(nil, "run args = %v", args)
out, err := cmd.CombinedOutput()
if err != nil {
t.Fatalf(`
----------------------------
Failed to run %v: %v
Command output was:
%s
----------------------------
`, args, err, out)
}
}
// check the dst and src are identical
func checkTree(ctx context.Context, name string, t *testing.T, dstArchive, src string, expectedCount int) {
t.Run(name, func(t *testing.T) {
fs.Debugf(nil, "check %q vs %q", dstArchive, src)
Farchive, err := cache.Get(ctx, dstArchive)
if err != fs.ErrorIsFile {
require.NoError(t, err)
}
Fsrc, err := cache.Get(ctx, src)
if err != fs.ErrorIsFile {
require.NoError(t, err)
}
var matches bytes.Buffer
opt := operations.CheckOpt{
Fdst: Farchive,
Fsrc: Fsrc,
Match: &matches,
}
for _, action := range []string{"Check", "Download"} {
t.Run(action, func(t *testing.T) {
matches.Reset()
if action == "Download" {
assert.NoError(t, operations.CheckDownload(ctx, &opt))
} else {
assert.NoError(t, operations.Check(ctx, &opt))
}
if expectedCount > 0 {
assert.Equal(t, expectedCount, strings.Count(matches.String(), "\n"))
}
})
}
t.Run("NewObject", func(t *testing.T) {
// Check we can run NewObject on all files and read them
assert.NoError(t, operations.ListFn(ctx, Fsrc, func(srcObj fs.Object) {
if t.Failed() {
return
}
remote := srcObj.Remote()
archiveObj, err := Farchive.NewObject(ctx, remote)
require.NoError(t, err, remote)
assert.Equal(t, remote, archiveObj.Remote(), remote)
// Test that the contents are the same
archiveBuf := fstests.ReadObject(ctx, t, archiveObj, -1)
srcBuf := fstests.ReadObject(ctx, t, srcObj, -1)
assert.Equal(t, srcBuf, archiveBuf)
if len(srcBuf) < 81 {
return
}
// Tests that Open works with SeekOption
assert.Equal(t, srcBuf[50:], fstests.ReadObject(ctx, t, archiveObj, -1, &fs.SeekOption{Offset: 50}), "contents differ after seek")
// Tests that Open works with RangeOption
for _, test := range []struct {
ro fs.RangeOption
wantStart, wantEnd int
}{
{fs.RangeOption{Start: 5, End: 15}, 5, 16},
{fs.RangeOption{Start: 80, End: -1}, 80, len(srcBuf)},
{fs.RangeOption{Start: 81, End: 100000}, 81, len(srcBuf)},
{fs.RangeOption{Start: -1, End: 20}, len(srcBuf) - 20, len(srcBuf)}, // if start is omitted this means get the final bytes
// {fs.RangeOption{Start: -1, End: -1}, 0, len(srcBuf)}, - this seems to work but the RFC doesn't define it
} {
got := fstests.ReadObject(ctx, t, archiveObj, -1, &test.ro)
foundAt := strings.Index(srcBuf, got)
help := fmt.Sprintf("%#v failed want [%d:%d] got [%d:%d]", test.ro, test.wantStart, test.wantEnd, foundAt, foundAt+len(got))
assert.Equal(t, srcBuf[test.wantStart:test.wantEnd], got, help)
}
// Test that the modtimes are correct
fstest.AssertTimeEqualWithPrecision(t, remote, srcObj.ModTime(ctx), archiveObj.ModTime(ctx), Farchive.Precision())
// Test that the sizes are correct
assert.Equal(t, srcObj.Size(), archiveObj.Size())
// Test that Strings are OK
assert.Equal(t, srcObj.String(), archiveObj.String())
}))
})
// t.Logf("Fdst ------------- %v", Fdst)
// operations.List(ctx, Fdst, os.Stdout)
// t.Logf("Fsrc ------------- %v", Fsrc)
// operations.List(ctx, Fsrc, os.Stdout)
})
}
// testArchive creates an archive with archiveFn and checks rclone can read it back
//
// Note that this uses rclone and an external archiver binary.
func testArchive(t *testing.T, archiveName string, archiveFn func(t *testing.T, output, input string)) {
ctx := context.Background()
checkFiles := 1000
// create random test input files
inputRoot := t.TempDir()
input := filepath.Join(inputRoot, archiveName)
require.NoError(t, os.Mkdir(input, 0777))
run(t, "rclone", "test", "makefiles", "--files", strconv.Itoa(checkFiles), "--ascii", input)
// Create the archive
output := t.TempDir()
zipFile := path.Join(output, archiveName)
archiveFn(t, zipFile, input)
// Check the archive itself
checkTree(ctx, "Archive", t, ":archive:"+zipFile, input, checkFiles)
// Now check a subdirectory
fis, err := os.ReadDir(input)
require.NoError(t, err)
subDir := "NOT FOUND"
aFile := "NOT FOUND"
for _, fi := range fis {
if fi.IsDir() {
subDir = fi.Name()
} else {
aFile = fi.Name()
}
}
checkTree(ctx, "SubDir", t, ":archive:"+zipFile+"/"+subDir, filepath.Join(input, subDir), 0)
// Now check a single file
fiCtx, fi := filter.AddConfig(ctx)
require.NoError(t, fi.AddRule("+ "+aFile))
require.NoError(t, fi.AddRule("- *"))
checkTree(fiCtx, "SingleFile", t, ":archive:"+zipFile+"/"+aFile, filepath.Join(input, aFile), 0)
// Now check the level above
checkTree(ctx, "Root", t, ":archive:"+output, inputRoot, checkFiles)
// run(t, "cp", "-a", inputRoot, output, "/tmp/test-"+archiveName)
}
// skipIfNoExe skips the test if the named executable is not installed
func skipIfNoExe(t *testing.T, exeName string) {
_, err := exec.LookPath(exeName)
if err != nil {
t.Skipf("%s executable not installed", exeName)
}
}
// Test creating and reading back some archives
//
// Note that this uses rclone and zip as external binaries.
func TestArchiveZip(t *testing.T) {
fstest.Initialise()
skipIfNoExe(t, "zip")
skipIfNoExe(t, "rclone")
testArchive(t, "test.zip", func(t *testing.T, output, input string) {
oldcwd, err := os.Getwd()
require.NoError(t, err)
require.NoError(t, os.Chdir(input))
defer func() {
require.NoError(t, os.Chdir(oldcwd))
}()
run(t, "zip", "-9r", output, ".")
})
}
// Test creating and reading back some archives
//
// Note that this uses rclone and mksquashfs as external binaries.
func TestArchiveSquashfs(t *testing.T) {
fstest.Initialise()
skipIfNoExe(t, "mksquashfs")
skipIfNoExe(t, "rclone")
testArchive(t, "test.sqfs", func(t *testing.T, output, input string) {
run(t, "mksquashfs", input, output)
})
}
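// To run just these archive round-trip tests locally (sketch, assuming the
// package lives at backend/archive and that the zip, mksquashfs and rclone
// binaries are installed and on PATH):
//
//	go test ./backend/archive -run 'TestArchive(Zip|Squashfs)' -v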


@@ -0,0 +1,67 @@
//go:build !plan9
// Test Archive filesystem interface
package archive_test
import (
"testing"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/memory"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
var (
unimplementableFsMethods = []string{"ListR", "ListP", "MkdirMetadata", "DirSetModTime"}
// In these tests we receive objects from the underlying remote which don't implement these methods
unimplementableObjectMethods = []string{"GetTier", "ID", "Metadata", "MimeType", "SetTier", "UnWrap", "SetMetadata"}
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
func TestLocal(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
remote := t.TempDir()
name := "TestArchiveLocal"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "archive"},
{Name: name, Key: "remote", Value: remote},
},
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
func TestMemory(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
remote := ":memory:"
name := "TestArchiveMemory"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "archive"},
{Name: name, Key: "remote", Value: remote},
},
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}


@@ -0,0 +1,7 @@
// Build stub for the archive backend on unsupported platforms to stop go
// complaining about "no buildable Go source files"
//go:build plan9
// Package archive implements a backend to access archive files in a remote
package archive


@@ -0,0 +1,24 @@
// Package archiver provides the registry of available archivers
package archiver
import (
"context"
"github.com/rclone/rclone/fs"
)
// Archiver describes an archive package
type Archiver struct {
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefixed with prefix and rooted at root
New func(ctx context.Context, f fs.Fs, remote, prefix, root string) (fs.Fs, error)
Extension string
}
// Archivers is a slice of all registered archivers
var Archivers []Archiver
// Register adds the archivers provided to the list of known archivers
func Register(as ...Archiver) {
Archivers = append(Archivers, as...)
}
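// Example (sketch): each archive format package registers itself from its
// init function so the archive backend can match files by extension. The
// ".tar" archiver below is purely illustrative - only zip and squashfs are
// implemented in this change.
//
//	func init() {
//		archiver.Register(archiver.Archiver{
//			New:       New, // the format's constructor
//			Extension: ".tar",
//		})
//	}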


@@ -0,0 +1,233 @@
// Package base is a base archive Fs
package base
import (
"context"
"errors"
"fmt"
"io"
"path"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/vfs"
)
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
node vfs.Node // archive object
remote string // remote of the archive object
prefix string // prefix under which the archive contents appear
prefixSlash string // prefix with a trailing slash added
root string // root to read from within the archive
}
var errNotImplemented = errors.New("internal error: method not implemented in archiver")
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefixed with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (*Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
VFS := vfs.New(wrappedFs, nil)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
remote: remote,
root: root,
prefix: prefix,
prefixSlash: prefix + "/",
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported for archive members
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, nil
}
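// Example (sketch, illustrative arguments): an archiver implementation could
// embed or wrap this base Fs to pick up the read-only stubs, constructing it
// with the archive's location within the wrapped remote:
//
//	archiveFs, err := base.New(ctx, wrappedFs, "backups/data.zip", "backups/data.zip", "")
//	if err != nil {
//		return nil, err
//	}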
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return f.name
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return nil, errNotImplemented
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
return nil, errNotImplemented
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw archive file
type Object struct {
f *Fs
remote string
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return -1
}
// ModTime returns the modification time of the object
//
// This base implementation is a stub which returns the current time.
func (o *Object) ModTime(ctx context.Context) time.Time {
return time.Now()
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
return nil, errNotImplemented
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -0,0 +1,165 @@
package squashfs
// Could just use the bare object Open with range requests, which
// would transfer the minimum amount of data but may be slower.
import (
"errors"
"fmt"
"io/fs"
"os"
"sync"
"github.com/diskfs/go-diskfs/backend"
"github.com/rclone/rclone/vfs"
)
// Cache file handles for accessing the file
type cache struct {
node vfs.Node
fhsMu sync.Mutex
fhs []cacheHandle
}
// A cached file handle
type cacheHandle struct {
offset int64
fh vfs.Handle
}
// Make a new cache
func newCache(node vfs.Node) *cache {
return &cache{
node: node,
}
}
// Get a vfs.Handle from the pool or open one
//
// This tries to find an open file handle which doesn't require seeking.
func (c *cache) open(off int64) (fh vfs.Handle, err error) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
if len(c.fhs) > 0 {
// Look for exact match first
for i, cfh := range c.fhs {
if cfh.offset == off {
// fs.Debugf(nil, "CACHE MATCH")
c.fhs = append(c.fhs[:i], c.fhs[i+1:]...)
return cfh.fh, nil
}
}
// fs.Debugf(nil, "CACHE MISS")
// Just take the first one if not found
cfh := c.fhs[0]
c.fhs = c.fhs[1:]
return cfh.fh, nil
}
fh, err = c.node.Open(os.O_RDONLY)
if err != nil {
return nil, fmt.Errorf("failed to open squashfs archive: %w", err)
}
return fh, nil
}
// Close a vfs.Handle or return it to the pool
//
// off should be the offset the file handle would read from without seeking
func (c *cache) close(fh vfs.Handle, off int64) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
c.fhs = append(c.fhs, cacheHandle{
offset: off,
fh: fh,
})
}
// ReadAt reads len(p) bytes into p starting at offset off in the underlying
// input source. It returns the number of bytes read (0 <= n <= len(p)) and any
// error encountered.
//
// When ReadAt returns n < len(p), it returns a non-nil error explaining why
// more bytes were not returned. In this respect, ReadAt is stricter than Read.
//
// Even if ReadAt returns n < len(p), it may use all of p as scratch
// space during the call. If some data is available but not len(p) bytes,
// ReadAt blocks until either all the data is available or an error occurs.
// In this respect ReadAt is different from Read.
//
// If the n = len(p) bytes returned by ReadAt are at the end of the input
// source, ReadAt may return either err == EOF or err == nil.
//
// If ReadAt is reading from an input source with a seek offset, ReadAt should
// not affect nor be affected by the underlying seek offset.
//
// Clients of ReadAt can execute parallel ReadAt calls on the same input
// source.
//
// Implementations must not retain p.
func (c *cache) ReadAt(p []byte, off int64) (n int, err error) {
fh, err := c.open(off)
if err != nil {
return n, err
}
defer func() {
c.close(fh, off+int64(len(p)))
}()
// fs.Debugf(nil, "ReadAt(p[%d], off=%d, fh=%p)", len(p), off, fh)
return fh.ReadAt(p, off)
}
var errCacheNotImplemented = errors.New("internal error: squashfs cache doesn't implement method")
// WriteAt method dummy stub to satisfy interface
func (c *cache) WriteAt(p []byte, off int64) (n int, err error) {
return 0, errCacheNotImplemented
}
// Seek method dummy stub to satisfy interface
func (c *cache) Seek(offset int64, whence int) (int64, error) {
return 0, errCacheNotImplemented
}
// Read method dummy stub to satisfy interface
func (c *cache) Read(p []byte) (n int, err error) {
return 0, errCacheNotImplemented
}
// Stat method dummy stub to satisfy interface
func (c *cache) Stat() (fs.FileInfo, error) {
return nil, errCacheNotImplemented
}
// Close the file
func (c *cache) Close() (err error) {
c.fhsMu.Lock()
defer c.fhsMu.Unlock()
// Close any open file handles
for i := range c.fhs {
fh := &c.fhs[i]
newErr := fh.fh.Close()
if err == nil {
err = newErr
}
}
c.fhs = nil
return err
}
// Sys returns OS-specific file for ioctl calls via fd
func (c *cache) Sys() (*os.File, error) {
return nil, errCacheNotImplemented
}
// Writable returns file for read-write operations
func (c *cache) Writable() (backend.WritableFile, error) {
return nil, errCacheNotImplemented
}
// check interfaces
var _ backend.Storage = (*cache)(nil)


@@ -0,0 +1,446 @@
// Package squashfs implements a squashfs archiver for the archive backend
package squashfs
import (
"context"
"fmt"
"io"
"path"
"strings"
"time"
"github.com/diskfs/go-diskfs/filesystem/squashfs"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
)
func init() {
archiver.Register(archiver.Archiver{
New: New,
Extension: ".sqfs",
})
}
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
sqfs *squashfs.FileSystem // interface to the squashfs
c *cache
node vfs.Node // squashfs file object - set if reading
remote string // remote of the squashfs file object
prefix string // prefix under which the archive contents appear
prefixSlash string // prefix with a trailing slash added
root string // root to read from within the archive
}
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefixed with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (fs.Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "Squashfs: New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
vfsOpt := vfscommon.Opt
vfsOpt.ReadWait = 0
VFS := vfs.New(wrappedFs, &vfsOpt)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
c := newCache(node)
// FIXME blocksize
sqfs, err := squashfs.Read(c, node.Size(), 0, 1024*1024)
if err != nil {
return nil, fmt.Errorf("failed to read squashfs: %w", err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
sqfs: sqfs,
c: c,
remote: remote,
root: strings.Trim(root, "/"),
prefix: prefix,
prefixSlash: prefix + "/",
}
if prefix == "" {
f.prefixSlash = ""
}
singleObject := false
// Find the directory the root points to
if f.root != "" && !strings.HasSuffix(root, "/") {
native, err := f.toNative("")
if err == nil {
native = strings.TrimRight(native, "/")
_, err := f.newObjectNative(native)
if err == nil {
// It pointed to a file so remember that and find the directory above
singleObject = true
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
}
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported with squashfs
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if singleObject {
return f, fs.ErrorIsFile
}
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Squashfs %q", f.name)
}
// This turns a remote into a native path in the squashfs starting with a /
func (f *Fs) toNative(remote string) (string, error) {
native := strings.Trim(remote, "/")
if f.prefix == "" {
native = "/" + native
} else if native == f.prefix {
native = "/"
} else if !strings.HasPrefix(native, f.prefixSlash) {
return "", fmt.Errorf("internal error: %q doesn't start with prefix %q", native, f.prefixSlash)
} else {
native = native[len(f.prefix):]
}
if f.root != "" {
native = "/" + f.root + native
}
return native, nil
}
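// Worked example (sketch, illustrative values): with prefix "data.sqfs" and
// an empty root, remotes map to native squashfs paths as follows:
//
//	toNative("data.sqfs")         -> "/"
//	toNative("data.sqfs/a/b.txt") -> "/a/b.txt"
//
// A non-empty root is prepended to the native path, and fromNative below
// reverses the mapping when building remotes from directory listings.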
// Turn a (nativeDir, leaf) into a remote
func (f *Fs) fromNative(nativeDir string, leaf string) string {
// fs.Debugf(nil, "nativeDir = %q, leaf = %q, root=%q", nativeDir, leaf, f.root)
dir := nativeDir
if f.root != "" {
dir = strings.TrimPrefix(dir, "/"+f.root)
}
remote := f.prefixSlash + strings.Trim(path.Join(dir, leaf), "/")
// fs.Debugf(nil, "dir = %q, remote=%q", dir, remote)
return remote
}
// Convert a FileInfo into an Object from native dir
func (f *Fs) objectFromFileInfo(nativeDir string, item squashfs.FileStat) *Object {
return &Object{
fs: f,
remote: f.fromNative(nativeDir, item.Name()),
size: item.Size(),
modTime: item.ModTime(),
item: item,
}
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
nativeDir, err := f.toNative(dir)
if err != nil {
return nil, err
}
items, err := f.sqfs.ReadDir(nativeDir)
if err != nil {
return nil, fmt.Errorf("read squashfs: couldn't read directory: %w", err)
}
entries = make(fs.DirEntries, 0, len(items))
for _, fi := range items {
item, ok := fi.(squashfs.FileStat)
if !ok {
return nil, fmt.Errorf("internal error: unexpected type for %q: %T", fi.Name(), fi)
}
// fs.Debugf(item.Name(), "entry = %#v", item)
var entry fs.DirEntry
if err != nil {
return nil, fmt.Errorf("error reading item %q: %q", item.Name(), err)
}
if item.IsDir() {
var remote = f.fromNative(nativeDir, item.Name())
entry = fs.NewDir(remote, item.ModTime())
} else {
if item.Mode().IsRegular() {
entry = f.objectFromFileInfo(nativeDir, item)
} else {
fs.Debugf(item.Name(), "FIXME Not regular file - skipping")
continue
}
}
entries = append(entries, entry)
}
// fs.Debugf(f, "dir=%q, entries=%v", dir, entries)
return entries, nil
}
// newObjectNative finds the object at the native path passed in
func (f *Fs) newObjectNative(nativePath string) (o fs.Object, err error) {
// get the path and filename
dir, leaf := path.Split(nativePath)
dir = strings.TrimRight(dir, "/")
leaf = strings.Trim(leaf, "/")
// FIXME need to detect directory not found
fis, err := f.sqfs.ReadDir(dir)
if err != nil {
return nil, fs.ErrorObjectNotFound
}
for _, fi := range fis {
if fi.Name() == leaf {
if fi.IsDir() {
return nil, fs.ErrorNotAFile
}
item, ok := fi.(squashfs.FileStat)
if !ok {
return nil, fmt.Errorf("internal error: unexpected type for %q: %T", fi.Name(), fi)
}
o = f.objectFromFileInfo(dir, item)
break
}
}
if o == nil {
return nil, fs.ErrorObjectNotFound
}
return o, nil
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%v, err=%v", &o, &err)
nativePath, err := f.toNative(remote)
if err != nil {
return nil, err
}
return f.newObjectNative(nativePath)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw squashfs file
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
item squashfs.FileStat
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Turn a squashfs path into a full path for the parent Fs
// func (o *Object) path(remote string) string {
// return path.Join(o.fs.prefix, remote)
// }
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the object
//
// This is the modification time read from the squashfs metadata.
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
remote, err := o.fs.toNative(o.remote)
if err != nil {
return nil, err
}
fs.Debugf(o, "Opening %q", remote)
//fh, err := o.fs.sqfs.OpenFile(remote, os.O_RDONLY)
fh, err := o.item.Open()
if err != nil {
return nil, err
}
// seek to the start offset as necessary
if offset > 0 {
_, err = fh.Seek(offset, io.SeekStart)
if err != nil {
return nil, err
}
}
// If limited then don't return everything
if limit >= 0 {
fs.Debugf(nil, "limit=%d, offset=%d, options=%v", limit, offset, options)
return readers.NewLimitedReadCloser(fh, limit), nil
}
return fh, nil
}
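// Worked example (sketch): for a 100 byte object, a range option decodes to
// an offset and a byte count, which Open satisfies with a seek plus a
// limited reader:
//
//	offset, limit := (&fs.RangeOption{Start: 5, End: 15}).Decode(100)
//	// offset == 5, limit == 11 (the End of the range is inclusive)
//
// A SeekOption just sets the offset and leaves the limit at -1 (read to EOF).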
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)

backend/archive/zip/zip.go

@@ -0,0 +1,385 @@
// Package zip implements a zip archiver for the archive backend
package zip
import (
"archive/zip"
"context"
"errors"
"fmt"
"io"
"os"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/archive/archiver"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/dirtree"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
)
func init() {
archiver.Register(archiver.Archiver{
New: New,
Extension: ".zip",
})
}
// Fs represents a wrapped fs.Fs
type Fs struct {
f fs.Fs
wrapper fs.Fs
name string
features *fs.Features // optional features
vfs *vfs.VFS
node vfs.Node // zip file object - set if reading
remote string // remote of the zip file object
prefix string // prefix under which the archive contents appear
prefixSlash string // prefix with a trailing slash added
root string // root to read from within the archive
dt dirtree.DirTree // read from zipfile
}
// New constructs an Fs from the (wrappedFs, remote) with the objects
// prefixed with prefix and rooted at root
func New(ctx context.Context, wrappedFs fs.Fs, remote, prefix, root string) (fs.Fs, error) {
// FIXME vfs cache?
// FIXME could factor out ReadFileHandle and just use that rather than the full VFS
fs.Debugf(nil, "Zip: New: remote=%q, prefix=%q, root=%q", remote, prefix, root)
vfsOpt := vfscommon.Opt
vfsOpt.ReadWait = 0
VFS := vfs.New(wrappedFs, &vfsOpt)
node, err := VFS.Stat(remote)
if err != nil {
return nil, fmt.Errorf("failed to find %q archive: %w", remote, err)
}
f := &Fs{
f: wrappedFs,
name: path.Join(fs.ConfigString(wrappedFs), remote),
vfs: VFS,
node: node,
remote: remote,
root: root,
prefix: prefix,
prefixSlash: prefix + "/",
}
// Read the contents of the zip file
singleObject, err := f.readZip()
if err != nil {
return nil, fmt.Errorf("failed to open zip file: %w", err)
}
// FIXME
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
//
// FIXME some of these need to be forced on - CanHaveEmptyDirectories
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: false, // MimeTypes not supported with zip
WriteMimeType: false,
BucketBased: false,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
if singleObject {
return f, fs.ErrorIsFile
}
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Zip %q", f.name)
}
// readZip the zip file into f
//
// Returns singleObject=true if f.root points to a file
func (f *Fs) readZip() (singleObject bool, err error) {
if f.node == nil {
return singleObject, fs.ErrorDirNotFound
}
size := f.node.Size()
if size < 0 {
return singleObject, errors.New("can't read from zip file with unknown size")
}
r, err := f.node.Open(os.O_RDONLY)
if err != nil {
return singleObject, fmt.Errorf("failed to open zip file: %w", err)
}
zr, err := zip.NewReader(r, size)
if err != nil {
return singleObject, fmt.Errorf("failed to read zip file: %w", err)
}
dt := dirtree.New()
for _, file := range zr.File {
remote := strings.Trim(path.Clean(file.Name), "/")
if remote == "." {
remote = ""
}
remote = path.Join(f.prefix, remote)
if f.root != "" {
// Ignore all files outside the root
if !strings.HasPrefix(remote, f.root) {
continue
}
if remote == f.root {
remote = ""
} else {
remote = strings.TrimPrefix(remote, f.root+"/")
}
}
if strings.HasSuffix(file.Name, "/") {
dir := fs.NewDir(remote, file.Modified)
dt.AddDir(dir)
} else {
if remote == "" {
remote = path.Base(f.root)
singleObject = true
dt = dirtree.New()
}
o := &Object{
f: f,
remote: remote,
fh: &file.FileHeader,
file: file,
}
dt.Add(o)
if singleObject {
break
}
}
}
dt.CheckParents("")
dt.Sort()
f.dt = dt
//fs.Debugf(nil, "dt = %v", dt)
return singleObject, nil
}
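// Example (sketch, illustrative entries): a zip containing "dir/" and
// "dir/file.txt" produces a dirtree with a directory "dir" and an object
// "dir/file.txt", each joined onto f.prefix. If f.root names "dir/file.txt"
// itself then only that entry is kept, under its base name "file.txt", and
// singleObject is returned as true so New can report fs.ErrorIsFile.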
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
defer log.Trace(f, "dir=%q", dir)("entries=%v, err=%v", &entries, &err)
// _, err = f.strip(dir)
// if err != nil {
// return nil, err
// }
entries, ok := f.dt[dir]
if !ok {
return nil, fs.ErrorDirNotFound
}
fs.Debugf(f, "dir=%q, entries=%v", dir, entries)
return entries, nil
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (o fs.Object, err error) {
defer log.Trace(f, "remote=%q", remote)("obj=%v, err=%v", &o, &err)
if f.dt == nil {
return nil, fs.ErrorObjectNotFound
}
_, entry := f.dt.Find(remote)
if entry == nil {
return nil, fs.ErrorObjectNotFound
}
o, ok := entry.(*Object)
if !ok {
return nil, fs.ErrorNotAFile
}
return o, nil
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return vfs.EROFS
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return nil, vfs.EROFS
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.CRC32)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs {
return f.f
}
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs {
return f.wrapper
}
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) {
f.wrapper = wrapper
}
// Object describes an object to be read from the raw zip file
type Object struct {
f *Fs
remote string
fh *zip.FileHeader
file *zip.File
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.f
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.fh.UncompressedSize64)
}
// ModTime returns the modification time of the object
//
// This is the modification time stored in the zip file header.
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.fh.Modified
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return vfs.EROFS
}
// Storable returns a boolean indicating if this object is storable
func (o *Object) Storable() bool {
return true
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
if ht == hash.CRC32 {
// FIXME return empty CRC if writing
if o.f.dt == nil {
return "", nil
}
return fmt.Sprintf("%08x", o.fh.CRC32), nil
}
return "", hash.ErrUnsupported
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
rc, err = o.file.Open()
if err != nil {
return nil, err
}
// discard data from start as necessary
if offset > 0 {
_, err = io.CopyN(io.Discard, rc, offset)
if err != nil {
return nil, err
}
}
// If limited then don't return everything
if limit >= 0 {
return readers.NewLimitedReadCloser(rc, limit), nil
}
return rc, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return vfs.EROFS
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return vfs.EROFS
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -51,6 +51,7 @@ import (
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
@@ -72,6 +73,7 @@ const (
emulatorAccount = "devstoreaccount1"
emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
sasCopyValidity = time.Hour // how long SAS should last when doing server side copy
)
var (
@@ -559,6 +561,11 @@ type Fs struct {
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
publicAccess container.PublicAccessType // Container Public Access Level
// user delegation cache
userDelegationMu sync.Mutex
userDelegation *service.UserDelegationCredential
userDelegationExpiry time.Time
}
// Object describes an azure object
@@ -612,6 +619,9 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
containerName, containerPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
if f.opt.DirectoryMarkers && strings.HasSuffix(containerPath, "//") {
containerPath = containerPath[:len(containerPath)-1]
}
return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath)
}
@@ -928,6 +938,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
case opt.ClientID != "" && opt.Tenant != "" && opt.Username != "" && opt.Password != "":
// User with username and password
//nolint:staticcheck // this is deprecated due to Azure policy
options := azidentity.UsernamePasswordCredentialOptions{
ClientOptions: policyClientOptions,
}
@@ -980,6 +991,38 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
f.cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
f.cred, err = azidentity.NewAzureCLICredential(&options)
@@ -1213,7 +1256,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
}
remote = remote[len(prefix):]
if addContainer {
@@ -1295,9 +1338,9 @@ func (f *Fs) containerOK(container string) bool {
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
if !f.containerOK(containerName) {
return nil, fs.ErrorDirNotFound
return fs.ErrorDirNotFound
}
err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1305,16 +1348,16 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// container must be present if listing succeeded
f.cache.MarkOK(containerName)
return entries, nil
return nil
}
// listContainers returns all the containers to out
@@ -1350,14 +1393,47 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir)
if container == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
return f.listContainers(ctx)
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
}
}
return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1534,7 +1610,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
@@ -1684,6 +1760,38 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteContainer(ctx, container)
}
// Get a user delegation which is valid for at least sasCopyValidity
//
// This value is cached in f
func (f *Fs) getUserDelegation(ctx context.Context) (*service.UserDelegationCredential, error) {
f.userDelegationMu.Lock()
defer f.userDelegationMu.Unlock()
if f.userDelegation != nil && time.Until(f.userDelegationExpiry) > sasCopyValidity {
return f.userDelegation, nil
}
// Validity window
start := time.Now().UTC()
expiry := start.Add(2 * sasCopyValidity)
startStr := start.Format(time.RFC3339)
expiryStr := expiry.Format(time.RFC3339)
// Acquire user delegation key from the service client
info := service.KeyInfo{
Start: &startStr,
Expiry: &expiryStr,
}
userDelegationKey, err := f.svc.GetUserDelegationCredential(ctx, info, nil)
if err != nil {
return nil, fmt.Errorf("failed to get user delegation key: %w", err)
}
f.userDelegation = userDelegationKey
f.userDelegationExpiry = expiry
return f.userDelegation, nil
}
// getAuth gets auth to copy o.
//
// tokenOK is used to signal that token based auth (Microsoft Entra
@@ -1695,7 +1803,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// URL (not a SAS) and token will be empty.
//
// If tokenOK is true it may also return a token for the auth.
func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL string, token *string, err error) {
func (o *Object) getAuth(ctx context.Context, noAuth bool) (srcURL string, err error) {
f := o.fs
srcBlobSVC := o.getBlobSVC()
srcURL = srcBlobSVC.URL()
@@ -1704,29 +1812,47 @@ func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL
case noAuth:
// If same storage account then no auth needed
case f.cred != nil:
if !tokenOK {
return srcURL, token, errors.New("not supported: Microsoft Entra ID")
}
options := policy.TokenRequestOptions{}
accessToken, err := f.cred.GetToken(ctx, options)
// Generate a User Delegation SAS URL using Azure AD credentials
userDelegationKey, err := f.getUserDelegation(ctx)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create access token: %w", err)
return "", fmt.Errorf("sas creation: %w", err)
}
token = &accessToken.Token
// Build the SAS values
perms := sas.BlobPermissions{Read: true}
container, containerPath := o.split()
start := time.Now().UTC()
expiry := start.Add(sasCopyValidity)
vals := sas.BlobSignatureValues{
StartTime: start,
ExpiryTime: expiry,
Permissions: perms.String(),
ContainerName: container,
BlobName: containerPath,
}
// Sign with the delegation key
queryParameters, err := vals.SignWithUserDelegation(userDelegationKey)
if err != nil {
return "", fmt.Errorf("signing SAS with user delegation failed: %w", err)
}
// Append the SAS to the URL
srcURL = srcBlobSVC.URL() + "?" + queryParameters.Encode()
case f.sharedKeyCred != nil:
// Generate a short lived SAS URL if using shared key credentials
expiry := time.Now().Add(time.Hour)
expiry := time.Now().Add(sasCopyValidity)
sasOptions := blob.GetSASURLOptions{}
srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create SAS URL: %w", err)
return srcURL, fmt.Errorf("failed to create SAS URL: %w", err)
}
case f.anonymous || f.opt.SASURL != "":
// If using a SASURL or anonymous, no need for any extra auth
default:
return srcURL, token, errors.New("unknown authentication type")
return srcURL, errors.New("unknown authentication type")
}
return srcURL, token, nil
return srcURL, nil
}
// Do multipart parallel copy.
@@ -1747,7 +1873,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
o.fs = f
o.remote = remote
srcURL, token, err := src.getAuth(ctx, true, false)
srcURL, err := src.getAuth(ctx, false)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
@@ -1768,7 +1894,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
var (
srcSize = src.size
partSize = int64(chunksize.Calculator(o, src.size, blockblob.MaxBlocks, f.opt.ChunkSize))
numParts = (srcSize-1)/partSize + 1
numParts = (srcSize + partSize - 1) / partSize
blockIDs = make([]string, numParts) // list of blocks for finalize
g, gCtx = errgroup.WithContext(ctx)
checker = newCheckForInvalidBlockOrBlob("copy", o)
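The revised numParts expression is plain integer ceiling division, so a final short chunk still counts as a part. A quick standalone check of the formula with made-up sizes (a sketch, not backend code):

```go
package main

import "fmt"

func main() {
	for _, c := range []struct{ srcSize, partSize int64 }{
		{10, 4}, {12, 4}, {1, 4},
	} {
		numParts := (c.srcSize + c.partSize - 1) / c.partSize
		fmt.Printf("srcSize=%d partSize=%d -> numParts=%d\n", c.srcSize, c.partSize, numParts)
	}
	// Prints 3, 3 and 1 parts respectively.
}
```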
@@ -1791,7 +1917,8 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
Count: partSize,
},
// Specifies the authorization scheme and signature for the copy source.
CopySourceAuthorization: token,
// We use SAS URLs as CopySourceAuthorization doesn't always seem to work
// CopySourceAuthorization: token,
// CPKInfo *blob.CPKInfo
// CPKScopeInfo *blob.CPKScopeInfo
}
@@ -1861,7 +1988,7 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
dstBlobSVC := f.getBlobSVC(dstContainer, dstPath)
// Get the source auth - none needed for same storage account
srcURL, _, err := src.getAuth(ctx, false, f == src.fs)
srcURL, err := src.getAuth(ctx, f == src.fs)
if err != nil {
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}
@@ -2025,7 +2152,6 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
}
metadata = make(map[string]*string, len(o.meta))
for k, v := range o.meta {
v := v
metadata[k] = &v
}
return metadata
@@ -2176,11 +2302,6 @@ func (o *Object) getTags() (tags map[string]string) {
// getBlobSVC creates a blob client
func (o *Object) getBlobSVC() *blob.Client {
container, directory := o.split()
// If we are trying to remove an all / directory marker then
// this will have one / too many now.
if bucket.IsAllSlashes(o.remote) {
directory = strings.TrimSuffix(directory, "/")
}
return o.fs.getBlobSVC(container, directory)
}
@@ -2582,6 +2703,13 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return -1, err
}
// Only account after the checksum reads have been done
if do, ok := reader.(pool.DelayAccountinger); ok {
// To figure out this number, do a transfer and if the accounted size is 0 or a
// multiple of what it should be, increase or decrease this number.
do.DelayAccounting(2)
}
// Upload the block, with MD5 for check
m := md5.New()
currentChunkSize, err := io.Copy(m, reader)
@@ -2863,6 +2991,9 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
return ui, err
}
}
// if ui.isDirMarker && strings.HasSuffix(containerPath, "//") {
// containerPath = containerPath[:len(containerPath)-1]
// }
// Update Mod time
o.updateMetadataWithModTime(src.ModTime(ctx))
@@ -3055,6 +3186,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.Purger = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}


@@ -56,6 +56,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/readers"
@@ -453,7 +454,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
return nil, fmt.Errorf("create new shared key credential failed: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
options := azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
fmt.Println(cred)
if err != nil {
@@ -516,6 +517,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
}
case opt.ClientID != "" && opt.Tenant != "" && opt.Username != "" && opt.Password != "":
// User with username and password
//nolint:staticcheck // this is deprecated due to Azure policy
options := azidentity.UsernamePasswordCredentialOptions{
ClientOptions: policyClientOptions,
}
@@ -549,7 +551,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
case opt.UseMSI:
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)
var b2i = map[bool]int{false: 0, true: 1}
b2i := map[bool]int{false: 0, true: 1}
set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
if set > 1 {
return nil, errors.New("more than one user-assigned identity ID is set")
@@ -568,6 +570,37 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
// Workload Identity based authentication
var options azidentity.ManagedIdentityCredentialOptions
options.ID = azidentity.ClientID(opt.MSIClientID)
msiCred, err := azidentity.NewManagedIdentityCredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
getClientAssertions := func(context.Context) (string, error) {
token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
Scopes: []string{"api://AzureADTokenExchange"},
})
if err != nil {
return "", fmt.Errorf("failed to acquire MSI token: %w", err)
}
return token.Token, nil
}
assertOpts := &azidentity.ClientAssertionCredentialOptions{}
cred, err = azidentity.NewClientAssertionCredential(
opt.Tenant,
opt.ClientID,
getClientAssertions,
assertOpts)
if err != nil {
return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
}
default:
return nil, errors.New("no authentication method configured")
}
@@ -811,18 +844,35 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
//
// This should return ErrDirNotFound if the directory isn't found.
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
var entries fs.DirEntries
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
subDirClient := f.dirClient(dir)
// Checking whether directory exists
_, err := subDirClient.GetProperties(ctx, nil)
if fileerror.HasCode(err, fileerror.ParentNotFound, fileerror.ResourceNotFound) {
return entries, fs.ErrorDirNotFound
return fs.ErrorDirNotFound
} else if err != nil {
return entries, err
return err
}
var opt = &directory.ListFilesAndDirectoriesOptions{
opt := &directory.ListFilesAndDirectoriesOptions{
Include: directory.ListFilesInclude{
Timestamps: true,
},
@@ -831,7 +881,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
for pager.More() {
resp, err := pager.NextPage(ctx)
if err != nil {
return entries, err
return err
}
for _, directory := range resp.Segment.Directories {
// Name *string `xml:"Name"`
@@ -857,7 +907,10 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
if directory.Properties.ContentLength != nil {
entry.SetSize(*directory.Properties.ContentLength)
}
entries = append(entries, entry)
err = list.Add(entry)
if err != nil {
return err
}
}
for _, file := range resp.Segment.Files {
leaf := f.opt.Enc.ToStandardPath(*file.Name)
@@ -871,10 +924,13 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
if file.Properties.LastWriteTime != nil {
entry.modTime = *file.Properties.LastWriteTime
}
entries = append(entries, entry)
err = list.Add(entry)
if err != nil {
return err
}
}
}
return entries, nil
return list.Flush()
}
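For reference, this is how a caller consumes the ListP contract described in the comment above; List itself just delegates via list.WithListP, which does essentially the following (a simplified sketch assuming the fs.ListPer interface shape implied by the assertions in this diff, not code from the change itself):

```go
package sketch

import (
	"context"

	"github.com/rclone/rclone/fs"
)

// collectAll gathers every tranche delivered by ListP into one slice,
// which is roughly what list.WithListP does on behalf of List.
func collectAll(ctx context.Context, f fs.ListPer, dir string) (fs.DirEntries, error) {
	var entries fs.DirEntries
	err := f.ListP(ctx, dir, func(tranche fs.DirEntries) error {
		entries = append(entries, tranche...)
		return nil
	})
	return entries, err
}
```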
// ------------------------------------------------------------
@@ -921,7 +977,7 @@ func (o *Object) setMetadata(resp *file.GetPropertiesResponse) {
}
}
// readMetaData gets the metadata if it hasn't already been fetched
// getMetadata gets the metadata if it hasn't already been fetched
func (o *Object) getMetadata(ctx context.Context) error {
resp, err := o.fileClient().GetProperties(ctx, nil)
if err != nil {
@@ -981,6 +1037,10 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
SMBProperties: &file.SMBProperties{
LastWriteTime: &t,
},
HTTPHeaders: &file.HTTPHeaders{
ContentMD5: o.md5,
ContentType: &o.contentType,
},
}
_, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
if err != nil {
@@ -1277,10 +1337,29 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
srcURL := srcObj.fileClient().URL()
fc := f.fileClient(remote)
_, err = fc.StartCopyFromURL(ctx, srcURL, &opt)
startCopy, err := fc.StartCopyFromURL(ctx, srcURL, &opt)
if err != nil {
return nil, fmt.Errorf("Copy failed: %w", err)
}
// Poll for completion if necessary
//
// The for loop is never executed for same storage account copies.
copyStatus := startCopy.CopyStatus
var properties file.GetPropertiesResponse
pollTime := 100 * time.Millisecond
for copyStatus != nil && string(*copyStatus) == string(file.CopyStatusTypePending) {
time.Sleep(pollTime)
properties, err = fc.GetProperties(ctx, &file.GetPropertiesOptions{})
if err != nil {
return nil, err
}
copyStatus = properties.CopyStatus
pollTime = min(2*pollTime, time.Second)
}
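The polling loop above doubles the interval each round but caps it at one second. A tiny sketch of the resulting schedule:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	pollTime := 100 * time.Millisecond
	for i := 0; i < 6; i++ {
		fmt.Println(pollTime) // 100ms 200ms 400ms 800ms 1s 1s
		pollTime = min(2*pollTime, time.Second)
	}
}
```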
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, fmt.Errorf("Copy: NewObject failed: %w", err)
@@ -1395,6 +1474,7 @@ var (
_ fs.DirMover = &Fs{}
_ fs.Copier = &Fs{}
_ fs.OpenWriterAter = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)


@@ -48,6 +48,14 @@ type LifecycleRule struct {
FileNamePrefix string `json:"fileNamePrefix"`
}
// ServerSideEncryption is a configuration object for B2 Server-Side Encryption
type ServerSideEncryption struct {
Mode string `json:"mode"`
Algorithm string `json:"algorithm"` // Encryption algorithm to use
CustomerKey string `json:"customerKey"` // User provided Base64 encoded key that is used by the server to encrypt files
CustomerKeyMd5 string `json:"customerKeyMd5"` // An MD5 hash of the decoded key
}
// Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming
@@ -261,21 +269,22 @@ type GetFileInfoRequest struct {
//
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct {
BucketID string `json:"bucketId"` //The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
BucketID string `json:"bucketId"` // The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
ServerSideEncryption *ServerSideEncryption `json:"serverSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption
}
// StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded.
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp,omitempty"` // This is a UTC time when this file was uploaded.
}
// GetUploadPartURLRequest is passed to b2_get_upload_part_url
@@ -325,21 +334,25 @@ type CancelLargeFileResponse struct {
// CopyFileRequest is as passed to b2_copy_file
type CopyFileRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
Name string `json:"fileName"` // The name of the new file being created.
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE
ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only)
Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only)
DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
Name string `json:"fileName"` // The name of the new file being created.
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE
ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only)
Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only)
DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used
SourceServerSideEncryption *ServerSideEncryption `json:"sourceServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the source file
DestinationServerSideEncryption *ServerSideEncryption `json:"destinationServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the destination file
}
// CopyPartRequest is the request for b2_copy_part - the response is UploadPartResponse
type CopyPartRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
SourceServerSideEncryption *ServerSideEncryption `json:"sourceServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the source file
DestinationServerSideEncryption *ServerSideEncryption `json:"destinationServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the destination file
}
// UpdateBucketRequest describes a request to modify a B2 bucket


@@ -8,7 +8,9 @@ import (
"bufio"
"bytes"
"context"
"crypto/md5"
"crypto/sha1"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
@@ -53,6 +55,9 @@ const (
nameHeader = "X-Bz-File-Name"
timestampHeader = "X-Bz-Upload-Timestamp"
retryAfterHeader = "Retry-After"
sseAlgorithmHeader = "X-Bz-Server-Side-Encryption-Customer-Algorithm"
sseKeyHeader = "X-Bz-Server-Side-Encryption-Customer-Key"
sseMd5Header = "X-Bz-Server-Side-Encryption-Customer-Key-Md5"
minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential
@@ -67,7 +72,7 @@ const (
// Globals
var (
errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
errNotWithVersions = errors.New("can't modify files in --b2-versions mode")
errNotWithVersionAt = errors.New("can't modify or delete files in --b2-version-at mode")
)
@@ -252,6 +257,51 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8),
}, {
Name: "sse_customer_algorithm",
Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in B2.",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}, {
Value: "AES256",
Help: "Advanced Encryption Standard (256 bits key length)",
}},
}, {
Name: "sse_customer_key",
Help: `To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key-base64.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}, {
Name: "sse_customer_key_base64",
Help: `To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data.
Alternatively you can provide --sse-customer-key.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}, {
Name: "sse_customer_key_md5",
Help: `If using SSE-C, you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}},
})
}
@@ -274,6 +324,10 @@ type Options struct {
DownloadAuthorizationDuration fs.Duration `config:"download_auth_duration"`
Lifecycle int `config:"lifecycle"`
Enc encoder.MultiEncoder `config:"encoding"`
SSECustomerAlgorithm string `config:"sse_customer_algorithm"`
SSECustomerKey string `config:"sse_customer_key"`
SSECustomerKeyBase64 string `config:"sse_customer_key_base64"`
SSECustomerKeyMD5 string `config:"sse_customer_key_md5"`
}
// Fs represents a remote b2 server
@@ -504,6 +558,24 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.Endpoint == "" {
opt.Endpoint = defaultEndpoint
}
if opt.SSECustomerKey != "" && opt.SSECustomerKeyBase64 != "" {
return nil, errors.New("b2: can't use both sse_customer_key and sse_customer_key_base64 at the same time")
} else if opt.SSECustomerKeyBase64 != "" {
// Decode the Base64-encoded key and store it in the SSECustomerKey field
decoded, err := base64.StdEncoding.DecodeString(opt.SSECustomerKeyBase64)
if err != nil {
return nil, fmt.Errorf("b2: Could not decode sse_customer_key_base64: %w", err)
}
opt.SSECustomerKey = string(decoded)
} else {
// Encode the raw key as Base64
opt.SSECustomerKeyBase64 = base64.StdEncoding.EncodeToString([]byte(opt.SSECustomerKey))
}
if opt.SSECustomerKey != "" && opt.SSECustomerKeyMD5 == "" {
// Calculate CustomerKeyMd5 if not supplied
md5sumBinary := md5.Sum([]byte(opt.SSECustomerKey))
opt.SSECustomerKeyMD5 = base64.StdEncoding.EncodeToString(md5sumBinary[:])
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
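The SSE-C handling above keeps the raw key and its Base64 form in sync and, when sse_customer_key_md5 is not supplied, derives it as the Base64-encoded MD5 of the raw key. A standalone sketch of that derivation (the key value here is made up):

```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	rawKey := "0123456789abcdef0123456789abcdef" // example 32-byte key

	keyB64 := base64.StdEncoding.EncodeToString([]byte(rawKey))
	sum := md5.Sum([]byte(rawKey))
	keyMD5 := base64.StdEncoding.EncodeToString(sum[:])

	fmt.Println("sse_customer_key_base64:", keyB64)
	fmt.Println("sse_customer_key_md5:   ", keyMD5)
}
```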
@@ -847,7 +919,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
last := ""
err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
@@ -855,16 +927,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -890,14 +962,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1403,6 +1507,16 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
Name: f.opt.Enc.FromStandardPath(dstPath),
DestBucketID: destBucketID,
}
if f.opt.SSECustomerKey != "" && f.opt.SSECustomerKeyMD5 != "" {
serverSideEncryptionConfig := api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: f.opt.SSECustomerAlgorithm,
CustomerKey: f.opt.SSECustomerKeyBase64,
CustomerKeyMd5: f.opt.SSECustomerKeyMD5,
}
request.SourceServerSideEncryption = &serverSideEncryptionConfig
request.DestinationServerSideEncryption = &serverSideEncryptionConfig
}
if newInfo == nil {
request.MetadataDirective = "COPY"
} else {
@@ -1673,6 +1787,21 @@ func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
return o.getMetaDataListing(ctx)
}
}
// If using versionAt we need to do a listing to find the correct version.
if o.fs.opt.VersionAt.IsSet() {
info, err := o.getMetaDataListing(ctx)
if err != nil {
return nil, err
}
if info.Action == "hide" {
// Return an object not found error if the current version is deleted.
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
_, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err
}
@@ -1819,9 +1948,10 @@ var _ io.ReadCloser = &openFile{}
func (o *Object) getOrHead(ctx context.Context, method string, options []fs.OpenOption) (resp *http.Response, info *api.File, err error) {
opts := rest.Opts{
Method: method,
Options: options,
NoResponse: method == "HEAD",
Method: method,
Options: options,
NoResponse: method == "HEAD",
ExtraHeaders: map[string]string{},
}
// Use downloadUrl from backblaze if downloadUrl is not set
@@ -1839,6 +1969,11 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
bucket, bucketPath := o.split()
opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath))
}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = o.fs.opt.SSECustomerKeyMD5
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err)
@@ -1883,9 +2018,14 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
// length read from the listing.
// Additionally, the official examples return S3 headers
// instead of native ones, i.e. no file ID, so use the one from the listing.
if info.Size < 0 {
info.Size = o.size
}
if info.ID == "" {
info.ID = o.id
}
return resp, info, nil
}
@@ -2098,6 +2238,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
},
ContentLength: &size,
}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = o.fs.opt.SSECustomerKeyMD5
}
var response api.FileInfo
// Don't retry, return a retry error instead
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -2172,20 +2317,27 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
return info, nil, err
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
if err != nil {
return info, nil, err
}
info = fs.ChunkWriterInfo{
ChunkSize: int64(f.opt.ChunkSize),
ChunkSize: up.chunkSize,
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
return info, up, nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
bucket, bucketPath := o.split()
if o.fs.opt.Versions {
return errNotWithVersions
t, path := api.RemoveVersion(bucketPath)
if !t.IsZero() {
return o.fs.deleteByID(ctx, o.id, path)
}
}
if o.fs.opt.VersionAt.IsSet() {
return errNotWithVersionAt
@@ -2208,32 +2360,36 @@ func (o *Object) ID() string {
var lifecycleHelp = fs.CommandHelp{
Name: "lifecycle",
Short: "Read or set the lifecycle for a bucket",
Short: "Read or set the lifecycle for a bucket.",
Long: `This command can be used to read or set the lifecycle for a bucket.
Usage Examples:
To show the current lifecycle rules:
rclone backend lifecycle b2:bucket
` + "```console" + `
rclone backend lifecycle b2:bucket
` + "```" + `
This will dump something like this showing the lifecycle rules.
[
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
` + "```json" + `
[
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
"daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
` + "```" + `
If there are no lifecycle rules (the default) then it will just return [].
If there are no lifecycle rules (the default) then it will just return ` + "`[]`" + `.
To reset the current lifecycle rules:
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
` + "```console" + `
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
` + "```" + `
This will run and then print the new lifecycle rules as above.
@@ -2245,14 +2401,17 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
the config also which will mean deletions won't cause versions but
overwrites will still cause versions to be made.
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
` + "```console" + `
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
` + "```" + `
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
`,
See: <https://www.backblaze.com/docs/cloud-storage-lifecycle-rules>`,
Opts: map[string]string{
"daysFromHidingToDeleting": "After a file has been hidden for this many days it is deleted. 0 is off.",
"daysFromUploadingToHiding": "This many days after uploading a file is hidden",
"daysFromStartingToCancelingUnfinishedLargeFiles": "Cancels any unfinished large file versions after this many days",
"daysFromHidingToDeleting": `After a file has been hidden for this many days
it is deleted. 0 is off.`,
"daysFromUploadingToHiding": `This many days after uploading a file is hidden.`,
"daysFromStartingToCancelingUnfinishedLargeFiles": `Cancels any unfinished
large file versions after this many days.`,
},
}
@@ -2335,13 +2494,14 @@ max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
` + "```console" + `
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
` + "```" + `
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
"max-age": "Max age of upload to delete.",
},
}
@@ -2364,8 +2524,9 @@ var cleanupHiddenHelp = fs.CommandHelp{
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
`,
` + "```console" + `
rclone backend cleanup-hidden b2:bucket/path/to/dir
` + "```",
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
@@ -2408,6 +2569,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.OpenChunkWriter = &Fs{}
_ fs.Commander = &Fs{}


@@ -446,14 +446,14 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
t.Run("NewObject", func(t *testing.T) {
gotObj, gotErr := f.NewObject(ctx, fileName)
assert.Equal(t, test.wantErr, gotErr)
if gotErr == nil {
assert.Equal(t, test.wantSize, gotObj.Size())
}
})
})
}
})


@@ -144,6 +144,14 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
request.ServerSideEncryption = &api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: o.fs.opt.SSECustomerAlgorithm,
CustomerKey: o.fs.opt.SSECustomerKeyBase64,
CustomerKeyMd5: o.fs.opt.SSECustomerKeyMD5,
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
@@ -295,6 +303,12 @@ func (up *largeUpload) WriteChunk(ctx context.Context, chunkNumber int, reader i
ContentLength: &sizeWithHash,
}
if up.o.fs.opt.SSECustomerKey != "" && up.o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = up.o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = up.o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = up.o.fs.opt.SSECustomerKeyMD5
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
@@ -334,6 +348,17 @@ func (up *largeUpload) copyChunk(ctx context.Context, part int, partSize int64)
PartNumber: int64(part + 1),
Range: fmt.Sprintf("bytes=%d-%d", offset, offset+partSize-1),
}
if up.o.fs.opt.SSECustomerKey != "" && up.o.fs.opt.SSECustomerKeyMD5 != "" {
serverSideEncryptionConfig := api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: up.o.fs.opt.SSECustomerAlgorithm,
CustomerKey: up.o.fs.opt.SSECustomerKeyBase64,
CustomerKeyMd5: up.o.fs.opt.SSECustomerKeyMD5,
}
request.SourceServerSideEncryption = &serverSideEncryptionConfig
request.DestinationServerSideEncryption = &serverSideEncryptionConfig
}
var response api.UploadPartResponse
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
retry, err := up.f.shouldRetry(ctx, resp, err)


@@ -125,10 +125,21 @@ type FolderItems struct {
Offset int `json:"offset"`
Limit int `json:"limit"`
NextMarker *string `json:"next_marker,omitempty"`
Order []struct {
By string `json:"by"`
Direction string `json:"direction"`
} `json:"order"`
// There is some confusion about how this is actually
// returned. The []struct has worked for many years, but in
// https://github.com/rclone/rclone/issues/8776 box was
// returning it returned not as a list. We don't actually use
// this so comment it out.
//
// Order struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
//
// Order []struct {
// By string `json:"by"`
// Direction string `json:"direction"`
// } `json:"order"`
}
// Parent defined the ID of the parent directory
@@ -271,9 +282,9 @@ type User struct {
ModifiedAt time.Time `json:"modified_at"`
Language string `json:"language"`
Timezone string `json:"timezone"`
SpaceAmount int64 `json:"space_amount"`
SpaceUsed int64 `json:"space_used"`
MaxUploadSize int64 `json:"max_upload_size"`
SpaceAmount float64 `json:"space_amount"`
SpaceUsed float64 `json:"space_used"`
MaxUploadSize float64 `json:"max_upload_size"`
Status string `json:"status"`
JobTitle string `json:"job_title"`
Phone string `json:"phone"`
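The switch of these quota fields from int64 to float64 matters because encoding/json will not decode a number written with an exponent or decimal point into an integer field, which is presumably how Box started returning these values. A small demonstration (the payload is invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type asInt struct {
	SpaceAmount int64 `json:"space_amount"`
}

type asFloat struct {
	SpaceAmount float64 `json:"space_amount"`
}

func main() {
	payload := []byte(`{"space_amount": 1.0e+15}`)

	var i asInt
	fmt.Println(json.Unmarshal(payload, &i)) // non-nil: can't unmarshal 1.0e+15 into an int64 field

	var f asFloat
	err := json.Unmarshal(payload, &f)
	fmt.Println(f.SpaceAmount, err) // 1e+15 <nil>
}
```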


@@ -37,6 +37,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
@@ -86,13 +87,11 @@ func init() {
Description: "Box",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token")
var err error
// If using box config.json, use JWT auth
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
if usesJWTAuth(m) {
err = refreshJWTToken(ctx, name, m)
if err != nil {
return nil, fmt.Errorf("failed to configure token with jwt authentication: %w", err)
}
@@ -113,6 +112,11 @@ func init() {
}, {
Name: "box_config_file",
Help: "Box App config.json location\n\nLeave blank normally." + env.ShellExpandHelp,
}, {
Name: "config_credentials",
Help: "Box App config.json contents.\n\nLeave blank normally.",
Hide: fs.OptionHideBoth,
Sensitive: true,
}, {
Name: "access_token",
Help: "Box App Primary Access Token\n\nLeave blank normally.",
@@ -183,9 +187,17 @@ See: https://developer.box.com/guides/authentication/jwt/as-user/
})
}
func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, name string, m configmap.Mapper) error {
jsonFile = env.ShellExpand(jsonFile)
boxConfig, err := getBoxConfig(jsonFile)
func usesJWTAuth(m configmap.Mapper) bool {
jsonFile, okFile := m.Get("box_config_file")
jsonFileCredentials, okCredentials := m.Get("config_credentials")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
return (okFile || okCredentials) && boxSubTypeOk && (jsonFile != "" || jsonFileCredentials != "") && boxSubType != ""
}
func refreshJWTToken(ctx context.Context, name string, m configmap.Mapper) error {
boxSubType, _ := m.Get("box_sub_type")
boxConfig, err := getBoxConfig(m)
if err != nil {
return fmt.Errorf("get box config: %w", err)
}
@@ -204,12 +216,19 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
return err
}
func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
file, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("box: failed to read Box config: %w", err)
func getBoxConfig(m configmap.Mapper) (boxConfig *api.ConfigJSON, err error) {
configFileCredentials, _ := m.Get("config_credentials")
configFileBytes := []byte(configFileCredentials)
if configFileCredentials == "" {
configFile, _ := m.Get("box_config_file")
configFileBytes, err = os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("box: failed to read Box config: %w", err)
}
}
err = json.Unmarshal(file, &boxConfig)
err = json.Unmarshal(configFileBytes, &boxConfig)
if err != nil {
return nil, fmt.Errorf("box: failed to parse Box config: %w", err)
}
@@ -484,15 +503,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.srv.SetHeader("as-user", f.opt.Impersonate)
}
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
if ts != nil {
// If using box config.json and JWT, renewing should just refresh the token and
// should do so whether there are uploads pending or not.
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
if usesJWTAuth(m) {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
err := refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
err := refreshJWTToken(ctx, name, m)
return err
})
f.tokenRenewer.Start()
@@ -705,9 +721,27 @@ OUTER:
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
return err
}
var iErr error
_, err = f.listAll(ctx, directoryID, false, false, true, func(info *api.Item) bool {
@@ -717,14 +751,22 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, info.ModTime()).SetID(info.ID)
// FIXME more info from dir?
entries = append(entries, d)
err = list.Add(d)
if err != nil {
iErr = err
return true
}
} else if info.Type == api.ItemTypeFile {
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
}
entries = append(entries, o)
err = list.Add(o)
if err != nil {
iErr = err
return true
}
}
// Cache some metadata for this Item to help us process events later
@@ -740,12 +782,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return false
})
if err != nil {
return nil, err
return err
}
if iErr != nil {
return nil, iErr
return iErr
}
return entries, nil
return list.Flush()
}
// Creates from the parameters passed in a half finished Object which
@@ -1741,6 +1783,7 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)


@@ -684,7 +684,7 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
start, end int64
}
parseChunks := func(ranges string) (crs []chunkRange, err error) {
for _, part := range strings.Split(ranges, ",") {
for part := range strings.SplitSeq(ranges, ",") {
var start, end int64 = 0, math.MaxInt64
switch ints := strings.Split(part, ":"); len(ints) {
case 1:
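strings.SplitSeq (added in Go 1.24) yields the pieces lazily as an iterator instead of allocating a slice, which is why the rewritten loop above drops the index variable. A minimal illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Same shape as the chunk-range string parsed above, e.g. "0:99,100:199".
	for part := range strings.SplitSeq("0:99,100:199", ",") {
		fmt.Println(part)
	}
}
```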


@@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@@ -1861,6 +1861,8 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// baseMove chains to the wrapped Move or simulates it by Copy+Delete
func (f *Fs) baseMove(ctx context.Context, src fs.Object, remote string, delMode int) (fs.Object, error) {
ctx, ci := fs.AddConfig(ctx)
ci.NameTransform = nil // ensure operations.Move does not double-transform here
var (
dest fs.Object
err error


@@ -187,7 +187,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
g, gCtx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
@@ -241,18 +240,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
canMove, slowHash := true, false
for _, u := range f.upstreams {
features = features.Mask(ctx, u.f) // Mask all upstream fs
if !operations.CanServerSideMove(u.f) {
canMove = false
}
slowHash = slowHash || u.f.Features().SlowHash
}
// We can move if all remotes support Move or Copy
if canMove {
features.Move = f.Move
}
// If any of the upstreams have SlowHash set, propagate it
features.SlowHash = slowHash
// Enable ListR when upstreams either support ListR or are local
// But not when all upstreams are local
if features.ListR == nil {
@@ -366,7 +369,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
@@ -633,7 +635,6 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var uChans []chan time.Duration
for _, u := range f.upstreams {
u := u
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)
@@ -858,7 +859,7 @@ func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) e
}
return wrappedCallback(entries)
}
return listP(ctx, dir, wrappedCallback)
return listP(ctx, uRemote, wrappedCallback)
}
// ListR lists the objects and directories of the Fs starting


@@ -2,10 +2,8 @@
package compress
import (
"bufio"
"bytes"
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
@@ -46,6 +44,7 @@ const (
minCompressionRatio = 1.1
gzFileExt = ".gz"
zstdFileExt = ".zst"
metaFileExt = ".json"
uncompressedFileExt = ".bin"
)
@@ -54,6 +53,7 @@ const (
const (
Uncompressed = 0
Gzip = 2
Zstd = 4
)
var nameRegexp = regexp.MustCompile(`^(.+?)\.([A-Za-z0-9-_]{11})$`)
@@ -66,6 +66,10 @@ func init() {
Value: "gzip",
Help: "Standard gzip compression with fastest parameters.",
},
{
Value: "zstd",
Help: "Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs.",
},
}
// Register our remote
@@ -87,17 +91,23 @@ func init() {
Examples: compressionModeOptions,
}, {
Name: "level",
Help: `GZIP compression level (-2 to 9).
Generally -1 (default, equivalent to 5) is recommended.
Levels 1 to 9 increase compression at the cost of speed. Going past 6
generally offers very little return.
Level -2 uses Huffman encoding only. Only use if you know what you
are doing.
Level 0 turns off compression.`,
Default: sgzip.DefaultCompression,
Advanced: true,
Help: `GZIP (levels -2 to 9):
- -2 — Huffman encoding only. Only use if you know what you're doing.
- -1 (default) — recommended; equivalent to level 5.
- 0 — turns off compression.
- 1 to 9 — increase compression at the cost of speed. Going past 6 generally offers very little return.
ZSTD (levels 0 to 4):
- 0 — turns off compression entirely.
- 1 — fastest compression with the lowest ratio.
- 2 (default) — good balance of speed and compression.
- 3 — better compression, but uses about 2-3x more CPU than the default.
- 4 — best possible compression ratio (highest CPU cost).
Notes:
- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs.
- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5).`,
Required: true,
}, {
Name: "ram_cache_limit",
Help: `Some remotes don't allow the upload of files with unknown size.
@@ -112,6 +122,47 @@ this limit will be cached on disk.`,
})
}
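For the new zstd mode, levels 0 to 4 described above map naturally onto the standard Zstandard encoder speed classes. The sketch below shows such a mapping using the klauspost/compress encoder; treat the mapping and the library choice as assumptions, since the actual zstd mode handler is not part of this excerpt:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// Assumed mapping of the backend's levels 1-4 (0 disables compression entirely).
	levels := map[int]zstd.EncoderLevel{
		1: zstd.SpeedFastest,
		2: zstd.SpeedDefault,
		3: zstd.SpeedBetterCompression,
		4: zstd.SpeedBestCompression,
	}

	var buf bytes.Buffer
	w, err := zstd.NewWriter(&buf, zstd.WithEncoderLevel(levels[2]))
	if err != nil {
		panic(err)
	}
	if _, err := w.Write(bytes.Repeat([]byte("rclone "), 64)); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}
	fmt.Printf("compressed %d bytes down to %d\n", 7*64, buf.Len())
}
```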
// compressionModeHandler defines the interface for handling different compression modes
type compressionModeHandler interface {
// processFileNameGetFileExtension returns the file extension for the given compression mode
processFileNameGetFileExtension(compressionMode int) string
// newObjectGetOriginalSize returns the original file size from the metadata
newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error)
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
isCompressible(r io.Reader, compressionMode int) (bool, error)
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error)
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error)
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error)
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata
}
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
@@ -125,12 +176,13 @@ type Options struct {
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
wrapper fs.Fs
name string
root string
opt Options
mode int // compression mode id
features *fs.Features // optional features
wrapper fs.Fs
name string
root string
opt Options
mode int // compression mode id
features *fs.Features // optional features
modeHandler compressionModeHandler // compression mode handler
}
// NewFs constructs an Fs from the path, container:path
@@ -167,13 +219,28 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
return nil, fmt.Errorf("failed to make remote %s:%q to wrap: %w", wName, remotePath, err)
}
compressionMode := compressionModeFromName(opt.CompressionMode)
var modeHandler compressionModeHandler
switch compressionMode {
case Gzip:
modeHandler = &gzipModeHandler{}
case Zstd:
modeHandler = &zstdModeHandler{}
case Uncompressed:
modeHandler = &uncompressedModeHandler{}
default:
modeHandler = &unknownModeHandler{}
}
// Create the wrapping fs
f := &Fs{
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
mode: compressionModeFromName(opt.CompressionMode),
Fs: wrappedFs,
name: name,
root: rpath,
opt: *opt,
mode: compressionMode,
modeHandler: modeHandler,
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
@@ -215,10 +282,13 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
return f, err
}
// compressionModeFromName converts a compression mode name to its int representation.
func compressionModeFromName(name string) int {
switch name {
case "gzip":
return Gzip
case "zstd":
return Zstd
default:
return Uncompressed
}
@@ -242,7 +312,7 @@ func base64ToInt64(str string) (int64, error) {
// Processes a file name for a compressed file. Returns the original file name, the extension, and the size of the original file.
// Returns -2 for the original size if the file is uncompressed.
func processFileName(compressedFileName string) (origFileName string, extension string, origSize int64, err error) {
func processFileName(compressedFileName string, modeHandler compressionModeHandler) (origFileName string, extension string, origSize int64, err error) {
// Separate the filename and size from the extension
extensionPos := strings.LastIndex(compressedFileName, ".")
if extensionPos == -1 {
@@ -261,7 +331,8 @@ func processFileName(compressedFileName string) (origFileName string, extension
if err != nil {
return "", "", 0, errors.New("could not decode size")
}
return match[1], gzFileExt, size, nil
ext := modeHandler.processFileNameGetFileExtension(compressionModeFromName(compressedFileName[extensionPos+1:]))
return match[1], ext, size, nil
}
// Generates the file name for a metadata file
@@ -286,11 +357,15 @@ func unwrapMetadataFile(filename string) (string, bool) {
// makeDataName generates the file name for a data file with specified compression mode
func makeDataName(remote string, size int64, mode int) (newRemote string) {
if mode != Uncompressed {
switch mode {
case Gzip:
newRemote = remote + "." + int64ToBase64(size) + gzFileExt
} else {
case Zstd:
newRemote = remote + "." + int64ToBase64(size) + zstdFileExt
default:
newRemote = remote + uncompressedFileExt
}
return newRemote
}
@@ -304,7 +379,7 @@ func (f *Fs) dataName(remote string, size int64, compressed bool) (name string)
// addData parses an object and adds it to the DirEntries
func (f *Fs) addData(entries *fs.DirEntries, o fs.Object) {
origFileName, _, size, err := processFileName(o.Remote())
origFileName, _, size, err := processFileName(o.Remote(), f.modeHandler)
if err != nil {
fs.Errorf(o, "Error on parsing file name: %v", err)
return
@@ -427,8 +502,12 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
if err != nil {
return nil, fmt.Errorf("error decoding metadata: %w", err)
}
size, err := f.modeHandler.newObjectGetOriginalSize(meta)
if err != nil {
return nil, fmt.Errorf("error reading metadata: %w", err)
}
// Create our Object
o, err := f.Fs.NewObject(ctx, makeDataName(remote, meta.CompressionMetadata.Size, meta.Mode))
o, err := f.Fs.NewObject(ctx, makeDataName(remote, size, meta.Mode))
if err != nil {
return nil, err
}
@@ -437,7 +516,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// checkCompressAndType checks if an object is compressible and determines its mime type
// returns a multireader with the bytes that were read to determine mime type
func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool, mimeType string, err error) {
func checkCompressAndType(in io.Reader, compressionMode int, modeHandler compressionModeHandler) (newReader io.Reader, compressible bool, mimeType string, err error) {
in, wrap := accounting.UnWrap(in)
buf := make([]byte, heuristicBytes)
n, err := in.Read(buf)
@@ -446,7 +525,7 @@ func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool,
return nil, false, "", err
}
mime := mimetype.Detect(buf)
compressible, err = isCompressible(bytes.NewReader(buf))
compressible, err = modeHandler.isCompressible(bytes.NewReader(buf), compressionMode)
if err != nil {
return nil, false, "", err
}
@@ -454,26 +533,6 @@ func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool,
return wrap(in), compressible, mime.String(), nil
}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func isCompressible(r io.Reader) (bool, error) {
var b bytes.Buffer
w, err := sgzip.NewWriterLevel(&b, sgzip.DefaultCompression)
if err != nil {
return false, err
}
n, err := io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
// verifyObjectHash verifies the Objects hash
func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.MultiHasher, ht hash.Type) error {
srcHash := hasher.Sums()[ht]
@@ -494,9 +553,9 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)
type compressionResult struct {
type compressionResult[T sgzip.GzipMetadata | SzstdMetadata] struct {
err error
meta sgzip.GzipMetadata
meta T
}
// replicating some of operations.Rcat functionality because we want to support remotes without streaming
@@ -537,106 +596,18 @@ func (f *Fs) rcat(ctx context.Context, dstFileName string, in io.ReadCloser, mod
return nil, fmt.Errorf("failed to write temporary local file: %w", err)
}
if _, err = tempFile.Seek(0, 0); err != nil {
return nil, err
return nil, fmt.Errorf("failed to seek temporary local file: %w", err)
}
finfo, err := tempFile.Stat()
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to stat temporary local file: %w", err)
}
return f.Fs.Put(ctx, tempFile, object.NewStaticObjectInfo(dstFileName, modTime, finfo.Size(), false, nil, f.Fs))
}
// Put a compressed version of a file. Returns a wrappable object and metadata.
func (f *Fs) putCompress(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, mimeType string) (fs.Object, *ObjectMetadata, error) {
// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
results := make(chan compressionResult)
go func() {
gz, err := sgzip.NewWriterLevel(pipeWriter, f.opt.CompressionLevel)
if err != nil {
results <- compressionResult{err: err, meta: sgzip.GzipMetadata{}}
return
}
_, err = io.Copy(gz, in)
gzErr := gz.Close()
if gzErr != nil {
fs.Errorf(nil, "Failed to close compress: %v", gzErr)
if err == nil {
err = gzErr
}
}
closeErr := pipeWriter.Close()
if closeErr != nil {
fs.Errorf(nil, "Failed to close pipe: %v", closeErr)
if err == nil {
err = closeErr
}
}
results <- compressionResult{err: err, meta: gz.MetaData()}
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize)) // Probably no longer needed as sgzip has its own buffering
// Find a hash the destination supports to compute a hash of
// the compressed data.
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
// unwrap the accounting again
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
// add the hasher and re-wrap the accounting
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
//o, err := operations.Rcat(ctx, f.Fs, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx))
if err != nil {
if o != nil {
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Errorf(o, "Failed to remove partially transferred object: %v", err)
}
}
return nil, nil, err
}
// Check whether we got an error during compression
result := <-results
err = result.err
if err != nil {
if o != nil {
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Errorf(o, "Failed to remove partially compressed object: %v", err)
}
}
return nil, nil, err
}
// Generate metadata
meta := newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
// Check the hashes of the compressed data if we were comparing them
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
return f.modeHandler.putCompress(ctx, f, in, src, options, mimeType)
}
// Put an uncompressed version of a file. Returns a wrappable object and metadata.
@@ -680,7 +651,8 @@ func (f *Fs) putUncompress(ctx context.Context, in io.Reader, src fs.ObjectInfo,
if err != nil {
return nil, nil, err
}
return o, newMetadata(o.Size(), Uncompressed, sgzip.GzipMetadata{}, hex.EncodeToString(sum), mimeType), nil
return f.modeHandler.putUncompressGetNewMetadata(o, Uncompressed, hex.EncodeToString(sum), mimeType, sum)
}
// This function will write a metadata struct to a metadata Object for an src. Returns a wrappable metadata object.
@@ -751,7 +723,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
o, err := f.NewObject(ctx, src.Remote())
if err == fs.ErrorObjectNotFound {
// Get our file compressibility
in, compressible, mimeType, err := checkCompressAndType(in)
in, compressible, mimeType, err := checkCompressAndType(in, f.mode, f.modeHandler)
if err != nil {
return nil, err
}
@@ -771,7 +743,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
}
found := err == nil
in, compressible, mimeType, err := checkCompressAndType(in)
in, compressible, mimeType, err := checkCompressAndType(in, f.mode, f.modeHandler)
if err != nil {
return nil, err
}
@@ -1090,11 +1062,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, duration fs.Duration
// ObjectMetadata describes the metadata for an Object.
type ObjectMetadata struct {
Mode int // Compression mode of the file.
Size int64 // Size of the object.
MD5 string // MD5 hash of the file.
MimeType string // Mime type of the file
CompressionMetadata sgzip.GzipMetadata
Mode int // Compression mode of the file.
Size int64 // Size of the object.
MD5 string // MD5 hash of the file.
MimeType string // Mime type of the file
CompressionMetadataGzip *sgzip.GzipMetadata // Metadata for Gzip compression
CompressionMetadataZstd *SzstdMetadata // Metadata for Zstd compression
}
// Object with external metadata
@@ -1107,17 +1080,6 @@ type Object struct {
meta *ObjectMetadata // Metadata struct for this object (nil if not loaded)
}
// This function generates a metadata object
func newMetadata(size int64, mode int, cmeta sgzip.GzipMetadata, md5 string, mimeType string) *ObjectMetadata {
meta := new(ObjectMetadata)
meta.Size = size
meta.Mode = mode
meta.CompressionMetadata = cmeta
meta.MD5 = md5
meta.MimeType = mimeType
return meta
}
// This function will read the metadata from a metadata object.
func readMetadata(ctx context.Context, mo fs.Object) (meta *ObjectMetadata, err error) {
// Open our metadata object
@@ -1165,7 +1127,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.mo, o.mo.Update(ctx, in, src, options...)
}
in, compressible, mimeType, err := checkCompressAndType(in)
in, compressible, mimeType, err := checkCompressAndType(in, o.meta.Mode, o.f.modeHandler)
if err != nil {
return err
}
@@ -1278,7 +1240,7 @@ func (o *Object) String() string {
// Remote returns the remote path
func (o *Object) Remote() string {
origFileName, _, _, err := processFileName(o.Object.Remote())
origFileName, _, _, err := processFileName(o.Object.Remote(), o.f.modeHandler)
if err != nil {
fs.Errorf(o.f, "Could not get remote path for: %s", o.Object.Remote())
return o.Object.Remote()
@@ -1381,7 +1343,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
return o.Object.Open(ctx, options...)
}
// Get offset and limit from OpenOptions, pass the rest to the underlying remote
var openOptions = []fs.OpenOption{&fs.SeekOption{Offset: 0}}
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
@@ -1389,31 +1350,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
default:
openOptions = append(openOptions, option)
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {
file, err = sgzip.NewReaderAt(chunkedReader, &o.meta.CompressionMetadata, offset)
} else {
file, err = sgzip.NewReader(chunkedReader)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: chunkedReader}, nil
var retCloser io.Closer = chunkedReader
return o.f.modeHandler.openGetReadCloser(ctx, o, offset, limit, chunkedReader, retCloser, options...)
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source
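The compressionModeHandler interface that this refactor hinges on is declared outside the hunks shown here. As a rough reconstruction from the handler files that follow (method names and signatures are inferred from those implementations, so treat this as a sketch rather than the actual declaration):

type compressionModeHandler interface {
    isCompressible(r io.Reader, compressionMode int) (bool, error)
    newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error)
    openGetReadCloser(ctx context.Context, o *Object, offset, limit int64,
        cr chunkedreader.ChunkedReader, closer io.Closer, options ...fs.OpenOption) (io.ReadCloser, error)
    processFileNameGetFileExtension(compressionMode int) string
    putCompress(ctx context.Context, f *Fs, in io.Reader, src fs.ObjectInfo,
        options []fs.OpenOption, mimeType string) (fs.Object, *ObjectMetadata, error)
    putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error)
    newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata
}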

View File

@@ -48,7 +48,27 @@ func TestRemoteGzip(t *testing.T) {
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "compression_mode", Value: "gzip"},
{Name: name, Key: "mode", Value: "gzip"},
{Name: name, Key: "level", Value: "-1"},
}
opt.QuickTestOK = true
fstests.Run(t, &opt)
}
// TestRemoteZstd tests ZSTD compression
func TestRemoteZstd(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-compress-test-zstd")
name := "TestCompressZstd"
opt := defaultOpt
opt.RemoteName = name + ":"
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "mode", Value: "zstd"},
{Name: name, Key: "level", Value: "2"},
}
opt.QuickTestOK = true
fstests.Run(t, &opt)
}
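For reference, the remote that this test constructs could be written out by hand in rclone.conf roughly as follows (the wrapped path is illustrative; point it at whatever local directory or remote you want compressed):

[TestCompressZstd]
type = compress
remote = /tmp/rclone-compress-test-zstd
mode = zstd
level = 2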

View File

@@ -0,0 +1,207 @@
package compress
import (
"bufio"
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"errors"
"io"
"github.com/buengese/sgzip"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunkedreader"
"github.com/rclone/rclone/fs/hash"
)
// gzipModeHandler implements compressionModeHandler for gzip
type gzipModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (g *gzipModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
var b bytes.Buffer
var n int64
w, err := sgzip.NewWriterLevel(&b, sgzip.DefaultCompression)
if err != nil {
return false, err
}
n, err = io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
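// Illustration (added note, not in the original source): if 1 MiB of input
// gzips down to 512 KiB, then n = 1048576, b.Len() = 524288 and the ratio is
// 2.0; the data counts as compressible whenever that ratio exceeds
// minCompressionRatio, whose value is defined elsewhere in this package.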
// newObjectGetOriginalSize returns the original file size from the metadata
func (g *gzipModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
if meta.CompressionMetadataGzip == nil {
return 0, errors.New("missing gzip metadata")
}
return meta.CompressionMetadataGzip.Size, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (g *gzipModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
var file io.Reader
if offset != 0 {
file, err = sgzip.NewReaderAt(cr, o.meta.CompressionMetadataGzip, offset)
} else {
file, err = sgzip.NewReader(cr)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: closer}, nil
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (g *gzipModeHandler) processFileNameGetFileExtension(compressionMode int) string {
if compressionMode == Gzip {
return gzFileExt
}
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (g *gzipModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
resultsGzip := make(chan compressionResult[sgzip.GzipMetadata])
go func() {
gz, err := sgzip.NewWriterLevel(pipeWriter, f.opt.CompressionLevel)
if err != nil {
resultsGzip <- compressionResult[sgzip.GzipMetadata]{err: err, meta: sgzip.GzipMetadata{}}
close(resultsGzip)
return
}
_, err = io.Copy(gz, in)
gzErr := gz.Close()
if gzErr != nil && err == nil {
err = gzErr
}
closeErr := pipeWriter.Close()
if closeErr != nil && err == nil {
err = closeErr
}
resultsGzip <- compressionResult[sgzip.GzipMetadata]{err: err, meta: gz.MetaData()}
close(resultsGzip)
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize)) // Probably no longer needed as sgzip has its own buffering
// Find a hash the destination supports to compute a hash of
// the compressed data.
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
// unwrap the accounting again
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
// add the hasher and re-wrap the accounting
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
if err != nil {
if o != nil {
if removeErr := o.Remove(ctx); removeErr != nil {
fs.Errorf(o, "Failed to remove partially transferred object: %v", removeErr)
}
}
return nil, nil, err
}
// Check whether we got an error during compression
result := <-resultsGzip
if result.err != nil {
if o != nil {
if removeErr := o.Remove(ctx); removeErr != nil {
fs.Errorf(o, "Failed to remove partially compressed object: %v", removeErr)
}
}
return nil, nil, result.err
}
// Generate metadata
meta := g.newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
// Check the hashes of the compressed data if we were comparing them
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (g *gzipModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return o, g.newMetadata(o.Size(), mode, sgzip.GzipMetadata{}, hex.EncodeToString(sum), mimeType), nil
}
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
func (g *gzipModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
meta, ok := cmeta.(sgzip.GzipMetadata)
if !ok {
panic("invalid cmeta type: expected sgzip.GzipMetadata")
}
objMeta := new(ObjectMetadata)
objMeta.Size = size
objMeta.Mode = mode
objMeta.CompressionMetadataGzip = &meta
objMeta.CompressionMetadataZstd = nil
objMeta.MD5 = md5
objMeta.MimeType = mimeType
return objMeta
}

View File

@@ -0,0 +1,327 @@
package compress
import (
"context"
"errors"
"io"
"runtime"
"sync"
szstd "github.com/a1ex3/zstd-seekable-format-go/pkg"
"github.com/klauspost/compress/zstd"
)
const szstdChunkSize int = 1 << 20 // 1 MiB chunk size
// SzstdMetadata holds metadata for szstd compressed files.
type SzstdMetadata struct {
BlockSize int // BlockSize is the size of the blocks in the zstd file
Size int64 // Size is the uncompressed size of the file
BlockData []uint32 // BlockData is the block data for the zstd file, used for seeking
}
// SzstdWriter is a writer that compresses data in szstd format.
type SzstdWriter struct {
enc *zstd.Encoder
w szstd.ConcurrentWriter
metadata SzstdMetadata
mu sync.Mutex
}
// NewWriterSzstd creates a new szstd writer with the specified options.
// It initializes the szstd writer with a zstd encoder and returns a pointer to the SzstdWriter.
// The writer can be used to write data in chunks, and it will automatically handle block sizes and metadata.
func NewWriterSzstd(w io.Writer, opts ...zstd.EOption) (*SzstdWriter, error) {
encoder, err := zstd.NewWriter(nil, opts...)
if err != nil {
return nil, err
}
sw, err := szstd.NewWriter(w, encoder)
if err != nil {
if err := encoder.Close(); err != nil {
return nil, err
}
return nil, err
}
return &SzstdWriter{
enc: encoder,
w: sw,
metadata: SzstdMetadata{
BlockSize: szstdChunkSize,
Size: 0,
},
}, nil
}
// Write writes data to the szstd writer in chunks of szstdChunkSize.
// It handles the block size and metadata updates automatically.
func (w *SzstdWriter) Write(p []byte) (int, error) {
if len(p) == 0 {
return 0, nil
}
if w.metadata.BlockData == nil {
numBlocks := (len(p) + w.metadata.BlockSize - 1) / w.metadata.BlockSize
w.metadata.BlockData = make([]uint32, 1, numBlocks+1)
w.metadata.BlockData[0] = 0
}
start := 0
total := len(p)
var writerFunc szstd.FrameSource = func() ([]byte, error) {
if start >= total {
return nil, nil
}
end := min(start+w.metadata.BlockSize, total)
chunk := p[start:end]
size := end - start
w.mu.Lock()
w.metadata.Size += int64(size)
w.mu.Unlock()
start = end
return chunk, nil
}
// write sizes of compressed blocks in the callback
err := w.w.WriteMany(context.Background(), writerFunc,
szstd.WithWriteCallback(func(size uint32) {
w.mu.Lock()
lastOffset := w.metadata.BlockData[len(w.metadata.BlockData)-1]
w.metadata.BlockData = append(w.metadata.BlockData, lastOffset+size)
w.mu.Unlock()
}),
)
if err != nil {
return 0, err
}
return total, nil
}
// Close closes the SzstdWriter and its underlying encoder.
func (w *SzstdWriter) Close() error {
if err := w.w.Close(); err != nil {
return err
}
if err := w.enc.Close(); err != nil {
return err
}
return nil
}
// GetMetadata returns the metadata of the szstd writer.
func (w *SzstdWriter) GetMetadata() SzstdMetadata {
return w.metadata
}
// SzstdReaderAt is a reader that allows random access in szstd compressed data.
type SzstdReaderAt struct {
r szstd.Reader
decoder *zstd.Decoder
metadata *SzstdMetadata
pos int64
mu sync.Mutex
}
// NewReaderAtSzstd creates a new SzstdReaderAt from the specified io.ReadSeeker.
func NewReaderAtSzstd(rs io.ReadSeeker, meta *SzstdMetadata, offset int64, opts ...zstd.DOption) (*SzstdReaderAt, error) {
decoder, err := zstd.NewReader(nil, opts...)
if err != nil {
return nil, err
}
r, err := szstd.NewReader(rs, decoder)
if err != nil {
decoder.Close()
return nil, err
}
sr := &SzstdReaderAt{
r: r,
decoder: decoder,
metadata: meta,
pos: 0,
}
// Set initial position to the provided offset
if _, err := sr.Seek(offset, io.SeekStart); err != nil {
if err := sr.Close(); err != nil {
return nil, err
}
return nil, err
}
return sr, nil
}
// Seek sets the offset for the next Read.
func (s *SzstdReaderAt) Seek(offset int64, whence int) (int64, error) {
s.mu.Lock()
defer s.mu.Unlock()
pos, err := s.r.Seek(offset, whence)
if err == nil {
s.pos = pos
}
return pos, err
}
func (s *SzstdReaderAt) Read(p []byte) (int, error) {
s.mu.Lock()
defer s.mu.Unlock()
n, err := s.r.Read(p)
if err == nil {
s.pos += int64(n)
}
return n, err
}
// ReadAt reads data at the specified offset.
func (s *SzstdReaderAt) ReadAt(p []byte, off int64) (int, error) {
if off < 0 {
return 0, errors.New("invalid offset")
}
if off >= s.metadata.Size {
return 0, io.EOF
}
endOff := min(off+int64(len(p)), s.metadata.Size)
// Find all blocks covered by the range
type blockInfo struct {
index int // Block index
offsetInBlock int64 // Offset within the block for starting reading
bytesToRead int64 // How many bytes to read from this block
}
var blocks []blockInfo
uncompressedOffset := int64(0)
currentOff := off
for i := 0; i < len(s.metadata.BlockData)-1; i++ {
blockUncompressedEnd := min(uncompressedOffset+int64(s.metadata.BlockSize), s.metadata.Size)
if currentOff < blockUncompressedEnd && endOff > uncompressedOffset {
offsetInBlock := max(0, currentOff-uncompressedOffset)
bytesToRead := min(blockUncompressedEnd-uncompressedOffset-offsetInBlock, endOff-currentOff)
blocks = append(blocks, blockInfo{
index: i,
offsetInBlock: offsetInBlock,
bytesToRead: bytesToRead,
})
currentOff += bytesToRead
if currentOff >= endOff {
break
}
}
uncompressedOffset = blockUncompressedEnd
}
if len(blocks) == 0 {
return 0, io.EOF
}
// Parallel block decoding
type decodeResult struct {
index int
data []byte
err error
}
resultCh := make(chan decodeResult, len(blocks))
var wg sync.WaitGroup
sem := make(chan struct{}, runtime.NumCPU())
for _, block := range blocks {
wg.Add(1)
go func(block blockInfo) {
defer wg.Done()
sem <- struct{}{}
defer func() { <-sem }()
startOffset := int64(s.metadata.BlockData[block.index])
endOffset := int64(s.metadata.BlockData[block.index+1])
compressedSize := endOffset - startOffset
compressed := make([]byte, compressedSize)
_, err := s.r.ReadAt(compressed, startOffset)
if err != nil && err != io.EOF {
resultCh <- decodeResult{index: block.index, err: err}
return
}
decoded, err := s.decoder.DecodeAll(compressed, nil)
if err != nil {
resultCh <- decodeResult{index: block.index, err: err}
return
}
resultCh <- decodeResult{index: block.index, data: decoded, err: nil}
}(block)
}
go func() {
wg.Wait()
close(resultCh)
}()
// Collect results in block index order
totalRead := 0
results := make(map[int]decodeResult)
expected := len(blocks)
minIndex := blocks[0].index
for res := range resultCh {
results[res.index] = res
for {
if result, ok := results[minIndex]; ok {
if result.err != nil {
return 0, result.err
}
// find the corresponding blockInfo
var blk blockInfo
for _, b := range blocks {
if b.index == result.index {
blk = b
break
}
}
start := blk.offsetInBlock
end := start + blk.bytesToRead
copy(p[totalRead:totalRead+int(blk.bytesToRead)], result.data[start:end])
totalRead += int(blk.bytesToRead)
minIndex++
if minIndex-blocks[0].index >= len(blocks) {
break
}
} else {
break
}
}
if len(results) == expected && minIndex-blocks[0].index >= len(blocks) {
break
}
}
return totalRead, nil
}
// Close closes the SzstdReaderAt and underlying decoder.
func (s *SzstdReaderAt) Close() error {
if err := s.r.Close(); err != nil {
return err
}
s.decoder.Close()
return nil
}
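As a rough usage sketch of the two types defined in this file (not part of the backend; it assumes the same package plus an extra "bytes" import, and the sizes are placeholders):

// roundTripSzstd compresses payload into memory, then uses the recorded seek
// table to read length bytes starting at uncompressed offset off.
func roundTripSzstd(payload []byte, off int64, length int) ([]byte, error) {
    var buf bytes.Buffer
    w, err := NewWriterSzstd(&buf, zstd.WithEncoderLevel(zstd.SpeedDefault))
    if err != nil {
        return nil, err
    }
    if _, err := w.Write(payload); err != nil {
        return nil, err
    }
    if err := w.Close(); err != nil {
        return nil, err
    }
    meta := w.GetMetadata() // BlockSize, uncompressed Size and the BlockData seek table

    r, err := NewReaderAtSzstd(bytes.NewReader(buf.Bytes()), &meta, 0)
    if err != nil {
        return nil, err
    }
    defer func() { _ = r.Close() }()

    out := make([]byte, length)
    n, err := r.ReadAt(out, off)
    if err != nil && err != io.EOF {
        return nil, err
    }
    return out[:n], nil
}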

View File

@@ -0,0 +1,65 @@
package compress
import (
"context"
"fmt"
"io"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunkedreader"
)
// uncompressedModeHandler implements compressionModeHandler for uncompressed files
type uncompressedModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (u *uncompressedModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
return false, nil
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (u *uncompressedModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
return 0, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (u *uncompressedModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
return o.Object.Open(ctx, options...)
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (u *uncompressedModeHandler) processFileNameGetFileExtension(compressionMode int) string {
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (u *uncompressedModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unsupported compression mode %d", f.mode)
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (u *uncompressedModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unsupported compression mode %d", Uncompressed)
}
// newMetadata generates the metadata object for a compression algorithm.
// Uncompressed objects carry no compression metadata, so this always returns nil.
func (u *uncompressedModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
return nil
}

View File

@@ -0,0 +1,65 @@
package compress
import (
"context"
"fmt"
"io"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunkedreader"
)
// unknownModeHandler implements compressionModeHandler for unknown compression types
type unknownModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (unk *unknownModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
return false, fmt.Errorf("unknown compression mode %d", compressionMode)
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (unk *unknownModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
return 0, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (unk *unknownModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
return nil, fmt.Errorf("unknown compression mode %d", o.meta.Mode)
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (unk *unknownModeHandler) processFileNameGetFileExtension(compressionMode int) string {
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (unk *unknownModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unknown compression mode %d", f.mode)
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (unk *unknownModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unknown compression mode")
}
// newMetadata generates the metadata object for a compression algorithm.
// The compression mode is unknown here, so this always returns nil.
func (unk *unknownModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
return nil
}

View File

@@ -0,0 +1,192 @@
package compress
import (
"bufio"
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"errors"
"io"
"github.com/klauspost/compress/zstd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunkedreader"
"github.com/rclone/rclone/fs/hash"
)
// zstdModeHandler implements compressionModeHandler for zstd
type zstdModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (z *zstdModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
var b bytes.Buffer
var n int64
w, err := NewWriterSzstd(&b, zstd.WithEncoderLevel(zstd.SpeedDefault))
if err != nil {
return false, err
}
n, err = io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (z *zstdModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
if meta.CompressionMetadataZstd == nil {
return 0, errors.New("missing zstd metadata")
}
return meta.CompressionMetadataZstd.Size, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (z *zstdModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
var file io.Reader
if offset != 0 {
file, err = NewReaderAtSzstd(cr, o.meta.CompressionMetadataZstd, offset)
} else {
file, err = zstd.NewReader(cr)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: closer}, nil
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (z *zstdModeHandler) processFileNameGetFileExtension(compressionMode int) string {
if compressionMode == Zstd {
return zstdFileExt
}
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (z *zstdModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
resultsZstd := make(chan compressionResult[SzstdMetadata])
go func() {
writer, err := NewWriterSzstd(pipeWriter, zstd.WithEncoderLevel(zstd.EncoderLevel(f.opt.CompressionLevel)))
if err != nil {
resultsZstd <- compressionResult[SzstdMetadata]{err: err}
close(resultsZstd)
return
}
_, err = io.Copy(writer, in)
if wErr := writer.Close(); wErr != nil && err == nil {
err = wErr
}
if cErr := pipeWriter.Close(); cErr != nil && err == nil {
err = cErr
}
resultsZstd <- compressionResult[SzstdMetadata]{err: err, meta: writer.GetMetadata()}
close(resultsZstd)
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize))
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
if err != nil {
return nil, nil, err
}
result := <-resultsZstd
if result.err != nil {
if o != nil {
_ = o.Remove(ctx)
}
return nil, nil, result.err
}
// Build metadata using uncompressed size for filename
meta := z.newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (z *zstdModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return o, z.newMetadata(o.Size(), mode, SzstdMetadata{}, hex.EncodeToString(sum), mimeType), nil
}
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
func (z *zstdModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
meta, ok := cmeta.(SzstdMetadata)
if !ok {
panic("invalid cmeta type: expected SzstdMetadata")
}
objMeta := new(ObjectMetadata)
objMeta.Size = size
objMeta.Mode = mode
objMeta.CompressionMetadataGzip = nil
objMeta.CompressionMetadataZstd = &meta
objMeta.MD5 = md5
objMeta.MimeType = mimeType
return objMeta
}

View File

@@ -923,28 +923,30 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var commandHelp = []fs.CommandHelp{
{
Name: "encode",
Short: "Encode the given filename(s)",
Short: "Encode the given filename(s).",
Long: `This encodes the filenames given as arguments returning a list of
strings of the encoded results.
Usage Example:
Usage examples:
rclone backend encode crypt: file1 [file2...]
rclone rc backend/command command=encode fs=crypt: file1 [file2...]
`,
` + "```console" + `
rclone backend encode crypt: file1 [file2...]
rclone rc backend/command command=encode fs=crypt: file1 [file2...]
` + "```",
},
{
Name: "decode",
Short: "Decode the given filename(s)",
Short: "Decode the given filename(s).",
Long: `This decodes the filenames given as arguments returning a list of
strings of the decoded results. It will return an error if any of the
inputs are invalid.
Usage Example:
Usage examples:
rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]
`,
` + "```console" + `
rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]
` + "```",
},
}

View File

@@ -0,0 +1,38 @@
// Type definitions specific to Dataverse
package api
// DataverseDatasetResponse is returned by the Dataverse dataset API
type DataverseDatasetResponse struct {
Status string `json:"status"`
Data DataverseDataset `json:"data"`
}
// DataverseDataset is the representation of a dataset
type DataverseDataset struct {
LatestVersion DataverseDatasetVersion `json:"latestVersion"`
}
// DataverseDatasetVersion is the representation of a dataset version
type DataverseDatasetVersion struct {
LastUpdateTime string `json:"lastUpdateTime"`
Files []DataverseFile `json:"files"`
}
// DataverseFile is the representation of a file found in a dataset
type DataverseFile struct {
DirectoryLabel string `json:"directoryLabel"`
DataFile DataverseDataFile `json:"dataFile"`
}
// DataverseDataFile represents file metadata details
type DataverseDataFile struct {
ID int64 `json:"id"`
Filename string `json:"filename"`
ContentType string `json:"contentType"`
FileSize int64 `json:"filesize"`
OriginalFileFormat string `json:"originalFileFormat"`
OriginalFileSize int64 `json:"originalFileSize"`
OriginalFileName string `json:"originalFileName"`
MD5 string `json:"md5"`
}
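For orientation, a trimmed dataset response that these structs decode would look roughly like the following (field names follow the struct tags above; all values are hypothetical placeholders):

{
  "status": "OK",
  "data": {
    "latestVersion": {
      "lastUpdateTime": "2024-01-01T00:00:00Z",
      "files": [
        {
          "directoryLabel": "data",
          "dataFile": {
            "id": 1,
            "filename": "example.tab",
            "contentType": "text/tab-separated-values",
            "filesize": 1234,
            "originalFileFormat": "text/csv",
            "originalFileSize": 1300,
            "originalFileName": "example.csv",
            "md5": "d41d8cd98f00b204e9800998ecf8427e"
          }
        }
      ]
    }
  }
}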

View File

@@ -0,0 +1,33 @@
// Type definitions specific to InvenioRDM
package api
// InvenioRecordResponse is the representation of a record stored in InvenioRDM
type InvenioRecordResponse struct {
Links InvenioRecordResponseLinks `json:"links"`
}
// InvenioRecordResponseLinks represents a record's links
type InvenioRecordResponseLinks struct {
Self string `json:"self"`
}
// InvenioFilesResponse is the representation of a record's files
type InvenioFilesResponse struct {
Entries []InvenioFilesResponseEntry `json:"entries"`
}
// InvenioFilesResponseEntry is the representation of a file entry
type InvenioFilesResponseEntry struct {
Key string `json:"key"`
Checksum string `json:"checksum"`
Size int64 `json:"size"`
Updated string `json:"updated"`
MimeType string `json:"mimetype"`
Links InvenioFilesResponseEntryLinks `json:"links"`
}
// InvenioFilesResponseEntryLinks represents file links details
type InvenioFilesResponseEntryLinks struct {
Content string `json:"content"`
}

26
backend/doi/api/types.go Normal file
View File

@@ -0,0 +1,26 @@
// Package api has general type definitions for doi
package api
// DoiResolverResponse is returned by the DOI resolver API
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
type DoiResolverResponse struct {
ResponseCode int `json:"responseCode"`
Handle string `json:"handle"`
Values []DoiResolverResponseValue `json:"values"`
}
// DoiResolverResponseValue is a single handle record value
type DoiResolverResponseValue struct {
Index int `json:"index"`
Type string `json:"type"`
Data DoiResolverResponseValueData `json:"data"`
TTL int `json:"ttl"`
Timestamp string `json:"timestamp"`
}
// DoiResolverResponseValueData is the data held in a handle value
type DoiResolverResponseValueData struct {
Format string `json:"format"`
Value any `json:"value"`
}
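A minimal illustration of the resolver payload these types model, using the 10.1000/182 example DOI that appears in comments elsewhere in this backend (field names follow the struct tags above; values are placeholders):

{
  "responseCode": 1,
  "handle": "10.1000/182",
  "values": [
    {
      "index": 1,
      "type": "URL",
      "data": { "format": "string", "value": "https://example.org/dataset-landing-page" },
      "ttl": 86400,
      "timestamp": "2024-01-01T00:00:00Z"
    }
  ]
}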

112
backend/doi/dataverse.go Normal file
View File

@@ -0,0 +1,112 @@
// Implementation for Dataverse
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// Returns true if resolvedURL is likely a DOI hosted on a Dataverse installation
func activateDataverse(resolvedURL *url.URL) (isActive bool) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
return persistentID != ""
}
// Resolve the main API endpoint for a DOI hosted on a Dataverse installation
func resolveDataverseEndpoint(resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
queryValues := resolvedURL.Query()
persistentID := queryValues.Get("persistentId")
query := url.Values{}
query.Add("persistentId", persistentID)
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/datasets/:persistentId/", RawQuery: query.Encode()})
return Dataverse, endpointURL, nil
}
// dataverseProvider implements the doiProvider interface for Dataverse installations
type dataverseProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (dp *dataverseProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := dp.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := dp.f.endpoint
var res *http.Response
var result api.DataverseDatasetResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
Parameters: filesURL.Query(),
}
err = dp.f.pacer.Call(func() (bool, error) {
res, err = dp.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
modTime, modTimeErr := time.Parse(time.RFC3339, result.Data.LatestVersion.LastUpdateTime)
if modTimeErr != nil {
fs.Logf(dp.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
for _, file := range result.Data.LatestVersion.Files {
contentURLPath := fmt.Sprintf("/api/access/datafile/%d", file.DataFile.ID)
query := url.Values{}
query.Add("format", "original")
contentURL := dp.f.endpoint.ResolveReference(&url.URL{Path: contentURLPath, RawQuery: query.Encode()})
entry := &Object{
fs: dp.f,
remote: path.Join(file.DirectoryLabel, file.DataFile.Filename),
contentURL: contentURL.String(),
size: file.DataFile.FileSize,
modTime: modTime,
md5: file.DataFile.MD5,
contentType: file.DataFile.ContentType,
}
if file.DataFile.OriginalFileName != "" {
entry.remote = path.Join(file.DirectoryLabel, file.DataFile.OriginalFileName)
entry.size = file.DataFile.OriginalFileSize
entry.contentType = file.DataFile.OriginalFileFormat
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
dp.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newDataverseProvider(f *Fs) doiProvider {
return &dataverseProvider{
f: f,
}
}
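To make the resolution step concrete with a hypothetical example: a landing page such as https://dataverse.example.edu/dataset.xhtml?persistentId=doi:10.70122/FK2/EXAMPLE is recognised by activateDataverse through its non-empty persistentId query parameter, and resolveDataverseEndpoint rewrites it to https://dataverse.example.edu/api/datasets/:persistentId/?persistentId=doi%3A10.70122%2FFK2%2FEXAMPLE, which ListEntries then queries for the dataset's file listing.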

653
backend/doi/doi.go Normal file
View File

@@ -0,0 +1,653 @@
// Package doi provides a filesystem interface for digital objects identified by DOIs.
//
// See: https://www.doi.org/the-identifier/what-is-a-doi/
package doi
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/cache"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
// the URL of the DOI resolver
//
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
doiResolverAPIURL = "https://doi.org/api"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
var (
errorReadOnly = errors.New("doi remotes are read only")
timeUnset = time.Unix(0, 0)
)
func init() {
fsi := &fs.RegInfo{
Name: "doi",
Description: "DOI datasets",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "doi",
Help: "The DOI or the doi.org URL.",
Required: true,
}, {
Name: fs.ConfigProvider,
Help: `DOI provider.
The DOI provider can be set when rclone does not automatically recognize a supported DOI provider.`,
Examples: []fs.OptionExample{
{
Value: "auto",
Help: "Auto-detect provider",
},
{
Value: string(Zenodo),
Help: "Zenodo",
}, {
Value: string(Dataverse),
Help: "Dataverse",
}, {
Value: string(Invenio),
Help: "Invenio",
}},
Required: false,
Advanced: true,
}, {
Name: "doi_resolver_api_url",
Help: `The URL of the DOI resolver API to use.
The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".`,
Required: false,
Advanced: true,
}},
}
fs.Register(fsi)
}
// Provider defines the type of provider hosting the DOI
type Provider string
const (
// Zenodo provider, see https://zenodo.org
Zenodo Provider = "zenodo"
// Dataverse provider, see https://dataverse.harvard.edu
Dataverse Provider = "dataverse"
// Invenio provider, see https://inveniordm.docs.cern.ch
Invenio Provider = "invenio"
)
// Options defines the configuration for this backend
type Options struct {
Doi string `config:"doi"` // The DOI, a digital identifier of an object, usually a dataset
Provider string `config:"provider"` // The DOI provider
DoiResolverAPIURL string `config:"doi_resolver_api_url"` // The URL of the DOI resolver API to use.
}
// Fs stores the interface to the remote DOI dataset
type Fs struct {
name string // name of this remote
root string // the path we are working on
provider Provider // the DOI provider
doiProvider doiProvider // the interface used to interact with the DOI provider
features *fs.Features // optional features
opt Options // options for this backend
ci *fs.ConfigInfo // global config
endpoint *url.URL // the main API endpoint for this remote
endpointURL string // endpoint as a string
srv *rest.Client // the connection to the server
pacer *fs.Pacer // pacer for API calls
cache *cache.Cache // a cache for the remote metadata
}
// Object is a remote object that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs // what this object is part of
remote string // the remote path
contentURL string // the URL where the contents of the file can be downloaded
size int64 // size of the object
modTime time.Time // modification time of the object
contentType string // content type of the object
md5 string // MD5 hash of the object content
}
// doiProvider is the interface used to list objects in a DOI
type doiProvider interface {
// ListEntries returns the full list of entries found at the remote, regardless of root
ListEntries(ctx context.Context) (entries []*Object, err error)
}
// Parse the input string as a DOI
// Examples:
// 10.1000/182 -> 10.1000/182
// https://doi.org/10.1000/182 -> 10.1000/182
// doi:10.1000/182 -> 10.1000/182
func parseDoi(doi string) string {
doiURL, err := url.Parse(doi)
if err != nil {
return doi
}
if doiURL.Scheme == "doi" {
return strings.TrimLeft(strings.TrimPrefix(doi, "doi:"), "/")
}
if strings.HasSuffix(doiURL.Hostname(), "doi.org") {
return strings.TrimLeft(doiURL.Path, "/")
}
return doi
}
// Resolve a DOI to a URL
// Reference: https://www.doi.org/the-identifier/resources/factsheets/doi-resolution-documentation
func resolveDoiURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (doiURL *url.URL, err error) {
resolverURL := opt.DoiResolverAPIURL
if resolverURL == "" {
resolverURL = doiResolverAPIURL
}
var result api.DoiResolverResponse
params := url.Values{}
params.Add("index", "1")
opts := rest.Opts{
Method: "GET",
RootURL: resolverURL,
Path: "/handles/" + opt.Doi,
Parameters: params,
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.ResponseCode != 1 {
return nil, fmt.Errorf("could not resolve DOI (error code %d)", result.ResponseCode)
}
resolvedURLStr := ""
for _, value := range result.Values {
if value.Type == "URL" && value.Data.Format == "string" {
valueStr, ok := value.Data.Value.(string)
if !ok {
return nil, fmt.Errorf("could not resolve DOI (incorrect response format)")
}
resolvedURLStr = valueStr
}
}
resolvedURL, err := url.Parse(resolvedURLStr)
if err != nil {
return nil, err
}
return resolvedURL, nil
}
// Resolve the passed configuration into a provider and endpoint
func resolveEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, opt *Options) (provider Provider, endpoint *url.URL, err error) {
resolvedURL, err := resolveDoiURL(ctx, srv, pacer, opt)
if err != nil {
return "", nil, err
}
switch opt.Provider {
case string(Dataverse):
return resolveDataverseEndpoint(resolvedURL)
case string(Invenio):
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
case string(Zenodo):
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
hostname := strings.ToLower(resolvedURL.Hostname())
if hostname == "dataverse.harvard.edu" || activateDataverse(resolvedURL) {
return resolveDataverseEndpoint(resolvedURL)
}
if hostname == "zenodo.org" || strings.HasSuffix(hostname, ".zenodo.org") {
return resolveZenodoEndpoint(ctx, srv, pacer, resolvedURL, opt.Doi)
}
if activateInvenio(ctx, srv, pacer, resolvedURL) {
return resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
}
return "", nil, fmt.Errorf("provider '%s' is not supported", resolvedURL.Hostname())
}
// Make the http connection from the passed options
func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err error) {
provider, endpoint, err := resolveEndpoint(ctx, f.srv, f.pacer, opt)
if err != nil {
return false, err
}
// Update f with the new parameters
f.srv.SetRoot(endpoint.ResolveReference(&url.URL{Path: "/"}).String())
f.endpoint = endpoint
f.endpointURL = endpoint.String()
f.provider = provider
f.opt.Provider = string(provider)
switch f.provider {
case Dataverse:
f.doiProvider = newDataverseProvider(f)
case Invenio, Zenodo:
f.doiProvider = newInvenioProvider(f)
default:
return false, fmt.Errorf("provider type '%s' not supported", f.provider)
}
// Determine if the root is a file
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return false, err
}
for _, entry := range entries {
if entry.remote == f.root {
isFile = true
break
}
}
return isFile, nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this res and err
// deserve to be retried. It returns the err as a convenience.
func shouldRetry(ctx context.Context, res *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(res, retryErrorCodes), err
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
root = strings.Trim(root, "/")
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
opt.Doi = parseDoi(opt.Doi)
client := fshttp.NewClient(ctx)
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: ci,
srv: rest.NewClient(client),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
cache: cache.New(),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
isFile, err := f.httpConnection(ctx, opt)
if err != nil {
return nil, err
}
if isFile {
// return an error with an fs which points to the parent
newRoot := path.Dir(f.root)
if newRoot == "." {
newRoot = ""
}
f.root = newRoot
return f, fs.ErrorIsFile
}
return f, nil
}
// Name returns the configured name of the file system
func (f *Fs) Name() string {
return f.name
}
// Root returns the root for the filesystem
func (f *Fs) Root() string {
return f.root
}
// String returns the URL for the filesystem
func (f *Fs) String() string {
return fmt.Sprintf("DOI %s", f.opt.Doi)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision is the remote http file system's modtime precision, which we have no way of knowing. We estimate at 1s
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Hashes returns hash.HashNone to indicate remote hashing is unavailable
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
// return hash.Set(hash.None)
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// Remove a remote http file object
func (o *Object) Remove(ctx context.Context) error {
return errorReadOnly
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errorReadOnly
}
// NewObject finds the remote file object at the given path
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
entries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, err
}
remoteFullPath := remote
if f.root != "" {
remoteFullPath = path.Join(f.root, remote)
}
for _, entry := range entries {
if entry.Remote() == remoteFullPath {
return entry, nil
}
}
return nil, fs.ErrorObjectNotFound
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
fileEntries, err := f.doiProvider.ListEntries(ctx)
if err != nil {
return nil, fmt.Errorf("error listing %q: %w", dir, err)
}
fullDir := path.Join(f.root, dir)
if fullDir != "" {
fullDir += "/"
}
dirPaths := map[string]bool{}
for _, entry := range fileEntries {
// First, filter out files not in `fullDir`
if !strings.HasPrefix(entry.remote, fullDir) {
continue
}
// Then, find entries in subfolders
remotePath := entry.remote
if fullDir != "" {
remotePath = strings.TrimLeft(strings.TrimPrefix(remotePath, fullDir), "/")
}
parts := strings.SplitN(remotePath, "/", 2)
if len(parts) == 1 {
newEntry := *entry
newEntry.remote = path.Join(dir, remotePath)
entries = append(entries, &newEntry)
} else {
dirPaths[path.Join(dir, parts[0])] = true
}
}
for dirPath := range dirPaths {
entry := fs.NewDir(dirPath, time.Time{})
entries = append(entries, entry)
}
return entries, nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return nil, errorReadOnly
}
// Fs is the filesystem this remote http file object is located within
func (o *Object) Fs() fs.Info {
return o.fs
}
// String returns the remote path of the file
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the name of the remote file, relative to the fs root
func (o *Object) Remote() string {
return o.remote
}
// Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
return o.md5, nil
}
// Size returns the size in bytes of the remote http file
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the remote http file
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification and access time to the specified time
//
// it also updates the info field
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return errorReadOnly
}
// Storable returns whether the remote http file is a regular file (not a directory, symbolic link, block device, character device, named pipe, etc.)
func (o *Object) Storable() bool {
return true
}
// Open a remote http file object for reading. Seek is supported
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
opts := rest.Opts{
Method: "GET",
RootURL: o.contentURL,
Options: options,
}
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
// Handle non-compliant redirects
if res.Header.Get("Location") != "" {
newURL, err := res.Location()
if err == nil {
opts.RootURL = newURL.String()
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
}
}
return res.Body, nil
}
// Update in to the object with the modTime given of the given size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
return errorReadOnly
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}
var commandHelp = []fs.CommandHelp{{
Name: "metadata",
Short: "Show metadata about the DOI.",
Long: `This command returns a JSON object with some information about the DOI.
Usage example:
` + "```console" + `
rclone backend metadata doi:
` + "```" + `
It returns a JSON object representing metadata about the DOI.`,
}, {
Name: "set",
Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters
for a running doi backend.
Usage examples:
` + "```console" + `
rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
` + "```" + `
The option keys are named as they are in the config file.
This rebuilds the connection to the doi backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.`,
}}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {
switch name {
case "metadata":
return f.ShowMetadata(ctx)
case "set":
newOpt := f.opt
err := configstruct.Set(configmap.Simple(opt), &newOpt)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
_, err = f.httpConnection(ctx, &newOpt)
if err != nil {
return nil, fmt.Errorf("updating session: %w", err)
}
f.opt = newOpt
keys := []string{}
for k := range opt {
keys = append(keys, k)
}
fs.Logf(f, "Updated config values: %s", strings.Join(keys, ", "))
return nil, nil
default:
return nil, fs.ErrorCommandNotFound
}
}
// ShowMetadata returns some metadata about the corresponding DOI
func (f *Fs) ShowMetadata(ctx context.Context) (metadata any, err error) {
doiURL, err := url.Parse("https://doi.org/" + f.opt.Doi)
if err != nil {
return nil, err
}
info := map[string]any{}
info["DOI"] = f.opt.Doi
info["URL"] = doiURL.String()
info["metadataURL"] = f.endpointURL
info["provider"] = f.provider
return info, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Commander = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)
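A minimal way to exercise the backend (the remote name and DOI below are placeholders, not taken from this change):

[mydoi]
type = doi
doi = 10.5281/zenodo.1234567

rclone lsl mydoi:
rclone backend metadata mydoi: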

View File

@@ -0,0 +1,260 @@
package doi
import (
"context"
"crypto/md5"
"encoding/hex"
"encoding/json"
"io"
"net/http"
"net/http/httptest"
"net/url"
"sort"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/hash"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var remoteName = "TestDoi"
func TestParseDoi(t *testing.T) {
// 10.1000/182 -> 10.1000/182
doi := "10.1000/182"
parsed := parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://doi.org/10.1000/182 -> 10.1000/182
doi = "https://doi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// https://dx.doi.org/10.1000/182 -> 10.1000/182
doi = "https://dxdoi.org/10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi:10.1000/182 -> 10.1000/182
doi = "doi:10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
// doi://10.1000/182 -> 10.1000/182
doi = "doi://10.1000/182"
parsed = parseDoi(doi)
assert.Equal(t, "10.1000/182", parsed)
}
// prepareMockDoiResolverServer prepares a test server to resolve DOIs
func prepareMockDoiResolverServer(t *testing.T, resolvedURL string) (doiResolverAPIURL string) {
mux := http.NewServeMux()
// Handle requests for resolving DOIs
mux.HandleFunc("GET /api/handles/{handle...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are resolving a DOI
handle := strings.TrimPrefix(r.URL.Path, "/api/handles/")
assert.NotEmpty(t, handle)
index := r.URL.Query().Get("index")
assert.Equal(t, "1", index)
// Return the most basic response
result := api.DoiResolverResponse{
ResponseCode: 1,
Handle: handle,
Values: []api.DoiResolverResponseValue{
{
Index: 1,
Type: "URL",
Data: api.DoiResolverResponseValueData{
Format: "string",
Value: resolvedURL,
},
},
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts.URL + "/api"
}
func md5Sum(text string) string {
hash := md5.Sum([]byte(text))
return hex.EncodeToString(hash[:])
}
// prepareMockZenodoServer prepares a test server that mocks Zenodo.org
func prepareMockZenodoServer(t *testing.T, files map[string]string) *httptest.Server {
mux := http.NewServeMux()
// Handle requests for a single record
mux.HandleFunc("GET /api/records/{recordID...}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning data about a single record
recordID := strings.TrimPrefix(r.URL.Path, "/api/records/")
assert.NotEmpty(t, recordID)
// Return the most basic response
selfURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
selfURL = selfURL.JoinPath(r.URL.String())
result := api.InvenioRecordResponse{
Links: api.InvenioRecordResponseLinks{
Self: selfURL.String(),
},
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for listing files in a record
mux.HandleFunc("GET /api/records/{record}/files", func(w http.ResponseWriter, r *http.Request) {
// Return the most basic response
filesBaseURL, err := url.Parse("http://" + r.Host)
require.NoError(t, err)
filesBaseURL = filesBaseURL.JoinPath("/api/files/")
entries := []api.InvenioFilesResponseEntry{}
for filename, contents := range files {
entries = append(entries,
api.InvenioFilesResponseEntry{
Key: filename,
Checksum: md5Sum(contents),
Size: int64(len(contents)),
Updated: time.Now().UTC().Format(time.RFC3339),
MimeType: "text/plain; charset=utf-8",
Links: api.InvenioFilesResponseEntryLinks{
Content: filesBaseURL.JoinPath(filename).String(),
},
},
)
}
result := api.InvenioFilesResponse{
Entries: entries,
}
resultBytes, err := json.Marshal(result)
require.NoError(t, err)
w.Header().Add("Content-Type", "application/json")
_, err = w.Write(resultBytes)
require.NoError(t, err)
})
// Handle requests for file contents
mux.HandleFunc("/api/files/{file}", func(w http.ResponseWriter, r *http.Request) {
// Check that we are returning the contents of a file
filename := strings.TrimPrefix(r.URL.Path, "/api/files/")
assert.NotEmpty(t, filename)
contents, found := files[filename]
if !found {
w.WriteHeader(404)
return
}
// Return the most basic response
_, err := w.Write([]byte(contents))
require.NoError(t, err)
})
// Make the test server
ts := httptest.NewServer(mux)
// Close the server at the end of the test
t.Cleanup(ts.Close)
return ts
}
func TestZenodoRemote(t *testing.T) {
recordID := "2600782"
doi := "10.5281/zenodo.2600782"
// The files in the dataset
files := map[string]string{
"README.md": "This is a dataset.",
"data.txt": "Some data",
}
ts := prepareMockZenodoServer(t, files)
resolvedURL := ts.URL + "/record/" + recordID
doiResolverAPIURL := prepareMockDoiResolverServer(t, resolvedURL)
testConfig := configmap.Simple{
"type": "doi",
"doi": doi,
"provider": "zenodo",
"doi_resolver_api_url": doiResolverAPIURL,
}
f, err := NewFs(context.Background(), remoteName, "", testConfig)
require.NoError(t, err)
// Test listing the DOI files
entries, err := f.List(context.Background(), "")
require.NoError(t, err)
sort.Sort(entries)
require.Equal(t, len(files), len(entries))
e := entries[0]
assert.Equal(t, "README.md", e.Remote())
assert.Equal(t, int64(18), e.Size())
_, ok := e.(*Object)
assert.True(t, ok)
e = entries[1]
assert.Equal(t, "data.txt", e.Remote())
assert.Equal(t, int64(9), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
// Test reading the DOI files
o, err := f.NewObject(context.Background(), "README.md")
require.NoError(t, err)
assert.Equal(t, int64(18), o.Size())
md5Hash, err := o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "464352b1cab5240e44528a56fda33d9d", md5Hash)
fd, err := o.Open(context.Background())
require.NoError(t, err)
data, err := io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["README.md"]), data)
do, ok := o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
o, err = f.NewObject(context.Background(), "data.txt")
require.NoError(t, err)
assert.Equal(t, int64(9), o.Size())
md5Hash, err = o.Hash(context.Background(), hash.MD5)
require.NoError(t, err)
assert.Equal(t, "5b82f8bf4df2bfb0e66ccaa7306fd024", md5Hash)
fd, err = o.Open(context.Background())
require.NoError(t, err)
data, err = io.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, []byte(files["data.txt"]), data)
do, ok = o.(fs.MimeTyper)
require.True(t, ok)
assert.Equal(t, "text/plain; charset=utf-8", do.MimeType(context.Background()))
}

backend/doi/doi_test.go Normal file

@@ -0,0 +1,16 @@
// Test DOI filesystem interface
package doi
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDoi:",
NilObject: (*Object)(nil),
})
}

backend/doi/invenio.go Normal file

@@ -0,0 +1,164 @@
// Implementation for InvenioRDM
package doi
import (
"context"
"fmt"
"net/http"
"net/url"
"regexp"
"strings"
"time"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var invenioRecordRegex = regexp.MustCompile(`\/records?\/(.+)`)
// Returns true if resolvedURL is likely a DOI hosted on an InvenioRDM installation
func activateInvenio(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (isActive bool) {
_, _, err := resolveInvenioEndpoint(ctx, srv, pacer, resolvedURL)
return err == nil
}
// Resolve the main API endpoint for a DOI hosted on an InvenioRDM installation
func resolveInvenioEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (provider Provider, endpoint *url.URL, err error) {
var res *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err = srv.Call(ctx, &opts)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
// First, attempt to grab the API URL from the headers
var linksetURL *url.URL
links := parseLinkHeader(res.Header.Get("Link"))
for _, link := range links {
if link.Rel == "linkset" && link.Type == "application/linkset+json" {
parsed, err := url.Parse(link.Href)
if err == nil {
linksetURL = parsed
break
}
}
}
if linksetURL != nil {
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, linksetURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "using linkset URL failed: %s", err.Error())
}
// If there is no linkset header, try to grab the record ID from the URL
recordID := ""
resURL := res.Request.URL
match := invenioRecordRegex.FindStringSubmatch(resURL.EscapedPath())
if match != nil {
recordID = match[1]
guessedURL := res.Request.URL.ResolveReference(&url.URL{
Path: "/api/records/" + recordID,
})
endpoint, err = checkInvenioAPIURL(ctx, srv, pacer, guessedURL)
if err == nil {
return Invenio, endpoint, nil
}
fs.Logf(nil, "guessing the URL failed: %s", err.Error())
}
return "", nil, fmt.Errorf("could not resolve the Invenio API endpoint for '%s'", resolvedURL.String())
}
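To make the two resolution strategies above concrete, here is a worked trace of the linkset path. The header value is the one used in the parseLinkHeader test later in this change; everything else follows directly from the code above.

```go
// Illustrative trace of the first strategy in resolveInvenioEndpoint:
//
//   Link: <https://zenodo.org/api/records/15063252> ; rel="linkset" ; type="application/linkset+json"
//
// parseLinkHeader yields a headerLink with Rel "linkset" and Type
// "application/linkset+json", so linksetURL becomes
// https://zenodo.org/api/records/15063252. checkInvenioAPIURL then fetches it
// and confirms the endpoint by reading links.self from the record response.
// Only if that fails does the code fall back to guessing /api/records/<id>
// from the resolved URL path.
```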
func checkInvenioAPIURL(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL) (endpoint *url.URL, err error) {
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: resolvedURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
if result.Links.Self == "" {
return nil, fmt.Errorf("could not parse API response from '%s'", resolvedURL.String())
}
return url.Parse(result.Links.Self)
}
// invenioProvider implements the doiProvider interface for InvenioRDM installations
type invenioProvider struct {
f *Fs
}
// ListEntries returns the full list of entries found at the remote, regardless of root
func (ip *invenioProvider) ListEntries(ctx context.Context) (entries []*Object, err error) {
// Use the cache if populated
cachedEntries, found := ip.f.cache.GetMaybe("files")
if found {
parsedEntries, ok := cachedEntries.([]Object)
if ok {
for _, entry := range parsedEntries {
newEntry := entry
entries = append(entries, &newEntry)
}
return entries, nil
}
}
filesURL := ip.f.endpoint.JoinPath("files")
var result api.InvenioFilesResponse
opts := rest.Opts{
Method: "GET",
Path: strings.TrimLeft(filesURL.EscapedPath(), "/"),
}
err = ip.f.pacer.Call(func() (bool, error) {
res, err := ip.f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("readDir failed: %w", err)
}
for _, file := range result.Entries {
modTime, modTimeErr := time.Parse(time.RFC3339, file.Updated)
if modTimeErr != nil {
fs.Logf(ip.f, "error: could not parse last update time %v", modTimeErr)
modTime = timeUnset
}
entry := &Object{
fs: ip.f,
remote: file.Key,
contentURL: file.Links.Content,
size: file.Size,
modTime: modTime,
contentType: file.MimeType,
md5: strings.TrimPrefix(file.Checksum, "md5:"),
}
entries = append(entries, entry)
}
// Populate the cache
cacheEntries := []Object{}
for _, entry := range entries {
cacheEntries = append(cacheEntries, *entry)
}
ip.f.cache.Put("files", cacheEntries)
return entries, nil
}
func newInvenioProvider(f *Fs) doiProvider {
return &invenioProvider{
f: f,
}
}


@@ -0,0 +1,75 @@
package doi
import (
"regexp"
"strings"
)
var linkRegex = regexp.MustCompile(`^<(.+)>$`)
var valueRegex = regexp.MustCompile(`^"(.+)"$`)
// headerLink represents a link as presented in HTTP headers
// MDN Reference: https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Link
type headerLink struct {
Href string
Rel string
Type string
Extras map[string]string
}
func parseLinkHeader(header string) (links []headerLink) {
for link := range strings.SplitSeq(header, ",") {
link = strings.TrimSpace(link)
parsed := parseLink(link)
if parsed != nil {
links = append(links, *parsed)
}
}
return links
}
func parseLink(link string) (parsedLink *headerLink) {
var parts []string
for part := range strings.SplitSeq(link, ";") {
parts = append(parts, strings.TrimSpace(part))
}
match := linkRegex.FindStringSubmatch(parts[0])
if match == nil {
return nil
}
result := &headerLink{
Href: match[1],
Extras: map[string]string{},
}
for _, keyValue := range parts[1:] {
parsed := parseKeyValue(keyValue)
if parsed != nil {
key, value := parsed[0], parsed[1]
switch strings.ToLower(key) {
case "rel":
result.Rel = value
case "type":
result.Type = value
default:
result.Extras[key] = value
}
}
}
return result
}
func parseKeyValue(keyValue string) []string {
parts := strings.SplitN(keyValue, "=", 2)
if parts[0] == "" || len(parts) < 2 {
return nil
}
match := valueRegex.FindStringSubmatch(parts[1])
if match != nil {
parts[1] = match[1]
return parts
}
return parts
}


@@ -0,0 +1,44 @@
package doi
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestParseLinkHeader(t *testing.T) {
header := "<https://zenodo.org/api/records/15063252> ; rel=\"linkset\" ; type=\"application/linkset+json\""
links := parseLinkHeader(header)
expected := headerLink{
Href: "https://zenodo.org/api/records/15063252",
Rel: "linkset",
Type: "application/linkset+json",
Extras: map[string]string{},
}
assert.Contains(t, links, expected)
header = "<https://api.example.com/issues?page=2>; rel=\"prev\", <https://api.example.com/issues?page=4>; rel=\"next\", <https://api.example.com/issues?page=10>; rel=\"last\", <https://api.example.com/issues?page=1>; rel=\"first\""
links = parseLinkHeader(header)
expectedList := []headerLink{{
Href: "https://api.example.com/issues?page=2",
Rel: "prev",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=4",
Rel: "next",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=10",
Rel: "last",
Type: "",
Extras: map[string]string{},
}, {
Href: "https://api.example.com/issues?page=1",
Rel: "first",
Type: "",
Extras: map[string]string{},
}}
assert.Equal(t, links, expectedList)
}

backend/doi/zenodo.go Normal file

@@ -0,0 +1,47 @@
// Implementation for Zenodo
package doi
import (
"context"
"fmt"
"net/url"
"regexp"
"github.com/rclone/rclone/backend/doi/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
var zenodoRecordRegex = regexp.MustCompile(`zenodo[.](.+)`)
// Resolve the main API endpoint for a DOI hosted on Zenodo
func resolveZenodoEndpoint(ctx context.Context, srv *rest.Client, pacer *fs.Pacer, resolvedURL *url.URL, doi string) (provider Provider, endpoint *url.URL, err error) {
match := zenodoRecordRegex.FindStringSubmatch(doi)
if match == nil {
return "", nil, fmt.Errorf("could not derive API endpoint URL from '%s'", resolvedURL.String())
}
recordID := match[1]
endpointURL := resolvedURL.ResolveReference(&url.URL{Path: "/api/records/" + recordID})
var result api.InvenioRecordResponse
opts := rest.Opts{
Method: "GET",
RootURL: endpointURL.String(),
}
err = pacer.Call(func() (bool, error) {
res, err := srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, res, err)
})
if err != nil {
return "", nil, err
}
endpointURL, err = url.Parse(result.Links.Self)
if err != nil {
return "", nil, err
}
return Zenodo, endpointURL, nil
}
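A worked trace of the derivation above, using the DOI from the Zenodo test elsewhere in this change:

```go
// Illustrative trace only:
//
//   doi      = "10.5281/zenodo.2600782"
//   match    = zenodoRecordRegex.FindStringSubmatch(doi) // ["zenodo.2600782", "2600782"]
//   recordID = "2600782"
//   guess    = resolvedURL.ResolveReference(&url.URL{Path: "/api/records/2600782"})
//
// The guessed URL is then fetched and the final endpoint is taken from
// result.Links.Self, so any redirect or alternate API host is honoured.
```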


@@ -191,7 +191,7 @@ func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
scopesString = defaultScope
}
for _, scope := range strings.Split(scopesString, ",") {
for scope := range strings.SplitSeq(scopesString, ",") {
scope = strings.TrimSpace(scope)
scopes = append(scopes, scopePrefix+scope)
}
@@ -1220,7 +1220,7 @@ func isLinkMimeType(mimeType string) bool {
// into a list of unique extensions with leading "." and a list of associated MIME types
func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, err error) {
for _, extensionText := range extensionsIn {
for _, extension := range strings.Split(extensionText, ",") {
for extension := range strings.SplitSeq(extensionText, ",") {
extension = strings.ToLower(strings.TrimSpace(extension))
if extension == "" {
continue
@@ -1965,9 +1965,28 @@ func (f *Fs) findImportFormat(ctx context.Context, mimeType string) string {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
entriesAdded := 0
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
return err
}
directoryID = actualID(directoryID)
@@ -1979,25 +1998,30 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return true
}
if entry != nil {
entries = append(entries, entry)
err = list.Add(entry)
if err != nil {
iErr = err
return true
}
entriesAdded++
}
return false
})
if err != nil {
return nil, err
return err
}
if iErr != nil {
return nil, iErr
return iErr
}
// If listing the root of a teamdrive and got no entries,
// double check we have access
if f.isTeamDrive && len(entries) == 0 && f.root == "" && dir == "" {
if f.isTeamDrive && entriesAdded == 0 && f.root == "" && dir == "" {
err = f.teamDriveOK(ctx)
if err != nil {
return nil, err
return err
}
}
return entries, nil
return list.Flush()
}
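The same List-to-ListP conversion appears again in the dropbox change further down. Here is a condensed sketch of the shape, using the fs/list helper exactly as this diff does; the receiver type is a stand-in and the actual fetching of entries is elided.

```go
// Sketch only: the minimal shape of a ListP built on list.NewHelper, assuming
// the fs and fs/list imports used in this change.
type sketchFs struct{}

func (f *sketchFs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
	helper := list.NewHelper(callback)
	var entries fs.DirEntries // in the real code these arrive in pages from the backend API
	for _, entry := range entries {
		if err := helper.Add(entry); err != nil {
			return err
		}
	}
	return helper.Flush()
}
```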
// listREntry is a task to be executed by a listRRunner
@@ -3640,41 +3664,47 @@ func (f *Fs) rescue(ctx context.Context, dirID string, delete bool) (err error)
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
Long: `This is a get command which will be used to fetch the various drive config parameters
Short: "Get command for fetching the drive config parameters.",
Long: `This is a get command which will be used to fetch the various drive config
parameters.
Usage Examples:
Usage examples:
rclone backend get drive: [-o service_account_file] [-o chunk_size]
rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]
`,
` + "```console" + `
rclone backend get drive: [-o service_account_file] [-o chunk_size]
rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]
` + "```",
Opts: map[string]string{
"chunk_size": "show the current upload chunk size",
"service_account_file": "show the current service account file",
"chunk_size": "Show the current upload chunk size.",
"service_account_file": "Show the current service account file.",
},
}, {
Name: "set",
Short: "Set command for updating the drive config parameters",
Long: `This is a set command which will be used to update the various drive config parameters
Short: "Set command for updating the drive config parameters.",
Long: `This is a set command which will be used to update the various drive config
parameters.
Usage Examples:
Usage examples:
rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
`,
` + "```console" + `
rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
` + "```",
Opts: map[string]string{
"chunk_size": "update the current upload chunk size",
"service_account_file": "update the current service account file",
"chunk_size": "Update the current upload chunk size.",
"service_account_file": "Update the current service account file.",
},
}, {
Name: "shortcut",
Short: "Create shortcuts from files or directories",
Short: "Create shortcuts from files or directories.",
Long: `This command creates shortcuts from files or directories.
Usage:
Usage examples:
rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
` + "```console" + `
rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
` + "```" + `
In the first example this creates a shortcut from the "source_item"
which can be a file or a directory to the "destination_shortcut". The
@@ -3684,90 +3714,100 @@ from "drive:"
In the second example this creates a shortcut from the "source_item"
relative to "drive:" to the "destination_shortcut" relative to
"drive2:". This may fail with a permission error if the user
authenticated with "drive2:" can't read files from "drive:".
`,
authenticated with "drive2:" can't read files from "drive:".`,
Opts: map[string]string{
"target": "optional target remote for the shortcut destination",
"target": "Optional target remote for the shortcut destination.",
},
}, {
Name: "drives",
Short: "List the Shared Drives available to this account",
Short: "List the Shared Drives available to this account.",
Long: `This command lists the Shared Drives (Team Drives) available to this
account.
Usage:
Usage example:
rclone backend [-o config] drives drive:
` + "```console" + `
rclone backend [-o config] drives drive:
` + "```" + `
This will return a JSON list of objects like this
This will return a JSON list of objects like this:
[
{
"id": "0ABCDEF-01234567890",
"kind": "drive#teamDrive",
"name": "My Drive"
},
{
"id": "0ABCDEFabcdefghijkl",
"kind": "drive#teamDrive",
"name": "Test Drive"
}
]
` + "```json" + `
[
{
"id": "0ABCDEF-01234567890",
"kind": "drive#teamDrive",
"name": "My Drive"
},
{
"id": "0ABCDEFabcdefghijkl",
"kind": "drive#teamDrive",
"name": "Test Drive"
}
]
` + "```" + `
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
drives found and a combined drive.
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
` + "```ini" + `
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
` + "```" + `
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
`,
drives combined into one directory tree.`,
}, {
Name: "untrash",
Short: "Untrash files and directories",
Short: "Untrash files and directories.",
Long: `This command untrashes all the files and directories in the directory
passed in recursively.
Usage:
Usage example:
` + "```console" + `
rclone backend untrash drive:directory
rclone backend --interactive untrash drive:directory subdir
` + "```" + `
This takes an optional directory to trash which makes this easier to
use via the API.
rclone backend untrash drive:directory
rclone backend --interactive untrash drive:directory subdir
Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Use the --interactive/-i or --dry-run flag to see what would be restored before
restoring it.
Result:
{
"Untrashed": 17,
"Errors": 0
}
`,
` + "```json" + `
{
"Untrashed": 17,
"Errors": 0
}
` + "```",
}, {
Name: "copyid",
Short: "Copy files by ID",
Long: `This command copies files by ID
Short: "Copy files by ID.",
Long: `This command copies files by ID.
Usage:
Usage examples:
rclone backend copyid drive: ID path
rclone backend copyid drive: ID1 path1 ID2 path2
` + "```console" + `
rclone backend copyid drive: ID path
rclone backend copyid drive: ID1 path1 ID2 path2
` + "```" + `
It copies the drive file with ID given to the path (an rclone path which
will be passed internally to rclone copyto). The ID and path pairs can be
@@ -3780,17 +3820,19 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
`,
Use the --interactive/-i or --dry-run flag to see what would be copied before
copying.`,
}, {
Name: "moveid",
Short: "Move files by ID",
Long: `This command moves files by ID
Short: "Move files by ID.",
Long: `This command moves files by ID.
Usage:
Usage examples:
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
` + "```console" + `
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
` + "```" + `
It moves the drive file with ID given to the path (an rclone path which
will be passed internally to rclone moveto).
@@ -3802,58 +3844,65 @@ component will be used as the file name.
If the destination is a drive backend then server-side moving will be
attempted if possible.
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
`,
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.`,
}, {
Name: "exportformats",
Short: "Dump the export formats for debug purposes",
Short: "Dump the export formats for debug purposes.",
}, {
Name: "importformats",
Short: "Dump the import formats for debug purposes",
Short: "Dump the import formats for debug purposes.",
}, {
Name: "query",
Short: "List files using Google Drive query language",
Long: `This command lists files based on a query
Short: "List files using Google Drive query language.",
Long: `This command lists files based on a query.
Usage:
Usage example:
` + "```console" + `
rclone backend query drive: query
` + "```" + `
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
` + "```console" + `
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
` + "```" + `
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
` + "```console" + `
rclone backend query drive: "name = 'foo \' \\\.txt'"
` + "```" + `
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
` + "```json" + `
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]
` + "```",
}, {
Name: "rescue",
Short: "Rescue or delete any orphaned files",
Short: "Rescue or delete any orphaned files.",
Long: `This command rescues or deletes any orphaned files or directories.
Sometimes files can get orphaned in Google Drive. This means that they
@@ -3862,26 +3911,31 @@ are no longer in any folder in Google Drive.
This command finds those files and either rescues them to a directory
you specify or deletes them.
Usage:
This can be used in 3 ways.
First, list all orphaned files
First, list all orphaned files:
rclone backend rescue drive:
` + "```console" + `
rclone backend rescue drive:
` + "```" + `
Second rescue all orphaned files to the directory indicated
Second rescue all orphaned files to the directory indicated:
rclone backend rescue drive: "relative/path/to/rescue/directory"
` + "```console" + `
rclone backend rescue drive: "relative/path/to/rescue/directory"
` + "```" + `
e.g. To rescue all orphans to a directory called "Orphans" in the top level
E.g. to rescue all orphans to a directory called "Orphans" in the top level:
rclone backend rescue drive: Orphans
` + "```console" + `
rclone backend rescue drive: Orphans
` + "```" + `
Third delete all orphaned files to the trash
Third delete all orphaned files to the trash:
rclone backend rescue drive: -o delete
`,
` + "```console" + `
rclone backend rescue drive: -o delete
` + "```",
}}
// Command the backend to run a named command
@@ -4617,6 +4671,7 @@ var (
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)


@@ -386,7 +386,6 @@ func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err e
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
permissionID := permissionID
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
@@ -520,7 +519,6 @@ func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs
}
// merge metadata into request and user metadata
for k, v := range meta {
k, v := k, v
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)


@@ -47,6 +47,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
@@ -834,7 +835,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// listSharedFolders lists all available shared folders mounted and not mounted
// we'll need the id later so we have to return them in original format
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
func (f *Fs) listSharedFolders(ctx context.Context, callback func(fs.DirEntry) error) (err error) {
started := false
var res *sharing.ListFoldersResult
for {
@@ -847,7 +848,7 @@ func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
return err
}
started = true
} else {
@@ -859,15 +860,15 @@ func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("list continue: %w", err)
return fmt.Errorf("list continue: %w", err)
}
}
for _, entry := range res.Entries {
leaf := f.opt.Enc.ToStandardName(entry.Name)
d := fs.NewDir(leaf, time.Time{}).SetID(entry.SharedFolderId)
entries = append(entries, d)
err = callback(d)
if err != nil {
return nil, err
return err
}
}
if res.Cursor == "" {
@@ -875,21 +876,25 @@ func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err
}
}
return entries, nil
return nil
}
// findSharedFolder finds the id for a given shared folder name
// somewhat annoyingly there is no endpoint to query a shared folder by its name
// so our only option is to iterate over all shared folders
func (f *Fs) findSharedFolder(ctx context.Context, name string) (id string, err error) {
entries, err := f.listSharedFolders(ctx)
if err != nil {
return "", err
}
for _, entry := range entries {
errFoundFile := errors.New("found file")
err = f.listSharedFolders(ctx, func(entry fs.DirEntry) error {
if entry.(*fs.Dir).Remote() == name {
return entry.(*fs.Dir).ID(), nil
id = entry.(*fs.Dir).ID()
return errFoundFile
}
return nil
})
if errors.Is(err, errFoundFile) {
return id, nil
} else if err != nil {
return "", err
}
return "", fs.ErrorDirNotFound
}
@@ -908,7 +913,7 @@ func (f *Fs) mountSharedFolder(ctx context.Context, id string) error {
// listReceivedFiles lists shared files the user has access to (note this means
// individual files, not files contained in shared folders)
func (f *Fs) listReceivedFiles(ctx context.Context) (entries fs.DirEntries, err error) {
func (f *Fs) listReceivedFiles(ctx context.Context, callback func(fs.DirEntry) error) (err error) {
started := false
var res *sharing.ListFilesResult
for {
@@ -921,7 +926,7 @@ func (f *Fs) listReceivedFiles(ctx context.Context) (entries fs.DirEntries, err
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
return err
}
started = true
} else {
@@ -933,7 +938,7 @@ func (f *Fs) listReceivedFiles(ctx context.Context) (entries fs.DirEntries, err
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("list continue: %w", err)
return fmt.Errorf("list continue: %w", err)
}
}
for _, entry := range res.Entries {
@@ -946,26 +951,33 @@ func (f *Fs) listReceivedFiles(ctx context.Context) (entries fs.DirEntries, err
modTime: *entry.TimeInvited,
}
if err != nil {
return nil, err
return err
}
err = callback(o)
if err != nil {
return err
}
entries = append(entries, o)
}
if res.Cursor == "" {
break
}
}
return entries, nil
return nil
}
func (f *Fs) findSharedFile(ctx context.Context, name string) (o *Object, err error) {
files, err := f.listReceivedFiles(ctx)
if err != nil {
return nil, err
}
for _, entry := range files {
errFoundFile := errors.New("found file")
err = f.listReceivedFiles(ctx, func(entry fs.DirEntry) error {
if entry.(*Object).remote == name {
return entry.(*Object), nil
o = entry.(*Object)
return errFoundFile
}
return nil
})
if errors.Is(err, errFoundFile) {
return o, nil
} else if err != nil {
return nil, err
}
return nil, fs.ErrorObjectNotFound
}
@@ -980,11 +992,37 @@ func (f *Fs) findSharedFile(ctx context.Context, name string) (o *Object, err er
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
list := list.NewHelper(callback)
if f.opt.SharedFiles {
return f.listReceivedFiles(ctx)
err := f.listReceivedFiles(ctx, list.Add)
if err != nil {
return err
}
return list.Flush()
}
if f.opt.SharedFolders {
return f.listSharedFolders(ctx)
err := f.listSharedFolders(ctx, list.Add)
if err != nil {
return err
}
return list.Flush()
}
root := f.slashRoot
@@ -1014,7 +1052,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
err = fs.ErrorDirNotFound
}
}
return nil, err
return err
}
started = true
} else {
@@ -1026,7 +1064,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("list continue: %w", err)
return fmt.Errorf("list continue: %w", err)
}
}
for _, entry := range res.Entries {
@@ -1051,14 +1089,20 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
remote := path.Join(dir, leaf)
if folderInfo != nil {
d := fs.NewDir(remote, time.Time{}).SetID(folderInfo.Id)
entries = append(entries, d)
err = list.Add(d)
if err != nil {
return err
}
} else if fileInfo != nil {
o, err := f.newObjectWithInfo(ctx, remote, fileInfo)
if err != nil {
return nil, err
return err
}
if o.(*Object).exportType.listable() {
entries = append(entries, o)
err = list.Add(o)
if err != nil {
return err
}
}
}
}
@@ -1066,7 +1110,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
break
}
}
return entries, nil
return list.Flush()
}
// Put the object
@@ -1286,6 +1330,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.MoveV2(&arg)
switch e := err.(type) {
case files.MoveV2APIError:
// There seems to be a bit of eventual consistency here which causes this to
// fail on just created objects
// See: https://github.com/rclone/rclone/issues/8881
if e.EndpointError != nil && e.EndpointError.FromLookup != nil && e.EndpointError.FromLookup.Tag == files.LookupErrorNotFound {
fs.Debugf(srcObj, "Retrying move on %v error", err)
return true, err
}
}
return shouldRetry(ctx, err)
})
if err != nil {
@@ -1446,9 +1500,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
}
usage = &fs.Usage{
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(used)), // bytes in use
Free: fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(total), // quota of bytes that can be used
Used: fs.NewUsageValue(used), // bytes in use
Free: fs.NewUsageValue(total - used), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
@@ -2087,6 +2141,7 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = &Fs{}
_ fs.Object = (*Object)(nil)


@@ -0,0 +1,81 @@
// Package api defines types for interacting with the FileLu API.
package api
import "encoding/json"
// CreateFolderResponse represents the response for creating a folder.
type CreateFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
FldID any `json:"fld_id"`
} `json:"result"`
}
// DeleteFolderResponse represents the response for deleting a folder.
type DeleteFolderResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
}
// FolderListResponse represents the response for listing folders.
type FolderListResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
Files []struct {
Name string `json:"name"`
FldID json.Number `json:"fld_id"`
Path string `json:"path"`
FileCode string `json:"file_code"`
Size int64 `json:"size"`
} `json:"files"`
Folders []struct {
Name string `json:"name"`
FldID json.Number `json:"fld_id"`
Path string `json:"path"`
} `json:"folders"`
} `json:"result"`
}
// FileDirectLinkResponse represents the response for a direct link to a file.
type FileDirectLinkResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result struct {
URL string `json:"url"`
Size int64 `json:"size"`
} `json:"result"`
}
// FileInfoResponse represents the response for file information.
type FileInfoResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result []struct {
Size string `json:"size"`
Name string `json:"name"`
FileCode string `json:"filecode"`
Hash string `json:"hash"`
Status int `json:"status"`
} `json:"result"`
}
// DeleteFileResponse represents the response for deleting a file.
type DeleteFileResponse struct {
Status int `json:"status"`
Msg string `json:"msg"`
}
// AccountInfoResponse represents the response for account information.
type AccountInfoResponse struct {
Status int `json:"status"` // HTTP status code of the response.
Msg string `json:"msg"` // Message describing the response.
Result struct {
PremiumExpire string `json:"premium_expire"` // Expiration date of premium access.
Email string `json:"email"` // User's email address.
UType string `json:"utype"` // User type (e.g., premium or free).
Storage string `json:"storage"` // Total storage available to the user.
StorageUsed string `json:"storage_used"` // Amount of storage used.
} `json:"result"` // Nested result structure containing account details.
}
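The struct tags above imply a response shape like the one below. The values are placeholders rather than captured FileLu output; the snippet only shows that FolderListResponse decodes such a document.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/filelu/api"
)

// sampleFolderList is illustrative JSON matching the struct tags above.
const sampleFolderList = `{
  "status": 200,
  "msg": "OK",
  "result": {
    "files": [
      {"name": "notes.txt", "fld_id": 12345, "path": "/demo", "file_code": "abc123", "size": 42}
    ],
    "folders": [
      {"name": "demo", "fld_id": 12345, "path": "/demo"}
    ]
  }
}`

func main() {
	var resp api.FolderListResponse
	if err := json.Unmarshal([]byte(sampleFolderList), &resp); err != nil {
		panic(err)
	}
	fmt.Println(resp.Status, resp.Result.Folders[0].Path, resp.Result.Files[0].Size)
}
```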

backend/filelu/filelu.go Normal file

@@ -0,0 +1,366 @@
// Package filelu provides an interface to the FileLu storage system.
package filelu
import (
"context"
"fmt"
"io"
"net/http"
"os"
"path"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
// Register the backend with Rclone
func init() {
fs.Register(&fs.RegInfo{
Name: "filelu",
Description: "FileLu Cloud Storage",
NewFs: NewFs,
Options: []fs.Option{{
Name: "key",
Help: "Your FileLu Rclone key from My Account",
Required: true,
Sensitive: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base | // Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
encoder.EncodeSlash |
encoder.EncodeLtGt |
encoder.EncodeExclamation |
encoder.EncodeDoubleQuote |
encoder.EncodeSingleQuote |
encoder.EncodeBackQuote |
encoder.EncodeQuestion |
encoder.EncodeDollar |
encoder.EncodeColon |
encoder.EncodeAsterisk |
encoder.EncodePipe |
encoder.EncodeHash |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeCrLf |
encoder.EncodeDel |
encoder.EncodeCtl |
encoder.EncodeLeftSpace |
encoder.EncodeLeftPeriod |
encoder.EncodeLeftTilde |
encoder.EncodeLeftCrLfHtVt |
encoder.EncodeRightPeriod |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeSquareBracket |
encoder.EncodeSemicolon |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8 |
encoder.EncodeDot),
},
}})
}
// Options defines the configuration for the FileLu backend
type Options struct {
Key string `config:"key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents the FileLu file system
type Fs struct {
name string
root string
opt Options
features *fs.Features
endpoint string
pacer *pacer.Pacer
srv *rest.Client
client *http.Client
targetFile string
}
// NewFs creates a new Fs object for FileLu
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, fmt.Errorf("failed to parse config: %w", err)
}
if opt.Key == "" {
return nil, fmt.Errorf("FileLu Rclone Key is required")
}
client := fshttp.NewClient(ctx)
if strings.TrimSpace(root) == "" {
root = ""
}
root = strings.Trim(root, "/")
filename := ""
f := &Fs{
name: name,
opt: *opt,
endpoint: "https://filelu.com/rclone",
client: client,
srv: rest.NewClient(client).SetRoot("https://filelu.com/rclone"),
pacer: pacer.New(),
targetFile: filename,
root: root,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
WriteMetadata: false,
SlowHash: true,
}).Fill(ctx, f)
rootContainer, rootDirectory := rootSplit(f.root)
if rootContainer != "" && rootDirectory != "" {
// Check to see if the (container,directory) is actually an existing file
oldRoot := f.root
newRoot, leaf := path.Split(oldRoot)
f.root = strings.Trim(newRoot, "/")
_, err := f.NewObject(ctx, leaf)
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
// File doesn't exist or is a directory so return old f
f.root = strings.Trim(oldRoot, "/")
return f, nil
}
return nil, err
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
// Mkdir creates a directory on the remote server.
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
fullPath := path.Clean(f.root + "/" + dir)
_, err := f.createFolder(ctx, fullPath)
return err
}
// About provides usage statistics for the remote
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
accountInfo, err := f.getAccountInfo(ctx)
if err != nil {
return nil, err
}
totalStorage, err := parseStorageToBytes(accountInfo.Result.Storage)
if err != nil {
return nil, fmt.Errorf("failed to parse total storage: %w", err)
}
usedStorage, err := parseStorageToBytes(accountInfo.Result.StorageUsed)
if err != nil {
return nil, fmt.Errorf("failed to parse used storage: %w", err)
}
return &fs.Usage{
Total: fs.NewUsageValue(totalStorage), // Total bytes available
Used: fs.NewUsageValue(usedStorage), // Total bytes used
Free: fs.NewUsageValue(totalStorage - usedStorage),
}, nil
}
// Purge deletes the directory and all its contents
func (f *Fs) Purge(ctx context.Context, dir string) error {
fullPath := path.Join(f.root, dir)
if fullPath != "" {
fullPath = "/" + strings.Trim(fullPath, "/")
}
return f.deleteFolder(ctx, fullPath)
}
// List returns a list of files and folders for the given directory
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
// Compose full path for API call
fullPath := path.Join(f.root, dir)
fullPath = "/" + strings.Trim(fullPath, "/")
if fullPath == "/" {
fullPath = ""
}
var entries fs.DirEntries
result, err := f.getFolderList(ctx, fullPath)
if err != nil {
return nil, err
}
fldMap := map[string]bool{}
for _, folder := range result.Result.Folders {
fldMap[folder.FldID.String()] = true
if f.root == "" && dir == "" && strings.Contains(folder.Path, "/") {
continue
}
paths := strings.Split(folder.Path, fullPath+"/")
remote := paths[0]
if len(paths) > 1 {
remote = paths[1]
}
if strings.Contains(remote, "/") {
continue
}
pathsWithoutRoot := strings.Split(folder.Path, "/"+f.root+"/")
remotePathWithoutRoot := pathsWithoutRoot[0]
if len(pathsWithoutRoot) > 1 {
remotePathWithoutRoot = pathsWithoutRoot[1]
}
remotePathWithoutRoot = strings.TrimPrefix(remotePathWithoutRoot, "/")
entries = append(entries, fs.NewDir(remotePathWithoutRoot, time.Now()))
}
for _, file := range result.Result.Files {
if _, ok := fldMap[file.FldID.String()]; ok {
continue
}
remote := path.Join(dir, file.Name)
// trim leading slashes
remote = strings.TrimPrefix(remote, "/")
obj := &Object{
fs: f,
remote: remote,
size: file.Size,
modTime: time.Now(),
}
entries = append(entries, obj)
}
return entries, nil
}
// Put uploads a file directly to the destination folder in the FileLu storage system.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
err := f.uploadFile(ctx, in, src.Remote())
if err != nil {
return nil, err
}
newObject := &Object{
fs: f,
remote: src.Remote(),
size: src.Size(),
modTime: src.ModTime(ctx),
}
fs.Infof(f, "Put: Successfully uploaded new file %q", src.Remote())
return newObject, nil
}
// Move moves the file to the specified location
func (f *Fs) Move(ctx context.Context, src fs.Object, destinationPath string) (fs.Object, error) {
if strings.HasPrefix(destinationPath, "/") || strings.Contains(destinationPath, ":\\") {
dir := path.Dir(destinationPath)
if err := os.MkdirAll(dir, 0755); err != nil {
return nil, fmt.Errorf("failed to create destination directory: %w", err)
}
reader, err := src.Open(ctx)
if err != nil {
return nil, fmt.Errorf("failed to open source file: %w", err)
}
defer func() {
if err := reader.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
dest, err := os.Create(destinationPath)
if err != nil {
return nil, fmt.Errorf("failed to create destination file: %w", err)
}
defer func() {
if err := dest.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
if _, err := io.Copy(dest, reader); err != nil {
return nil, fmt.Errorf("failed to copy file content: %w", err)
}
if err := src.Remove(ctx); err != nil {
return nil, fmt.Errorf("failed to remove source file: %w", err)
}
return nil, nil
}
reader, err := src.Open(ctx)
if err != nil {
return nil, fmt.Errorf("failed to open source object: %w", err)
}
defer func() {
if err := reader.Close(); err != nil {
fs.Logf(nil, "Failed to close file body: %v", err)
}
}()
err = f.uploadFile(ctx, reader, destinationPath)
if err != nil {
return nil, fmt.Errorf("failed to upload file to destination: %w", err)
}
if err := src.Remove(ctx); err != nil {
return nil, fmt.Errorf("failed to delete source file: %w", err)
}
return &Object{
fs: f,
remote: destinationPath,
size: src.Size(),
modTime: src.ModTime(ctx),
}, nil
}
// Rmdir removes a directory
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
fullPath := path.Join(f.root, dir)
if fullPath != "" {
fullPath = "/" + strings.Trim(fullPath, "/")
}
// Step 1: Check if folder is empty
listResp, err := f.getFolderList(ctx, fullPath)
if err != nil {
return err
}
if len(listResp.Result.Files) > 0 || len(listResp.Result.Folders) > 0 {
return fmt.Errorf("Rmdir: directory %q is not empty", fullPath)
}
// Step 2: Delete the folder
return f.deleteFolder(ctx, fullPath)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
)


@@ -0,0 +1,324 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strings"
"github.com/rclone/rclone/backend/filelu/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest"
)
// createFolder creates a folder at the specified path.
func (f *Fs) createFolder(ctx context.Context, dirPath string) (*api.CreateFolderResponse, error) {
encodedDir := f.fromStandardPath(dirPath)
apiURL := fmt.Sprintf("%s/folder/create?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(encodedDir),
url.QueryEscape(f.opt.Key), // assuming f.opt.Key is the correct field
)
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
var resp *http.Response
result := api.CreateFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
var innerErr error
resp, innerErr = f.client.Do(req)
return fserrors.ShouldRetry(innerErr), innerErr
})
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
err = json.NewDecoder(resp.Body).Decode(&result)
if err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return nil, fmt.Errorf("error: %s", result.Msg)
}
fs.Infof(f, "Successfully created folder %q with ID %v", dirPath, result.Result.FldID)
return &result, nil
}
// getFolderList lists both files and folders in a directory.
func (f *Fs) getFolderList(ctx context.Context, path string) (*api.FolderListResponse, error) {
encodedDir := f.fromStandardPath(path)
apiURL := fmt.Sprintf("%s/folder/list?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(encodedDir),
url.QueryEscape(f.opt.Key),
)
var body []byte
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to list directory: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err = io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("error reading response body: %w", err)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return nil, err
}
var response api.FolderListResponse
if err := json.NewDecoder(bytes.NewReader(body)).Decode(&response); err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if response.Status != 200 {
if strings.Contains(response.Msg, "Folder not found") {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("API error: %s", response.Msg)
}
for index := range response.Result.Folders {
response.Result.Folders[index].Path = f.toStandardPath(response.Result.Folders[index].Path)
}
for index := range response.Result.Files {
response.Result.Files[index].Name = f.toStandardPath(response.Result.Files[index].Name)
}
return &response, nil
}
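For orientation, the request that getFolderList builds looks like the following; the folder path is illustrative and the key is redacted.

```go
// Example request and error mapping for getFolderList (illustrative):
//
//   GET https://filelu.com/rclone/folder/list?folder_path=%2Fdemo&key=REDACTED
//
// A body with "status" != 200 is returned as an error, and a message
// containing "Folder not found" is mapped to fs.ErrorDirNotFound.
```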
// deleteFolder deletes a folder at the specified path.
func (f *Fs) deleteFolder(ctx context.Context, fullPath string) error {
fullPath = f.fromStandardPath(fullPath)
deleteURL := fmt.Sprintf("%s/folder/delete?folder_path=%s&key=%s",
f.endpoint,
url.QueryEscape(fullPath),
url.QueryEscape(f.opt.Key),
)
delResp := api.DeleteFolderResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", deleteURL, nil)
if err != nil {
return false, err
}
resp, err := f.client.Do(req)
if err != nil {
return fserrors.ShouldRetry(err), err
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err := io.ReadAll(resp.Body)
if err != nil {
return false, err
}
if err := json.Unmarshal(body, &delResp); err != nil {
return false, fmt.Errorf("error decoding delete response: %w", err)
}
if delResp.Status != 200 {
return false, fmt.Errorf("delete error: %s", delResp.Msg)
}
return false, nil
})
if err != nil {
return err
}
fs.Infof(f, "Rmdir: successfully deleted %q", fullPath)
return nil
}
// getDirectLink gets a direct download link for a file from FileLu.
func (f *Fs) getDirectLink(ctx context.Context, filePath string) (string, int64, error) {
filePath = f.fromStandardPath(filePath)
apiURL := fmt.Sprintf("%s/file/direct_link?file_path=%s&key=%s",
f.endpoint,
url.QueryEscape(filePath),
url.QueryEscape(f.opt.Key),
)
result := api.FileDirectLinkResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to fetch direct link: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", 0, err
}
return result.Result.URL, result.Result.Size, nil
}
// deleteFile deletes a file based on filePath
func (f *Fs) deleteFile(ctx context.Context, filePath string) error {
filePath = f.fromStandardPath(filePath)
apiURL := fmt.Sprintf("%s/file/remove?file_path=%s&key=%s",
f.endpoint,
url.QueryEscape(filePath),
url.QueryEscape(f.opt.Key),
)
result := api.DeleteFileResponse{}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to delete file: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
return err
}
// getAccountInfo retrieves account information
func (f *Fs) getAccountInfo(ctx context.Context) (*api.AccountInfoResponse, error) {
opts := rest.Opts{
Method: "GET",
Path: "/account/info",
Parameters: url.Values{
"key": {f.opt.Key},
},
}
var result api.AccountInfoResponse
err := f.pacer.Call(func() (bool, error) {
_, callErr := f.srv.CallJSON(ctx, &opts, nil, &result)
return fserrors.ShouldRetry(callErr), callErr
})
if err != nil {
return nil, err
}
if result.Status != 200 {
return nil, fmt.Errorf("error: %s", result.Msg)
}
return &result, nil
}
// getFileInfo retrieves file information based on file code
func (f *Fs) getFileInfo(ctx context.Context, fileCode string) (*api.FileInfoResponse, error) {
u, _ := url.Parse(f.endpoint + "/file/info2")
q := u.Query()
q.Set("file_code", fileCode) // query value is escaped by q.Encode() below
q.Set("key", f.opt.Key)
u.RawQuery = q.Encode()
apiURL := f.endpoint + "/file/info2?" + u.RawQuery
var body []byte
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to fetch file info: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
body, err = io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("error reading response body: %w", err)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return nil, err
}
result := api.FileInfoResponse{}
if err := json.NewDecoder(bytes.NewReader(body)).Decode(&result); err != nil {
return nil, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 || len(result.Result) == 0 {
return nil, fs.ErrorObjectNotFound
}
return &result, nil
}

View File

@@ -0,0 +1,193 @@
package filelu
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"mime/multipart"
"net/http"
"net/url"
"path"
"strings"
"github.com/rclone/rclone/fs"
)
// uploadFile uploads a file to FileLu
func (f *Fs) uploadFile(ctx context.Context, fileContent io.Reader, fileFullPath string) error {
directory := path.Dir(fileFullPath)
fileName := path.Base(fileFullPath)
if directory == "." {
directory = ""
}
destinationFolderPath := path.Join(f.root, directory)
if destinationFolderPath != "" {
destinationFolderPath = "/" + strings.Trim(destinationFolderPath, "/")
}
existingEntries, err := f.List(ctx, path.Dir(fileFullPath))
if err != nil {
if errors.Is(err, fs.ErrorDirNotFound) {
err = f.Mkdir(ctx, path.Dir(fileFullPath))
if err != nil {
return fmt.Errorf("failed to create directory: %w", err)
}
} else {
return fmt.Errorf("failed to list existing files: %w", err)
}
}
for _, entry := range existingEntries {
if entry.Remote() == fileFullPath {
_, ok := entry.(fs.Object)
if !ok {
continue
}
// The file already exists, so remove it before uploading the replacement
filePath := "/" + strings.Trim(destinationFolderPath+"/"+fileName, "/")
err = f.deleteFile(ctx, filePath)
if err != nil {
return fmt.Errorf("failed to delete existing file: %w", err)
}
}
}
uploadURL, sessID, err := f.getUploadServer(ctx)
if err != nil {
return fmt.Errorf("failed to retrieve upload server: %w", err)
}
// The returned fileCode isn't needed here, so only check the error
if _, err := f.uploadFileWithDestination(ctx, uploadURL, sessID, fileName, fileContent, destinationFolderPath); err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
return nil
}
// getUploadServer gets the upload server URL with proper key authentication
func (f *Fs) getUploadServer(ctx context.Context) (string, string, error) {
apiURL := fmt.Sprintf("%s/upload/server?key=%s", f.endpoint, url.QueryEscape(f.opt.Key))
var result struct {
Status int `json:"status"`
SessID string `json:"sess_id"`
Result string `json:"result"`
Msg string `json:"msg"`
}
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, fmt.Errorf("failed to create request: %w", err)
}
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to get upload server: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("error decoding response: %w", err)
}
if result.Status != 200 {
return false, fmt.Errorf("API error: %s", result.Msg)
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", "", err
}
return result.Result, result.SessID, nil
}
// uploadFileWithDestination uploads a file directly to a specified folder using file content reader.
func (f *Fs) uploadFileWithDestination(ctx context.Context, uploadURL, sessID, fileName string, fileContent io.Reader, dirPath string) (string, error) {
destinationPath := f.fromStandardPath(dirPath)
encodedFileName := f.fromStandardPath(fileName)
pr, pw := io.Pipe()
writer := multipart.NewWriter(pw)
isDeletionRequired := false
go func() {
defer func() {
if err := pw.Close(); err != nil {
fs.Logf(nil, "Failed to close: %v", err)
}
}()
_ = writer.WriteField("sess_id", sessID)
_ = writer.WriteField("utype", "prem")
_ = writer.WriteField("fld_path", destinationPath)
part, err := writer.CreateFormFile("file_0", encodedFileName)
if err != nil {
pw.CloseWithError(fmt.Errorf("failed to create form file: %w", err))
return
}
if _, err := io.Copy(part, fileContent); err != nil {
isDeletionRequired = true
pw.CloseWithError(fmt.Errorf("failed to copy file content: %w", err))
return
}
if err := writer.Close(); err != nil {
pw.CloseWithError(fmt.Errorf("failed to close writer: %w", err))
}
}()
var fileCode string
err := f.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "POST", uploadURL, pr)
if err != nil {
return false, fmt.Errorf("failed to create upload request: %w", err)
}
req.Header.Set("Content-Type", writer.FormDataContentType())
resp, err := f.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to send upload request: %w", err)
}
defer respBodyClose(resp.Body)
var result []struct {
FileCode string `json:"file_code"`
FileStatus string `json:"file_status"`
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, fmt.Errorf("failed to parse upload response: %w", err)
}
if len(result) == 0 {
return false, errors.New("upload failed: empty response from upload server")
}
if result[0].FileStatus != "OK" {
return false, fmt.Errorf("upload failed with status: %s", result[0].FileStatus)
}
fileCode = result[0].FileCode
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil && isDeletionRequired {
// Attempt to delete the file if upload fails
_ = f.deleteFile(ctx, destinationPath+"/"+fileName)
}
return fileCode, err
}
// respBodyClose closes a response body and logs any error
func respBodyClose(responseBody io.Closer) {
if cerr := responseBody.Close(); cerr != nil {
fs.Logf(nil, "Failed to close response body: %v", cerr)
}
}
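For reference, here is a minimal standalone sketch (not part of the backend) of the multipart request that uploadFileWithDestination builds above. The upload URL and session id are placeholders for the values that getUploadServer returns in practice:

package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"strings"
)

func main() {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	_ = w.WriteField("sess_id", "SESSION-ID") // placeholder, returned by /upload/server
	_ = w.WriteField("utype", "prem")         // account type field used by the backend
	_ = w.WriteField("fld_path", "/backups")  // destination folder path
	part, _ := w.CreateFormFile("file_0", "hello.txt")
	_, _ = io.Copy(part, strings.NewReader("hello world"))
	_ = w.Close()

	// Placeholder URL standing in for the upload server returned by /upload/server
	req, _ := http.NewRequest("POST", "https://upload.example.com/", &buf)
	req.Header.Set("Content-Type", w.FormDataContentType())
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("upload failed:", err)
		return
	}
	defer func() { _ = resp.Body.Close() }()
	body, _ := io.ReadAll(resp.Body)
	// The backend expects a JSON array of {file_code, file_status} objects in reply
	fmt.Println(resp.Status, string(body))
}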

View File

@@ -0,0 +1,112 @@
package filelu
import (
"context"
"errors"
"fmt"
"path"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
)
// errFileNotFound represents a file not found error
var errFileNotFound = errors.New("file not found")
// getFileCode retrieves the file code for a given file path
func (f *Fs) getFileCode(ctx context.Context, filePath string) (string, error) {
// Prepare parent directory
parentDir := path.Dir(filePath)
// Call List to get all the files
result, err := f.getFolderList(ctx, parentDir)
if err != nil {
return "", err
}
for _, file := range result.Result.Files {
filePathFromServer := parentDir + "/" + file.Name
if parentDir == "/" {
filePathFromServer = "/" + file.Name
}
if filePath == filePathFromServer {
return file.FileCode, nil
}
}
return "", errFileNotFound
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
func (f *Fs) fromStandardPath(remote string) string {
return f.opt.Enc.FromStandardPath(remote)
}
func (f *Fs) toStandardPath(remote string) string {
return f.opt.Enc.ToStandardPath(remote)
}
// Hashes returns an empty hash set, indicating no hash support
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet() // Properly creates an empty hash set
}
// Name returns the remote name
func (f *Fs) Name() string {
return f.name
}
// Root returns the root path
func (f *Fs) Root() string {
return f.root
}
// Precision returns the precision of the remote
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// String returns a description of the Fs
func (f *Fs) String() string {
return fmt.Sprintf("FileLu root '%s'", f.root)
}
// isFileCode checks if a string looks like a file code
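// (exactly 12 lowercase ASCII letters or digits), e.g. "abc123def456" passes
// and "ABC123DEF456" does not.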
func isFileCode(s string) bool {
if len(s) != 12 {
return false
}
for _, c := range s {
if !((c >= 'a' && c <= 'z') || (c >= '0' && c <= '9')) {
return false
}
}
return true
}
func shouldRetry(err error) bool {
return fserrors.ShouldRetry(err)
}
func shouldRetryHTTP(code int) bool {
return code == 429 || code >= 500
}
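// rootSplit splits an absolute path into its first segment and the rest,
// e.g. "folder/sub/file.txt" becomes ("folder", "sub/file.txt") and a bare
// "folder" becomes ("folder", "").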
func rootSplit(absPath string) (bucket, bucketPath string) {
// No bucket
if absPath == "" {
return "", ""
}
slash := strings.IndexRune(absPath, '/')
// Bucket but no path
if slash < 0 {
return absPath, ""
}
return absPath[:slash], absPath[slash+1:]
}

View File

@@ -0,0 +1,259 @@
package filelu
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
)
// Object describes a FileLu object
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
}
// NewObject creates a new Object for the given remote path
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
filePath := "/" + strings.Trim(path.Join(f.root, remote), "/")
// Get File code
fileCode, err := f.getFileCode(ctx, filePath)
if err != nil {
return nil, fs.ErrorObjectNotFound
}
// Get File info
fileInfos, err := f.getFileInfo(ctx, fileCode)
if err != nil {
return nil, fmt.Errorf("failed to get file info: %w", err)
}
fileInfo := fileInfos.Result[0]
size, _ := strconv.ParseInt(fileInfo.Size, 10, 64)
return &Object{
fs: f,
remote: remote,
size: size,
modTime: time.Now(),
}, nil
}
// Open opens the object for reading
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
filePath := path.Join(o.fs.root, o.remote)
// Get direct link
directLink, size, err := o.fs.getDirectLink(ctx, filePath)
if err != nil {
return nil, fmt.Errorf("failed to get direct link: %w", err)
}
o.size = size
// Offset and Count for range download
var offset int64
var count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
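// For illustration of the decoding above: for a 100 byte object a
// RangeOption{Start: 10, End: 49} leaves offset=10, count=40, while a
// SeekOption{Offset: 10} leaves offset=10 with count set to the full size.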
var reader io.ReadCloser
err = o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", directLink, nil)
if err != nil {
return false, fmt.Errorf("failed to create download request: %w", err)
}
resp, err := o.fs.client.Do(req)
if err != nil {
return shouldRetry(err), fmt.Errorf("failed to download file: %w", err)
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if resp.StatusCode != http.StatusOK {
return false, fmt.Errorf("failed to download file: HTTP %d", resp.StatusCode)
}
// Read the whole body into memory, then apply offset and count
currentContents, err := io.ReadAll(resp.Body)
if err != nil {
return false, fmt.Errorf("failed to read response body: %w", err)
}
if offset > 0 {
if offset > int64(len(currentContents)) {
return false, fmt.Errorf("offset %d exceeds file size %d", offset, len(currentContents))
}
currentContents = currentContents[offset:]
}
if count > 0 && count < int64(len(currentContents)) {
currentContents = currentContents[:count]
}
reader = io.NopCloser(bytes.NewReader(currentContents))
return false, nil
})
if err != nil {
return nil, err
}
return reader, nil
}
// Update updates the object with new data
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
if src.Size() <= 0 {
return fs.ErrorCantUploadEmptyFiles
}
err := o.fs.uploadFile(ctx, in, o.remote)
if err != nil {
return fmt.Errorf("failed to upload file: %w", err)
}
o.size = src.Size()
return nil
}
// Remove deletes the object from FileLu
func (o *Object) Remove(ctx context.Context) error {
fullPath := "/" + strings.Trim(path.Join(o.fs.root, o.remote), "/")
err := o.fs.deleteFile(ctx, fullPath)
if err != nil {
return err
}
fs.Infof(o.fs, "Successfully deleted file: %s", fullPath)
return nil
}
// Hash returns the MD5 hash of an object
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
var fileCode string
if isFileCode(o.fs.root) {
fileCode = o.fs.root
} else {
matches := regexp.MustCompile(`\((.*?)\)`).FindAllStringSubmatch(o.remote, -1)
for _, match := range matches {
if len(match) > 1 && len(match[1]) == 12 {
fileCode = match[1]
break
}
}
}
if fileCode == "" {
return "", fmt.Errorf("no valid file code found in the remote path")
}
apiURL := fmt.Sprintf("%s/file/info?file_code=%s&key=%s",
o.fs.endpoint, url.QueryEscape(fileCode), url.QueryEscape(o.fs.opt.Key))
var result struct {
Status int `json:"status"`
Msg string `json:"msg"`
Result []struct {
Hash string `json:"hash"`
} `json:"result"`
}
err := o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, "GET", apiURL, nil)
if err != nil {
return false, err
}
resp, err := o.fs.client.Do(req)
if err != nil {
return shouldRetry(err), err
}
defer func() {
if err := resp.Body.Close(); err != nil {
fs.Logf(nil, "Failed to close response body: %v", err)
}
}()
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
return false, err
}
return shouldRetryHTTP(resp.StatusCode), nil
})
if err != nil {
return "", err
}
if result.Status != 200 || len(result.Result) == 0 {
return "", fmt.Errorf("error: unable to fetch hash: %s", result.Msg)
}
return result.Result[0].Hash, nil
}
// String returns a string representation of the object
func (o *Object) String() string {
return o.remote
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the object
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Storable indicates whether the object is storable
func (o *Object) Storable() bool {
return true
}

View File

@@ -0,0 +1,16 @@
package filelu_test
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests for the FileLu backend
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFileLu:",
NilObject: nil,
SkipInvalidUTF8: true,
})
}

backend/filelu/utils.go (new normal file, 15 lines)
View File

@@ -0,0 +1,15 @@
package filelu
import (
"fmt"
)
// parseStorageToBytes converts a storage size in gigabytes (e.g. "10") to bytes
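// For example, "10" is treated as 10 GiB and parses to 10737418240 bytes.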
func parseStorageToBytes(storage string) (int64, error) {
var gb float64
_, err := fmt.Sscanf(storage, "%f", &gb)
if err != nil {
return 0, fmt.Errorf("failed to parse storage: %w", err)
}
return int64(gb * 1024 * 1024 * 1024), nil
}

View File

@@ -9,6 +9,7 @@ import (
"io"
"net"
"net/textproto"
"net/url"
"path"
"runtime"
"strings"
@@ -162,6 +163,16 @@ Enabled by default. Use 0 to disable.`,
Help: "Disable TLS 1.3 (workaround for FTP servers with buggy TLS)",
Default: false,
Advanced: true,
}, {
Name: "allow_insecure_tls_ciphers",
Help: `Allow insecure TLS ciphers
Setting this flag will allow the usage of the following TLS ciphers in addition to the secure defaults:
- TLS_RSA_WITH_AES_128_GCM_SHA256
`,
Default: false,
Advanced: true,
}, {
Name: "shut_timeout",
Help: "Maximum time to wait for data connection closing status.",
@@ -185,6 +196,14 @@ Supports the format user:pass@host:port, user@host:port, host:port.
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "http_proxy",
Default: "",
Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
`,
Advanced: true,
}, {
@@ -227,28 +246,30 @@ a write only folder.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
Host string `config:"host"`
User string `config:"user"`
Pass string `config:"pass"`
Port string `config:"port"`
TLS bool `config:"tls"`
ExplicitTLS bool `config:"explicit_tls"`
TLSCacheSize int `config:"tls_cache_size"`
DisableTLS13 bool `config:"disable_tls13"`
AllowInsecureTLSCiphers bool `config:"allow_insecure_tls_ciphers"`
Concurrency int `config:"concurrency"`
SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
DisableUTF8 bool `config:"disable_utf8"`
WritingMDTM bool `config:"writing_mdtm"`
ForceListHidden bool `config:"force_list_hidden"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
ShutTimeout fs.Duration `config:"shut_timeout"`
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
HTTPProxy string `config:"http_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@@ -262,10 +283,12 @@ type Fs struct {
user string
pass string
dialAddr string
tlsConf *tls.Config // default TLS client config
poolMu sync.Mutex
pool []*ftp.ServerConn
drain *time.Timer // used to drain the pool when we stop using the connections
tokens *pacer.TokenDispenser
proxyURL *url.URL // address of HTTP proxy read from environment
pacer *fs.Pacer // pacer for FTP connections
fGetTime bool // true if the ftp library accepts GetTime
fSetTime bool // true if the ftp library accepts SetTime
@@ -386,9 +409,14 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
func (f *Fs) tlsConfig() *tls.Config {
var tlsConfig *tls.Config
if f.opt.TLS || f.opt.ExplicitTLS {
tlsConfig = &tls.Config{
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
if f.tlsConf != nil {
tlsConfig = f.tlsConf.Clone()
} else {
tlsConfig = new(tls.Config)
}
tlsConfig.ServerName = f.opt.Host
if f.opt.SkipVerifyTLSCert {
tlsConfig.InsecureSkipVerify = true
}
if f.opt.TLSCacheSize > 0 {
tlsConfig.ClientSessionCache = tls.NewLRUClientSessionCache(f.opt.TLSCacheSize)
@@ -396,6 +424,14 @@ func (f *Fs) tlsConfig() *tls.Config {
if f.opt.DisableTLS13 {
tlsConfig.MaxVersion = tls.VersionTLS12
}
if f.opt.AllowInsecureTLSCiphers {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
tlsConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
}
}
return tlsConfig
}
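As a usage illustration (the remote name, host and user are placeholders), an FTPS remote that needs the weak cipher could be configured like this:

[legacy-ftps]
type = ftp
host = ftp.example.com
user = alice
explicit_tls = true
allow_insecure_tls_ciphers = true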
@@ -413,11 +449,28 @@ func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
dial := func(network, address string) (conn net.Conn, err error) {
fs.Debugf(f, "dial(%q,%q)", network, address)
defer func() {
fs.Debugf(f, "> dial: conn=%T, err=%v", conn, err)
if err != nil {
fs.Debugf(f, "> dial: conn=%v, err=%v", conn, err)
} else {
fs.Debugf(f, "> dial: conn=%s->%s, err=%v", conn.LocalAddr(), conn.RemoteAddr(), err)
}
}()
baseDialer := fshttp.NewDialer(ctx)
if f.opt.SocksProxy != "" {
conn, err = proxy.SOCKS5Dial(network, address, f.opt.SocksProxy, baseDialer)
if f.opt.SocksProxy != "" || f.proxyURL != nil {
// We need to make the onward connection to f.opt.Host. However the FTP
// library sets the host to the proxy IP after using EPSV or PASV so we need
// to correct that here.
var dialPort string
_, dialPort, err = net.SplitHostPort(address)
if err != nil {
return nil, err
}
dialAddress := net.JoinHostPort(f.opt.Host, dialPort)
if f.opt.SocksProxy != "" {
conn, err = proxy.SOCKS5Dial(network, dialAddress, f.opt.SocksProxy, baseDialer)
} else {
conn, err = proxy.HTTPConnectDial(network, dialAddress, f.proxyURL, baseDialer)
}
} else {
conn, err = baseDialer.Dial(network, address)
}
@@ -626,11 +679,20 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
tlsConf: fshttp.NewTransport(ctx).TLSClientConfig,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
// get proxy URL if set
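// e.g. http_proxy = http://proxy.example.com:3128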
if opt.HTTPProxy != "" {
proxyURL, err := url.Parse(opt.HTTPProxy)
if err != nil {
return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
}
f.proxyURL = proxyURL
}
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
@@ -1230,7 +1292,7 @@ func (f *ftpReadCloser) Close() error {
// See: https://github.com/rclone/rclone/issues/3445#issuecomment-521654257
if errX := textprotoError(err); errX != nil {
switch errX.Code {
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend:
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend, ftp.StatusRequestedFileActionOK:
err = nil
}
}

View File

@@ -52,7 +52,7 @@ func (f *Fs) testUploadTimeout(t *testing.T) {
ci.Timeout = saveTimeout
}()
ci.LowLevelRetries = 1
ci.Timeout = idleTimeout
ci.Timeout = fs.Duration(idleTimeout)
upload := func(concurrency int, shutTimeout time.Duration) (obj fs.Object, err error) {
fixFs := deriveFs(ctx, t, f, settings{

View File

@@ -194,33 +194,9 @@ type DeleteResponse struct {
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
// DirectUploadURL returns the direct upload URL for Gofile
func DirectUploadURL() string {
return "https://upload.gofile.io/uploadfile"
}
// UploadResponse is returned by POST /contents/uploadfile

View File

@@ -8,13 +8,11 @@ import (
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/rclone/rclone/backend/gofile/api"
@@ -37,10 +35,8 @@ const (
maxSleep = 20 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
rootURL = "https://api.gofile.io"
serversExpiry = 60 * time.Second // check for new upload servers this often
serversActive = 2 // choose this many closest upload servers to use
rateLimitSleep = 5 * time.Second // penalise a goroutine by this long for making a rate limit error
maxDepth = 4 // in ListR recursive list this deep (maximum is 16)
rateLimitSleep = 5 * time.Second // penalise a goroutine by this long for making a rate limit error
maxDepth = 4 // in ListR recursive list this deep (maximum is 16)
)
/*
@@ -128,16 +124,13 @@ type Options struct {
// Fs represents a remote gofile
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
serversMu *sync.Mutex // protect the servers info below
servers []api.Server // upload servers we can use
serversChecked time.Time // time the servers were refreshed
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
}
// Object describes a gofile object
@@ -311,12 +304,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
client := fshttp.NewClient(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
serversMu: new(sync.Mutex),
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: false,
@@ -435,98 +427,6 @@ func (f *Fs) readRootFolderID(ctx context.Context, m configmap.Mapper) (err erro
return nil
}
// Find the top n servers measured by response time
func (f *Fs) bestServers(ctx context.Context, servers []api.Server, n int) (newServers []api.Server) {
ctx, cancel := context.WithDeadline(ctx, time.Now().Add(10*time.Second))
defer cancel()
if n > len(servers) {
n = len(servers)
}
results := make(chan int, len(servers))
// Test how long the servers take to respond
for i := range servers {
i := i // for closure
go func() {
opts := rest.Opts{
Method: "GET",
RootURL: servers[i].Root(),
}
var result api.UploadServerStatus
start := time.Now()
_, err := f.srv.CallJSON(ctx, &opts, nil, &result)
ping := time.Since(start)
err = result.Err(err)
if err != nil {
results <- -1 // send a -ve number on error
return
}
fs.Debugf(nil, "Upload server %v responded in %v", &servers[i], ping)
results <- i
}()
}
// Wait for n servers to respond
newServers = make([]api.Server, 0, n)
for range servers {
i := <-results
if i >= 0 {
newServers = append(newServers, servers[i])
}
if len(newServers) >= n {
break
}
}
return newServers
}
// Clear all the upload servers - call on an error
func (f *Fs) clearServers() {
f.serversMu.Lock()
defer f.serversMu.Unlock()
fs.Debugf(f, "Clearing upload servers")
f.servers = nil
}
// Gets an upload server
func (f *Fs) getServer(ctx context.Context) (server *api.Server, err error) {
f.serversMu.Lock()
defer f.serversMu.Unlock()
if len(f.servers) == 0 || time.Since(f.serversChecked) >= serversExpiry {
opts := rest.Opts{
Method: "GET",
Path: "/servers",
}
var result api.ServersResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err)
})
if err = result.Err(err); err != nil {
if len(f.servers) == 0 {
return nil, fmt.Errorf("failed to read upload servers: %w", err)
}
fs.Errorf(f, "failed to read new upload servers: %v", err)
} else {
// Find the top servers measured by response time
f.servers = f.bestServers(ctx, result.Data.Servers, serversActive)
f.serversChecked = time.Now()
}
}
if len(f.servers) == 0 {
return nil, errors.New("no upload servers found")
}
// Pick a server at random since we've already found the top ones
i := rand.Intn(len(f.servers))
return &f.servers[i], nil
}
// rootSlash returns root with a slash on if it is empty, otherwise empty string
func (f *Fs) rootSlash() string {
if f.root == "" {
@@ -1526,13 +1426,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// Find an upload server
server, err := o.fs.getServer(ctx)
if err != nil {
return err
}
fs.Debugf(o, "Using upload server %v", server)
// If the file exists, delete it after a successful upload
if o.id != "" {
id := o.id
@@ -1561,7 +1454,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
},
MultipartContentName: "file",
MultipartFileName: o.fs.opt.Enc.FromStandardName(leaf),
RootURL: server.URL(),
RootURL: api.DirectUploadURL(),
Options: options,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -1569,10 +1462,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return shouldRetry(ctx, resp, err)
})
if err = result.Err(err); err != nil {
if isAPIErr(err, "error-freespace") {
fs.Errorf(o, "Upload server out of space - need to retry upload")
}
o.fs.clearServers()
return fmt.Errorf("failed to upload file: %w", err)
}
return o.setMetaData(&result.Data)

View File

@@ -252,6 +252,9 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "us-east4",
Help: "Northern Virginia",
}, {
Value: "us-east5",
Help: "Ohio",
}, {
Value: "us-west1",
Help: "Oregon",
@@ -483,6 +486,9 @@ func parsePath(path string) (root string) {
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
if f.opt.DirectoryMarkers && strings.HasSuffix(bucketPath, "//") {
bucketPath = bucketPath[:len(bucketPath)-1]
}
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
@@ -712,7 +718,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
}
remote = remote[len(prefix):]
if addBucket {
@@ -757,7 +763,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -765,16 +771,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
})
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, err
return err
}
// listBuckets lists the buckets
@@ -817,14 +823,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
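As a usage illustration, a caller can stream entries through ListP with an fs.ListRCallback rather than collecting a full slice. A hypothetical helper (not part of this change) might look like:

// printAll streams a listing of dir through ListP and prints each remote
// path as the tranches of entries arrive.
func printAll(ctx context.Context, f *Fs, dir string) error {
	return f.ListP(ctx, dir, func(entries fs.DirEntries) error {
		for _, entry := range entries {
			fmt.Println(entry.Remote())
		}
		return nil
	})
}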
// ListR lists the objects and directories of the Fs starting
@@ -959,7 +997,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
remote, _ = strings.CutSuffix(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
@@ -1096,7 +1134,15 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
remote: remote,
}
rewriteRequest := f.svc.Objects.Rewrite(srcBucket, srcPath, dstBucket, dstPath, nil)
// Set the storage class for the destination object if configured
var dstObject *storage.Object
if f.opt.StorageClass != "" {
dstObject = &storage.Object{
StorageClass: f.opt.StorageClass,
}
}
rewriteRequest := f.svc.Objects.Rewrite(srcBucket, srcPath, dstBucket, dstPath, dstObject)
if !f.opt.BucketPolicyOnly {
rewriteRequest.DestinationPredefinedAcl(f.opt.ObjectACL)
}
@@ -1384,6 +1430,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
ContentType: fs.MimeType(ctx, src),
Metadata: metadataFromModTime(modTime),
}
// Set the storage class from config if configured
if o.fs.opt.StorageClass != "" {
object.StorageClass = o.fs.opt.StorageClass
}
// Apply upload options
for _, option := range options {
key, value := option.Header()
@@ -1459,6 +1509,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -43,6 +43,7 @@ var (
errAlbumDelete = errors.New("google photos API does not implement deleting albums")
errRemove = errors.New("google photos API only implements removing files from albums")
errOwnAlbums = errors.New("google photos API only allows uploading to albums rclone created")
errReadOnly = errors.New("can't upload files in read only mode")
)
const (
@@ -52,19 +53,31 @@ const (
listChunks = 100 // chunk size to read directory listings
albumChunks = 50 // chunk size to read album listings
minSleep = 10 * time.Millisecond
scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly"
scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary"
scopeAccess = 2 // position of access scope in list
scopeAppendOnly = "https://www.googleapis.com/auth/photoslibrary.appendonly"
scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata"
scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata"
)
var (
// scopes needed for read write access
scopesReadWrite = []string{
"openid",
"profile",
scopeAppendOnly,
scopeReadOnly,
scopeReadWrite,
}
// scopes needed for read only access
scopesReadOnly = []string{
"openid",
"profile",
scopeReadOnly,
}
// Description of how to auth for this app
oauthConfig = &oauthutil.Config{
Scopes: []string{
"openid",
"profile",
scopeReadWrite, // this must be at position scopeAccess
},
Scopes: scopesReadWrite,
AuthURL: google.Endpoint.AuthURL,
TokenURL: google.Endpoint.TokenURL,
ClientID: rcloneClientID,
@@ -100,20 +113,26 @@ func init() {
case "":
// Fill in the scopes
if opt.ReadOnly {
oauthConfig.Scopes[scopeAccess] = scopeReadOnly
oauthConfig.Scopes = scopesReadOnly
} else {
oauthConfig.Scopes[scopeAccess] = scopeReadWrite
oauthConfig.Scopes = scopesReadWrite
}
return oauthutil.ConfigOut("warning", &oauthutil.Options{
return oauthutil.ConfigOut("warning1", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
case "warning":
case "warning1":
// Warn the user as required by google photos integration
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
return fs.ConfigConfirm("warning2", true, "config_warning", `Warning
IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
case "warning2":
// Warn the user that rclone can now only download photos from Google Photos that it uploaded itself
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
IMPORTANT: Due to Google policy changes rclone can now only download photos it uploaded.`)
case "warning_done":
return nil, nil
}
@@ -333,7 +352,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
baseClient := fshttp.NewClient(ctx)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(ctx, name, m, oauthConfig, baseClient)
if err != nil {
return nil, fmt.Errorf("failed to configure Box: %w", err)
return nil, fmt.Errorf("failed to configure google photos: %w", err)
}
root = strings.Trim(path.Clean(root), "/")
@@ -1120,6 +1139,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
if !album.IsWriteable {
if o.fs.opt.ReadOnly {
return errReadOnly
}
return errOwnAlbums
}

View File

@@ -43,33 +43,42 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
var commandHelp = []fs.CommandHelp{{
Name: "drop",
Short: "Drop cache",
Short: "Drop cache.",
Long: `Completely drop checksum cache.
Usage Example:
rclone backend drop hasher:
`,
Usage example:
` + "```console" + `
rclone backend drop hasher:
` + "```",
}, {
Name: "dump",
Short: "Dump the database",
Long: "Dump cache records covered by the current remote",
Short: "Dump the database.",
Long: "Dump cache records covered by the current remote.",
}, {
Name: "fulldump",
Short: "Full dump of the database",
Long: "Dump all cache records in the database",
Short: "Full dump of the database.",
Long: "Dump all cache records in the database.",
}, {
Name: "import",
Short: "Import a SUM file",
Short: "Import a SUM file.",
Long: `Amend hash cache from a SUM file and bind checksums to files by size/time.
Usage Example:
rclone backend import hasher:subdir md5 /path/to/sum.md5
`,
Usage example:
` + "```console" + `
rclone backend import hasher:subdir md5 /path/to/sum.md5
` + "```",
}, {
Name: "stickyimport",
Short: "Perform fast import of a SUM file",
Short: "Perform fast import of a SUM file.",
Long: `Fill hash cache from a SUM file without verifying file fingerprints.
Usage Example:
rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
`,
Usage example:
` + "```console" + `
rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
` + "```",
}}
func (f *Fs) dbDump(ctx context.Context, full bool, root string) error {

View File

@@ -371,9 +371,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(int64(info.Capacity)),
Used: fs.NewUsageValue(int64(info.Used)),
Free: fs.NewUsageValue(int64(info.Remaining)),
Total: fs.NewUsageValue(info.Capacity),
Used: fs.NewUsageValue(info.Used),
Free: fs.NewUsageValue(info.Remaining),
}, nil
}

View File

@@ -11,6 +11,7 @@ import (
"io"
"mime"
"net/http"
"net/textproto"
"net/url"
"path"
"strings"
@@ -37,6 +38,10 @@ func init() {
Description: "HTTP",
NewFs: NewFs,
CommandHelp: commandHelp,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `HTTP metadata keys are case insensitive and are always returned in lower case.`,
},
Options: []fs.Option{{
Name: "url",
Help: "URL of HTTP host to connect to.\n\nE.g. \"https://example.com\", or \"https://user:pass@example.com\" to use a username and password.",
@@ -98,6 +103,40 @@ sizes of any files, and some files that don't exist may be in the listing.`,
fs.Register(fsi)
}
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"cache-control": {
Help: "Cache-Control header",
Type: "string",
Example: "no-cache",
},
"content-disposition": {
Help: "Content-Disposition header",
Type: "string",
Example: "inline",
},
"content-disposition-filename": {
Help: "Filename retrieved from Content-Disposition header",
Type: "string",
Example: "file.txt",
},
"content-encoding": {
Help: "Content-Encoding header",
Type: "string",
Example: "gzip",
},
"content-language": {
Help: "Content-Language header",
Type: "string",
Example: "en-US",
},
"content-type": {
Help: "Content-Type header",
Type: "string",
Example: "text/plain",
},
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
@@ -126,6 +165,13 @@ type Object struct {
size int64
modTime time.Time
contentType string
// Metadata as pointers to strings as they often won't be present
contentDisposition *string // Content-Disposition: header
contentDispositionFilename *string // Filename retrieved from Content-Disposition: header
cacheControl *string // Cache-Control: header
contentEncoding *string // Content-Encoding: header
contentLanguage *string // Content-Language: header
}
// statusError returns an error if the res contained an error
@@ -277,6 +323,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ci: ci,
}
f.features = (&fs.Features{
ReadMetadata: true,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
@@ -429,6 +476,29 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
return names, nil
}
// parseFilename extracts the filename from a Content-Disposition header
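// For example `attachment; filename="five.txt.gz"` yields "five.txt.gz".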
func parseFilename(contentDisposition string) (string, error) {
// Normalize the contentDisposition to canonical MIME format
mediaType, params, err := mime.ParseMediaType(contentDisposition)
if err != nil {
return "", fmt.Errorf("failed to parse contentDisposition: %v", err)
}
// Check if the contentDisposition is an attachment
if strings.ToLower(mediaType) != "attachment" {
return "", fmt.Errorf("not an attachment: %s", mediaType)
}
// Extract the filename from the parameters
filename, ok := params["filename"]
if !ok {
return "", fmt.Errorf("filename not found in contentDisposition")
}
// Decode filename if it contains special encoding
return textproto.TrimString(filename), nil
}
// Adds the configured headers to the request if any
func addHeaders(req *http.Request, opt *Options) {
for i := 0; i < len(opt.Headers); i += 2 {
@@ -577,6 +647,9 @@ func (o *Object) String() string {
// Remote the name of the remote HTTP file, relative to the fs root
func (o *Object) Remote() string {
if o.contentDispositionFilename != nil {
return *o.contentDispositionFilename
}
return o.remote
}
@@ -634,6 +707,29 @@ func (o *Object) decodeMetadata(ctx context.Context, res *http.Response) error {
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
o.size = rest.ParseSizeFromHeaders(res.Header)
contentDisposition := res.Header.Get("Content-Disposition")
if contentDisposition != "" {
o.contentDisposition = &contentDisposition
}
if o.contentDisposition != nil {
var filename string
filename, err = parseFilename(*o.contentDisposition)
if err == nil && filename != "" {
o.contentDispositionFilename = &filename
}
}
cacheControl := res.Header.Get("Cache-Control")
if cacheControl != "" {
o.cacheControl = &cacheControl
}
contentEncoding := res.Header.Get("Content-Encoding")
if contentEncoding != "" {
o.contentEncoding = &contentEncoding
}
contentLanguage := res.Header.Get("Content-Language")
if contentLanguage != "" {
o.contentLanguage = &contentLanguage
}
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
@@ -722,11 +818,13 @@ var commandHelp = []fs.CommandHelp{{
Long: `This set command can be used to update the config parameters
for a running http backend.
Usage Examples:
Usage examples:
rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: -o url=https://example.com
` + "```console" + `
rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: -o url=https://example.com
` + "```" + `
The option keys are named as they are in the config file.
@@ -734,8 +832,7 @@ This rebuilds the connection to the http backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
`,
It doesn't return anything.`,
}}
// Command the backend to run a named command
@@ -771,6 +868,30 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
}
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
metadata = make(fs.Metadata, 6)
if o.contentType != "" {
metadata["content-type"] = o.contentType
}
// Set system metadata
setMetadata := func(k string, v *string) {
if v == nil || *v == "" {
return
}
metadata[k] = *v
}
setMetadata("content-disposition", o.contentDisposition)
setMetadata("content-disposition-filename", o.contentDispositionFilename)
setMetadata("cache-control", o.cacheControl)
setMetadata("content-language", o.contentLanguage)
setMetadata("content-encoding", o.contentEncoding)
return metadata, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
@@ -778,4 +899,5 @@ var (
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.Commander = &Fs{}
_ fs.Metadataer = &Object{}
)
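As a usage note, assuming a remote named myhttp configured with type = http and a suitable url, the new metadata keys appear in listings when metadata is requested, for example:

rclone lsjson --metadata myhttp:five.txt.gz

The Metadata field of the output then carries content-type, content-disposition, content-disposition-filename, cache-control, content-language and content-encoding whenever the server sends those headers.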

View File

@@ -60,6 +60,17 @@ func prepareServer(t *testing.T) configmap.Simple {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
// Set the content disposition header for the fifth file
// later we will check if it is set using the metadata method
if r.URL.Path == "/five.txt.gz" {
w.Header().Set("Content-Disposition", "attachment; filename=\"five.txt.gz\"")
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Content-Language", "en-US")
w.Header().Set("Content-Encoding", "gzip")
}
fileServer.ServeHTTP(w, r)
})
@@ -102,27 +113,33 @@ func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
sort.Sort(entries)
require.Equal(t, 4, len(entries))
require.Equal(t, 5, len(entries))
e := entries[0]
assert.Equal(t, "four", e.Remote())
assert.Equal(t, "five.txt.gz", e.Remote())
assert.Equal(t, int64(-1), e.Size())
_, ok := e.(fs.Directory)
_, ok := e.(fs.Object)
assert.True(t, ok)
e = entries[1]
assert.Equal(t, "four", e.Remote())
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
e = entries[2]
assert.Equal(t, "one%.txt", e.Remote())
assert.Equal(t, int64(5+lineEndSize), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
e = entries[2]
e = entries[3]
assert.Equal(t, "three", e.Remote())
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
e = entries[3]
e = entries[4]
assert.Equal(t, "two.html", e.Remote())
if noSlash {
assert.Equal(t, int64(-1), e.Size())
@@ -218,6 +235,23 @@ func TestNewObjectWithLeadingSlash(t *testing.T) {
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestNewObjectWithMetadata(t *testing.T) {
f := prepare(t)
o, err := f.NewObject(context.Background(), "/five.txt.gz")
require.NoError(t, err)
assert.Equal(t, "five.txt.gz", o.Remote())
ho, ok := o.(*Object)
assert.True(t, ok)
metadata, err := ho.Metadata(context.Background())
require.NoError(t, err)
assert.Equal(t, "text/plain; charset=utf-8", metadata["content-type"])
assert.Equal(t, "attachment; filename=\"five.txt.gz\"", metadata["content-disposition"])
assert.Equal(t, "five.txt.gz", metadata["content-disposition-filename"])
assert.Equal(t, "no-cache", metadata["cache-control"])
assert.Equal(t, "en-US", metadata["content-language"])
assert.Equal(t, "gzip", metadata["content-encoding"])
}
func TestOpen(t *testing.T) {
m := prepareServer(t)


View File

@@ -252,18 +252,14 @@ func (d *DriveService) DownloadFile(ctx context.Context, url string, opt []fs.Op
}
resp, err := d.icloud.srv.Call(ctx, opts)
if err != nil {
// icloud has some weird http codes
if resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
// icloud has some weird http codes
if err != nil && resp != nil && resp.StatusCode == 330 {
loc, err := resp.Location()
if err == nil {
return d.DownloadFile(ctx, loc.String(), opt)
}
return resp, err
}
return d.icloud.srv.Call(ctx, opts)
return resp, err
}
// MoveItemToTrashByItemID moves an item to the trash based on the item ID.

View File

@@ -421,6 +421,9 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if src.Size() == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
}
return uploadFile(ctx, f, in, src.Remote(), options...)
}
@@ -659,6 +662,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
if src.Size() == 0 {
return fs.ErrorCantUploadEmptyFiles
}
srcRemote := o.Remote()
@@ -670,7 +676,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var resp *client.UploadResult
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
var res *http.Response
res, resp, err = o.fs.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
@@ -725,7 +731,7 @@ func uploadFile(ctx context.Context, f *Fs, in io.Reader, srcRemote string, opti
UseUniqueFileName := new(bool)
*UseUniqueFileName = false
err := f.pacer.Call(func() (bool, error) {
err := f.pacer.CallNoRetry(func() (bool, error) {
var res *http.Response
var err error
res, _, err = f.ik.Upload(ctx, in, client.UploadParam{
@@ -794,35 +800,10 @@ func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error)
return metadata, nil
}
// Copy src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
file, err := srcObj.Open(ctx)
if err != nil {
return nil, err
}
return uploadFile(ctx, f, file, remote)
}
// Check the interfaces are satisfied.
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.Object = &Object{}
_ fs.Copier = &Fs{}
)

View File

@@ -590,7 +590,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return "", err
}
bucket, bucketPath := f.split(remote)
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, rest.URLPathEscapeAll(bucketPath)), nil
}
// Copy src to this remote using server-side copy operations.
@@ -622,7 +622,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (_ fs.Objec
"x-archive-auto-make-bucket": "1",
"x-archive-queue-derive": "0",
"x-archive-keep-old-version": "0",
"x-amz-copy-source": quotePath(path.Join("/", srcBucket, srcPath)),
"x-amz-copy-source": rest.URLPathEscapeAll(path.Join("/", srcBucket, srcPath)),
"x-amz-metadata-directive": "COPY",
"x-archive-filemeta-sha1": srcObj.sha1,
"x-archive-filemeta-md5": srcObj.md5,
@@ -778,7 +778,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// make a GET request to (frontend)/download/:item/:path
opts := rest.Opts{
Method: "GET",
Path: path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
Path: path.Join("/download/", o.fs.root, rest.URLPathEscapeAll(o.fs.opt.Enc.FromStandardPath(o.remote))),
Options: optionsFixed,
}
err = o.fs.pacer.Call(func() (bool, error) {
@@ -1334,16 +1334,6 @@ func trimPathPrefix(s, prefix string, enc encoder.MultiEncoder) string {
return enc.ToStandardPath(strings.TrimPrefix(s, prefix+"/"))
}
// mimics urllib.parse.quote() on Python; exclude / from url.PathEscape
func quotePath(s string) string {
seg := strings.Split(s, "/")
newValues := []string{}
for _, v := range seg {
newValues = append(newValues, url.PathEscape(v))
}
return strings.Join(newValues, "/")
}
var (
_ fs.Fs = &Fs{}
_ fs.Copier = &Fs{}

View File

@@ -17,6 +17,7 @@ import (
"net/url"
"os"
"path"
"slices"
"strconv"
"strings"
"time"
@@ -59,31 +60,43 @@ const (
configVersion = 1
defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
defaultClientID = "jottacli"
defaultClientID = "jottacli" // Identified as "Jottacloud CLI" in "My logged in devices"
legacyTokenURL = "https://api.jottacloud.com/auth/v1/token"
legacyRegisterURL = "https://api.jottacloud.com/auth/v1/register"
legacyClientID = "nibfk8biu12ju7hpqomr8b1e40"
legacyEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
legacyConfigVersion = 0
teliaseCloudTokenURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/token"
teliaseCloudAuthURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/auth"
teliaseCloudClientID = "desktop"
telianoCloudTokenURL = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/token"
telianoCloudAuthURL = "https://sky-auth.telia.no/auth/realms/get/protocol/openid-connect/auth"
telianoCloudClientID = "desktop"
tele2CloudTokenURL = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/token"
tele2CloudAuthURL = "https://mittcloud-auth.tele2.se/auth/realms/comhem/protocol/openid-connect/auth"
tele2CloudClientID = "desktop"
onlimeCloudTokenURL = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/token"
onlimeCloudAuthURL = "https://cloud-auth.onlime.dk/auth/realms/onlime_wl/protocol/openid-connect/auth"
onlimeCloudClientID = "desktop"
)
type service struct {
key string
name string
domain string
realm string
clientID string
scopes []string
}
// The list of services and their settings for supporting traditional OAuth.
// Please keep these in alphabetical order, but with jottacloud first.
func getServices() []service {
return []service{
{"jottacloud", "Jottacloud", "id.jottacloud.com", "jottacloud", "desktop", []string{"openid", "jotta-default", "offline_access"}}, // Chose client id "desktop" here, will be identified as "Jottacloud for Desktop" in "My logged in devices", but could have used "jottacli" here as well.
{"elgiganten_dk", "Elgiganten Cloud (Denmark)", "cloud.elgiganten.dk", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elgiganten_se", "Elgiganten Cloud (Sweden)", "cloud.elgiganten.se", "elgiganten", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elkjop", "Elkjøp Cloud (Norway)", "cloud.elkjop.no", "elkjop", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"elko", "ELKO Cloud (Iceland)", "cloud.elko.is", "elko", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"gigantti", "Gigantti Cloud (Finland)", "cloud.gigantti.fi", "gigantti", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"letsgo", "Let's Go Cloud (Germany)", "letsgo.jotta.cloud", "letsgo", "desktop-win", []string{"openid", "offline_access"}},
{"mediamarkt", "MediaMarkt Cloud (Multiregional)", "mediamarkt.jottacloud.com", "mediamarkt", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"onlime", "Onlime (Denmark)", "cloud-auth.onlime.dk", "onlime_wl", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"tele2", "Tele2 Cloud (Sweden)", "mittcloud-auth.tele2.se", "comhem", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"telia_no", "Telia Sky (Norway)", "sky-auth.telia.no", "get", "desktop", []string{"openid", "jotta-default", "offline_access"}},
{"telia_se", "Telia Cloud (Sweden)", "cloud-auth.telia.se", "telia_se", "desktop", []string{"openid", "jotta-default", "offline_access"}},
}
}
// Register with Fs
func init() {
// needs to be done early so we can use oauth during config
@@ -159,36 +172,44 @@ func init() {
}
// Config runs the backend configuration protocol
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
switch config.State {
func Config(ctx context.Context, name string, m configmap.Mapper, conf fs.ConfigIn) (*fs.ConfigOut, error) {
switch conf.State {
case "":
return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Select authentication type.`, []fs.OptionExample{{
if isAuthorize, _ := m.Get(config.ConfigAuthorize); isAuthorize == "true" {
return nil, errors.New("not supported by this backend")
}
return fs.ConfigChooseExclusiveFixed("auth_type_done", "config_type", `Type of authentication.`, []fs.OptionExample{{
Value: "standard",
Help: "Standard authentication.\nUse this if you're a normal Jottacloud user.",
Help: `Standard authentication.
This is primarily supported by the official service, but may also be
supported by some white-label services. It is designed for command-line
applications, and you will be asked to enter a single-use personal login
token which you must manually generate from the account security settings
in the web interface of your service.`,
}, {
Value: "traditional",
Help: `Traditional authentication.
This is supported by the official service and all white-label services
that rclone knows about. You will be asked which service to connect to.
It has a limitation of only a single active authentication at a time. You
need to be on, or have access to, a machine with an internet-connected
web browser.`,
}, {
Value: "legacy",
Help: "Legacy authentication.\nThis is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.",
}, {
Value: "telia_se",
Help: "Telia Cloud authentication.\nUse this if you are using Telia Cloud (Sweden).",
}, {
Value: "telia_no",
Help: "Telia Sky authentication.\nUse this if you are using Telia Sky (Norway).",
}, {
Value: "tele2",
Help: "Tele2 Cloud authentication.\nUse this if you are using Tele2 Cloud.",
}, {
Value: "onlime",
Help: "Onlime Cloud authentication.\nUse this if you are using Onlime Cloud.",
Help: `Legacy authentication.
This is no longer supported by any known services and is not recommended.
You will be asked for your account's username and password.`,
}})
case "auth_type_done":
// Jump to next state according to config chosen
return fs.ConfigGoto(config.Result)
return fs.ConfigGoto(conf.Result)
case "standard": // configure a jottacloud backend using the modern JottaCli token based authentication
m.Set("configVersion", fmt.Sprint(configVersion))
return fs.ConfigInput("standard_token", "config_login_token", "Personal login token.\nGenerate here: https://www.jottacloud.com/web/secure")
return fs.ConfigInput("standard_token", "config_login_token", `Personal login token.
Generate it from the account security settings in the web interface of your
service, for the official service on https://www.jottacloud.com/web/secure.`)
case "standard_token":
loginToken := config.Result
loginToken := conf.Result
m.Set(configClientID, defaultClientID)
m.Set(configClientSecret, "")
@@ -203,10 +224,50 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
return nil, fmt.Errorf("error while saving token: %w", err)
}
return fs.ConfigGoto("choose_device")
case "traditional":
services := getServices()
options := make([]fs.OptionExample, 0, len(services))
for _, service := range services {
options = append(options, fs.OptionExample{
Value: service.key,
Help: service.name,
})
}
return fs.ConfigChooseExclusiveFixed("traditional_type", "config_traditional",
"White-label service. This decides the domain name to connect to and\nthe authentication configuration to use.",
options)
case "traditional_type":
services := getServices()
i := slices.IndexFunc(services, func(s service) bool { return s.key == conf.Result })
if i == -1 {
return nil, fmt.Errorf("unexpected service %q", conf.Result)
}
service := services[i]
opts := rest.Opts{
Method: "GET",
RootURL: "https://" + service.domain + "/auth/realms/" + service.realm + "/.well-known/openid-configuration",
}
var wellKnown api.WellKnown
srv := rest.NewClient(fshttp.NewClient(ctx))
_, err := srv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return nil, fmt.Errorf("failed to get authentication provider configuration: %w", err)
}
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, service.clientID)
m.Set(configTokenURL, wellKnown.TokenEndpoint)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: wellKnown.AuthorizationEndpoint,
TokenURL: wellKnown.TokenEndpoint,
ClientID: service.clientID,
Scopes: service.scopes,
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "legacy": // configure a jottacloud backend using legacy authentication
m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
return fs.ConfigConfirm("legacy_api", false, "config_machine_specific", `Do you want to create a machine specific API key?
Rclone has its own Jottacloud API KEY which works fine as long as one
only uses rclone on a single machine. When you want to use rclone with
this account on more than one machine it's recommended to create a
@@ -214,7 +275,7 @@ machine specific API key. These keys can NOT be shared between
machines.`)
case "legacy_api":
srv := rest.NewClient(fshttp.NewClient(ctx))
if config.Result == "true" {
if conf.Result == "true" {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
return nil, fmt.Errorf("failed to register device: %w", err)
@@ -223,16 +284,16 @@ machines.`)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID %q and clientSecret %q", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address)")
return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address) of your account.")
case "legacy_username":
m.Set(configUsername, config.Result)
return fs.ConfigPassword("legacy_password", "config_password", "Password (only used in setup, will not be stored)")
m.Set(configUsername, conf.Result)
return fs.ConfigPassword("legacy_password", "config_password", "Password of your account. This is only used in setup, it will not be stored.")
case "legacy_password":
m.Set("password", config.Result)
m.Set("password", conf.Result)
m.Set("auth_code", "")
return fs.ConfigGoto("legacy_do_auth")
case "legacy_auth_code":
authCode := strings.ReplaceAll(config.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
authCode := strings.ReplaceAll(conf.Result, "-", "") // remove any "-" contained in the code so we have a 6 digit number
m.Set("auth_code", authCode)
return fs.ConfigGoto("legacy_do_auth")
case "legacy_do_auth":
@@ -242,12 +303,12 @@ machines.`)
authCode, _ := m.Get("auth_code")
srv := rest.NewClient(fshttp.NewClient(ctx))
clientID, ok := m.Get(configClientID)
if !ok {
clientID, _ := m.Get(configClientID)
if clientID == "" {
clientID = legacyClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret, _ := m.Get(configClientSecret)
if clientSecret == "" {
clientSecret = legacyEncryptedClientSecret
}
@@ -260,7 +321,7 @@ machines.`)
}
token, err := doLegacyAuth(ctx, srv, oauthConfig, username, password, authCode)
if err == errAuthCodeRequired {
return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification Code\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification code.\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
}
m.Set("password", "")
m.Set("auth_code", "")
@@ -272,58 +333,6 @@ machines.`)
return nil, fmt.Errorf("error while saving token: %w", err)
}
return fs.ConfigGoto("choose_device")
case "telia_se": // telia_se cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, teliaseCloudClientID)
m.Set(configTokenURL, teliaseCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: teliaseCloudAuthURL,
TokenURL: teliaseCloudTokenURL,
ClientID: teliaseCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "telia_no": // telia_no cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, telianoCloudClientID)
m.Set(configTokenURL, telianoCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: telianoCloudAuthURL,
TokenURL: telianoCloudTokenURL,
ClientID: telianoCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "tele2": // tele2 cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, tele2CloudClientID)
m.Set(configTokenURL, tele2CloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: tele2CloudAuthURL,
TokenURL: tele2CloudTokenURL,
ClientID: tele2CloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "onlime": // onlime cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, onlimeCloudClientID)
m.Set(configTokenURL, onlimeCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauthutil.Config{
AuthURL: onlimeCloudAuthURL,
TokenURL: onlimeCloudTokenURL,
ClientID: onlimeCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "choose_device":
return fs.ConfigConfirm("choose_device_query", false, "config_non_standard", `Use a non-standard device/mountpoint?
Choosing no, the default, will let you access the storage used for the archive
@@ -331,7 +340,7 @@ section of the official Jottacloud client. If you instead want to access the
sync or the backup section, for example, you must choose yes.`)
case "choose_device_query":
if config.Result != "true" {
if conf.Result != "true" {
m.Set(configDevice, "")
m.Set(configMountpoint, "")
return fs.ConfigGoto("end")
@@ -372,7 +381,7 @@ a new by entering a unique name.`, defaultDevice)
return deviceNames[i], ""
})
case "choose_device_result":
device := config.Result
device := conf.Result
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
@@ -432,7 +441,7 @@ You may create a new by entering a unique name.`, device)
return dev.MountPoints[i].Name, ""
})
case "choose_device_mountpoint":
mountpoint := config.Result
mountpoint := conf.Result
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
@@ -463,7 +472,7 @@ You may create a new by entering a unique name.`, device)
if isNew {
if device == defaultDevice {
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device: %w", defaultDevice, err)
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device", defaultDevice)
}
fs.Debugf(nil, "Creating new mountpoint: %s", mountpoint)
_, err := createMountPoint(ctx, jfsSrv, path.Join(cust.Username, device, mountpoint))
@@ -478,7 +487,7 @@ You may create a new by entering a unique name.`, device)
// All the config flows end up here in case we need to carry on with something
return nil, nil
}
return nil, fmt.Errorf("unknown state %q", config.State)
return nil, fmt.Errorf("unknown state %q", conf.State)
}
// Options defines the configuration for this backend
@@ -929,12 +938,12 @@ func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuth
oauthConfig.AuthURL = tokenURL
}
} else if ver == legacyConfigVersion {
clientID, ok := m.Get(configClientID)
if !ok {
clientID, _ := m.Get(configClientID)
if clientID == "" {
clientID = legacyClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret, _ := m.Get(configClientSecret)
if clientSecret == "" {
clientSecret = legacyEncryptedClientSecret
}
oauthConfig.ClientID = clientID
@@ -1000,6 +1009,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.ListR = nil
}
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
f.setEndpoints()
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
@@ -1009,13 +1025,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return err
})
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
f.setEndpoints()
if root != "" && !rootIsDir {
// Check to see if the root actually an existing file
remote := path.Base(root)

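The `traditional` flow above does not hard-code OAuth endpoints per service; it fetches them from each service's OpenID Connect discovery document at `https://<domain>/auth/realms/<realm>/.well-known/openid-configuration`. Below is a minimal standalone sketch of that discovery step using only the Go standard library; the domain and realm come from the `getServices` table above, and the JSON field names follow the OpenID Connect discovery specification (the backend itself routes this request through rclone's `rest` client with retries).

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// wellKnown holds the two endpoints the config flow needs from the
// OpenID Connect discovery document.
type wellKnown struct {
	AuthorizationEndpoint string `json:"authorization_endpoint"`
	TokenEndpoint         string `json:"token_endpoint"`
}

func main() {
	// Domain and realm of the official Jottacloud service, as listed above.
	url := "https://id.jottacloud.com/auth/realms/jottacloud/.well-known/openid-configuration"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var wk wellKnown
	if err := json.NewDecoder(resp.Body).Decode(&wk); err != nil {
		panic(err)
	}
	fmt.Println("auth URL: ", wk.AuthorizationEndpoint)
	fmt.Println("token URL:", wk.TokenEndpoint)
}
```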
View File

@@ -461,7 +461,7 @@ func translateErrorsDir(err error) error {
return err
}
// translatesErrorsObject translates Koofr errors to rclone errors (for an object operation)
// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
switch err := err.(type) {
case httpclient.InvalidStatusError:

View File

@@ -497,9 +497,6 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
}
f.dirCache.FlushDir(dir)
if err != nil {
return err
}
return nil
}
@@ -617,16 +614,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
case 1:
// upload file using link from first step
var res *http.Response
var location string
// Check to see if we are being redirected
opts := &rest.Opts{
Method: "HEAD",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
NoRedirect: true,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if res != nil {
location = res.Header.Get("Location")
if location != "" {
// set the URL to the new Location
opts.RootURL = location
err = nil
}
}
if err != nil {
return fmt.Errorf("head upload URL: %w", err)
}
file := io.MultiReader(bytes.NewReader(first10mBytes), in)
opts := &rest.Opts{
Method: "PUT",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
Body: file,
ContentLength: &size,
}
opts.Method = "PUT"
opts.Body = file
opts.ContentLength = &size
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)

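The HEAD request above is issued with `NoRedirect` so that, if the signed upload URL redirects, the subsequent PUT can be sent straight to the `Location` target. A minimal sketch of the same probe with plain `net/http` (the URL passed in is a placeholder for the signed URL returned by the first step):

```go
package main

import (
	"fmt"
	"net/http"
)

// resolveUploadURL sends a HEAD request without following redirects and
// returns the Location header if the server redirected, otherwise the
// original URL.
func resolveUploadURL(signURL string) (string, error) {
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse // report the redirect instead of following it
		},
	}
	res, err := client.Head(signURL)
	if err != nil {
		return "", fmt.Errorf("head upload URL: %w", err)
	}
	defer res.Body.Close()
	if loc := res.Header.Get("Location"); loc != "" {
		return loc, nil // redirected: PUT the data here instead
	}
	return signURL, nil // no redirect: use the original signed URL
}

func main() {
	url, err := resolveUploadURL("https://example.com/signed-upload-url") // placeholder URL
	fmt.Println(url, err)
}
```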
View File

@@ -7,6 +7,7 @@ import (
"errors"
"fmt"
"io"
iofs "io/fs"
"os"
"path"
"path/filepath"
@@ -114,6 +115,17 @@ points, as you explicitly acknowledge that they should be skipped.`,
NoPrefix: true,
Advanced: true,
},
{
Name: "skip_specials",
Help: `Don't warn about skipped pipes, sockets and device objects.
This flag disables warning messages on skipped pipes, sockets and
device objects, as you explicitly acknowledge that they should be
skipped.`,
Default: false,
NoPrefix: true,
Advanced: true,
},
{
Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) (deprecated).
@@ -305,6 +317,12 @@ only useful for reading.
Help: "The last status change time.",
}},
},
{
Name: "hashes",
Help: `Comma separated list of supported checksum types.`,
Default: fs.CommaSepList{},
Advanced: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -321,6 +339,7 @@ type Options struct {
FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"`
SkipSpecials bool `config:"skip_specials"`
UTFNorm bool `config:"unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"`
@@ -331,6 +350,7 @@ type Options struct {
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
TimeType timeType `config:"time_type"`
Hashes fs.CommaSepList `config:"hashes"`
Enc encoder.MultiEncoder `config:"encoding"`
NoClone bool `config:"no_clone"`
}
@@ -664,8 +684,12 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
name := fi.Name()
mode := fi.Mode()
newRemote := f.cleanRemote(dir, name)
symlinkFlag := os.ModeSymlink
if runtime.GOOS == "windows" {
symlinkFlag |= os.ModeIrregular
}
// Follow symlinks if required
if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
if f.opt.FollowSymlinks && (mode&symlinkFlag) != 0 {
localPath := filepath.Join(fsDirPath, name)
fi, err = os.Stat(localPath)
// Quietly skip errors on excluded files and directories
@@ -687,13 +711,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if fi.IsDir() {
// Ignore directories which are symlinks. These are junction points under windows which
// are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
if (mode&symlinkFlag) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
d := f.newDirectory(newRemote, fi)
entries = append(entries, d)
}
} else {
// Check whether this link should be translated
if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
if f.opt.TranslateSymlinks && fi.Mode()&symlinkFlag != 0 {
newRemote += fs.LinkSuffix
}
// Don't include non directory if not included
@@ -830,7 +854,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
} else if !fi.IsDir() {
return fs.ErrorIsFile
}
return os.Remove(localPath)
err := os.Remove(localPath)
if runtime.GOOS == "windows" && errors.Is(err, iofs.ErrPermission) { // https://github.com/golang/go/issues/26295
if os.Chmod(localPath, 0o600) == nil {
err = os.Remove(localPath)
}
}
return err
}
// Precision of the file system
@@ -1021,18 +1051,30 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
if len(f.opt.Hashes) > 0 {
// Return only configured hashes.
// Note: Could have used hash.SupportOnly to limit supported hashes for all hash related features.
var supported hash.Set
for _, hashName := range f.opt.Hashes {
var ht hash.Type
if err := ht.Set(hashName); err != nil {
fs.Infof(nil, "Invalid token %q in hash string %q", hashName, f.opt.Hashes.String())
}
supported.Add(ht)
}
return supported
}
return hash.Supported()
}
var commandHelp = []fs.CommandHelp{
{
Name: "noop",
Short: "A null operation for testing backend commands",
Long: `This is a test command which has some options
you can try to change the output.`,
Short: "A null operation for testing backend commands.",
Long: `This is a test command which has some options you can try to change the output.`,
Opts: map[string]string{
"echo": "echo the input arguments",
"error": "return an error based on option value",
"echo": "Echo the input arguments.",
"error": "Return an error based on option value.",
},
},
}
@@ -1090,6 +1132,10 @@ func (o *Object) Remote() string {
// Hash returns the requested hash of a file as a lowercase hex string
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
if r == hash.None {
return "", nil
}
// Check that the underlying file hasn't changed
o.fs.objectMetaMu.RLock()
oldtime := o.modTime
@@ -1197,13 +1243,23 @@ func (o *Object) Storable() bool {
o.fs.objectMetaMu.RLock()
mode := o.mode
o.fs.objectMetaMu.RUnlock()
if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
// On Windows items with os.ModeIrregular are likely Junction
// points so we treat them as symlinks for the purpose of ignoring them.
// https://github.com/golang/go/issues/73827
symlinkFlag := os.ModeSymlink
if runtime.GOOS == "windows" {
symlinkFlag |= os.ModeIrregular
}
if mode&symlinkFlag != 0 && !o.fs.opt.TranslateSymlinks {
if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links")
}
return false
} else if mode&(os.ModeNamedPipe|os.ModeSocket|os.ModeDevice) != 0 {
fs.Logf(o, "Can't transfer non file/directory")
if !o.fs.opt.SkipSpecials {
fs.Logf(o, "Can't transfer non file/directory")
}
return false
} else if mode&os.ModeDir != 0 {
// fs.Debugf(o, "Skipping directory")

View File

@@ -204,6 +204,23 @@ func TestSymlinkError(t *testing.T) {
assert.Equal(t, errLinksAndCopyLinks, err)
}
func TestHashWithTypeNone(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
const filePath = "file.txt"
r.WriteFile(filePath, "content", time.Now())
f := r.Flocal.(*Fs)
// Get the object
o, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
// Test the hash is as we expect
h, err := o.Hash(ctx, hash.None)
require.Empty(t, h)
require.NoError(t, err)
}
// Test hashes on updating an object
func TestHashOnUpdate(t *testing.T) {
ctx := context.Background()
@@ -317,7 +334,7 @@ func TestMetadata(t *testing.T) {
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
ctx := context.Background()
whenRFC := when.Format(time.RFC3339Nano)
whenRFC := when.Local().Format(time.RFC3339Nano)
const dayLength = len("2001-01-01")
f := r.Flocal.(*Fs)

View File

@@ -0,0 +1,40 @@
//go:build windows
package local
import (
"context"
"path/filepath"
"runtime"
"syscall"
"testing"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// TestRmdirWindows tests that FILE_ATTRIBUTE_READONLY does not block Rmdir on windows.
// Microsoft docs indicate that "This attribute is not honored on directories."
// See https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
// and https://github.com/golang/go/issues/26295
func TestRmdirWindows(t *testing.T) {
if runtime.GOOS != "windows" {
t.Skipf("windows only")
}
r := fstest.NewRun(t)
defer r.Finalise()
err := operations.Mkdir(context.Background(), r.Flocal, "testdir")
require.NoError(t, err)
ptr, err := syscall.UTF16PtrFromString(filepath.Join(r.Flocal.Root(), "testdir"))
require.NoError(t, err)
err = syscall.SetFileAttributes(ptr, uint32(syscall.FILE_ATTRIBUTE_DIRECTORY+syscall.FILE_ATTRIBUTE_READONLY))
require.NoError(t, err)
err = operations.Rmdir(context.Background(), r.Flocal, "testdir")
assert.NoError(t, err)
}

View File

@@ -1,4 +1,4 @@
//go:build dragonfly || plan9 || js
//go:build dragonfly || plan9 || js || aix
package local

View File

@@ -400,7 +400,7 @@ type quirks struct {
}
func (q *quirks) parseQuirks(option string) {
for _, flag := range strings.Split(option, ",") {
for flag := range strings.SplitSeq(option, ",") {
switch strings.ToLower(strings.TrimSpace(flag)) {
case "binlist":
// The official client sometimes uses a so called "bin" protocol,
@@ -634,7 +634,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
return
}
// itemToEntry converts API item to rclone directory entry
// itemToDirEntry converts API item to rclone directory entry
// The dirSize return value is:
//
// <0 - for a file or in case of error
@@ -1770,7 +1770,7 @@ func (f *Fs) parseSpeedupPatterns(patternString string) (err error) {
f.speedupAny = false
uniqueValidPatterns := make(map[string]any)
for _, pattern := range strings.Split(patternString, ",") {
for pattern := range strings.SplitSeq(patternString, ",") {
pattern = strings.ToLower(strings.TrimSpace(pattern))
if pattern == "" {
continue

View File

@@ -17,9 +17,12 @@ Improvements:
import (
"context"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
"io"
"net/http"
"path"
"slices"
"strings"
@@ -45,6 +48,9 @@ const (
maxSleep = 2 * time.Second
eventWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
sessionIDConfigKey = "session_id"
masterKeyConfigKey = "master_key"
)
var (
@@ -68,6 +74,24 @@ func init() {
Help: "Password.",
Required: true,
IsPassword: true,
}, {
Name: "2fa",
Help: `The 2FA code of your MEGA account if the account is set up with one`,
Required: false,
}, {
Name: sessionIDConfigKey,
Help: "Session (internal use only)",
Required: false,
Advanced: true,
Sensitive: true,
Hide: fs.OptionHideBoth,
}, {
Name: masterKeyConfigKey,
Help: "Master key (internal use only)",
Required: false,
Advanced: true,
Sensitive: true,
Hide: fs.OptionHideBoth,
}, {
Name: "debug",
Help: `Output more debug from Mega.
@@ -111,6 +135,9 @@ Enabling it will increase CPU usage and add network overhead.`,
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
TwoFA string `config:"2fa"`
SessionID string `config:"session_id"`
MasterKey string `config:"master_key"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
UseHTTPS bool `config:"use_https"`
@@ -207,6 +234,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
ci := fs.GetConfig(ctx)
// Create Fs
root = parsePath(root)
f := &Fs{
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
DuplicateFiles: true,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
// cache *mega.Mega on username so we can reuse and share
// them between remotes. They are expensive to make as they
// contain all the objects and sharing the objects makes the
@@ -216,7 +256,25 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
defer megaCacheMu.Unlock()
srv := megaCache[opt.User]
if srv == nil {
srv = mega.New().SetClient(fshttp.NewClient(ctx))
// srv = mega.New().SetClient(fshttp.NewClient(ctx))
// Workaround for Mega's use of insecure cipher suites which are no longer supported by default since Go 1.22.
// Relevant issues:
// https://github.com/rclone/rclone/issues/8565
// https://github.com/meganz/webclient/issues/103
clt := fshttp.NewClient(ctx)
clt.Transport = fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
// Insecure but Mega uses TLS_RSA_WITH_AES_128_GCM_SHA256 for storage endpoints
// (e.g. https://gfs302n114.userstorage.mega.co.nz) as of June 18, 2025.
t.TLSClientConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
})
srv = mega.New().SetClient(clt)
srv.SetRetries(ci.LowLevelRetries) // let mega do the low level retries
srv.SetHTTPS(opt.UseHTTPS)
srv.SetLogger(func(format string, v ...any) {
@@ -228,25 +286,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
}
err := srv.Login(opt.User, opt.Pass)
if err != nil {
return nil, fmt.Errorf("couldn't login: %w", err)
if opt.SessionID == "" {
fs.Debugf(f, "Using username and password to initialize the Mega API")
err := srv.MultiFactorLogin(opt.User, opt.Pass, opt.TwoFA)
if err != nil {
return nil, fmt.Errorf("couldn't login: %w", err)
}
megaCache[opt.User] = srv
m.Set(sessionIDConfigKey, srv.GetSessionID())
encodedMasterKey := base64.StdEncoding.EncodeToString(srv.GetMasterKey())
m.Set(masterKeyConfigKey, encodedMasterKey)
} else {
fs.Debugf(f, "Using previously stored session ID and master key to initialize the Mega API")
decodedMasterKey, err := base64.StdEncoding.DecodeString(opt.MasterKey)
if err != nil {
return nil, fmt.Errorf("couldn't decode master key: %w", err)
}
err = srv.LoginWithKeys(opt.SessionID, decodedMasterKey)
if err != nil {
fs.Debugf(f, "login with previous auth keys failed: %v", err)
}
}
megaCache[opt.User] = srv
}
root = parsePath(root)
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: srv,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
DuplicateFiles: true,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
f.srv = srv
// Find the root node and check if it is a file or not
_, err = f.findRoot(ctx, false)
@@ -926,9 +988,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
return nil, fmt.Errorf("failed to get Mega Quota: %w", err)
}
usage := &fs.Usage{
Total: fs.NewUsageValue(int64(q.Mstrg)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(q.Cstrg)), // bytes in use
Free: fs.NewUsageValue(int64(q.Mstrg - q.Cstrg)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(q.Mstrg), // quota of bytes that can be used
Used: fs.NewUsageValue(q.Cstrg), // bytes in use
Free: fs.NewUsageValue(q.Mstrg - q.Cstrg), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}

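The change above performs the full username/password (plus optional 2FA) login only once, then stores the session ID and the base64-encoded master key in the config so later runs can resume the session with `LoginWithKeys`. A rough sketch of that flow using the go-mega calls visible in the diff; the import path and credentials are assumptions for illustration, not rclone's actual wiring:

```go
package main

import (
	"encoding/base64"
	"log"

	mega "github.com/t3rm1n4l/go-mega" // assumed import path for the library used here
)

func main() {
	// First run: full login, then capture reusable credentials.
	srv := mega.New()
	if err := srv.MultiFactorLogin("user@example.com", "password", ""); err != nil { // placeholder credentials
		log.Fatalf("couldn't login: %v", err)
	}
	sessionID := srv.GetSessionID()
	masterKey := base64.StdEncoding.EncodeToString(srv.GetMasterKey())

	// Later run: skip the password login and resume the stored session.
	srv2 := mega.New()
	key, err := base64.StdEncoding.DecodeString(masterKey)
	if err != nil {
		log.Fatalf("couldn't decode master key: %v", err)
	}
	if err := srv2.LoginWithKeys(sessionID, key); err != nil {
		log.Fatalf("login with stored keys failed: %v", err)
	}
	log.Println("resumed Mega session without a password login")
}
```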
View File

@@ -325,13 +325,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
// listDir lists the bucket to the entries
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
// List the objects and directories
err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
entries = append(entries, entry)
return nil
return callback(entry)
})
return entries, err
return err
}
// listBuckets lists the buckets to entries
@@ -354,15 +353,46 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err)
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucket, directory := f.split(dir)
if bucket == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -629,6 +659,7 @@ var (
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
)

View File

@@ -87,7 +87,7 @@ Please choose the 'y' option to set your own password then enter your secret.`,
var commandHelp = []fs.CommandHelp{{
Name: "du",
Short: "Return disk usage information for a specified directory",
Short: "Return disk usage information for a specified directory.",
Long: `The usage information returned includes the targeted directory as well as all
files stored in any sub-directories that may exist.`,
}, {
@@ -96,7 +96,12 @@ files stored in any sub-directories that may exist.`,
Long: `The desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
` + "`rclone backend symlink <src> <path>`",
Usage example:
` + "```console" + `
rclone backend symlink <src> <path>
` + "```",
},
}

View File

@@ -243,7 +243,6 @@ func (m *Metadata) Get(ctx context.Context) (metadata fs.Metadata, err error) {
func (m *Metadata) Set(ctx context.Context, metadata fs.Metadata) (numSet int, err error) {
numSet = 0
for k, v := range metadata {
k, v := k, v
switch k {
case "mtime":
t, err := time.Parse(timeFormatIn, v)
@@ -422,12 +421,7 @@ func (m *Metadata) orderPermissions(xs []*api.PermissionsType) {
if hasUserIdentity(p.GetGrantedTo(m.fs.driveType)) {
return true
}
for _, identity := range p.GetGrantedToIdentities(m.fs.driveType) {
if hasUserIdentity(identity) {
return true
}
}
return false
return slices.ContainsFunc(p.GetGrantedToIdentities(m.fs.driveType), hasUserIdentity)
}
// Put Permissions with a user first, leaving unsorted otherwise
slices.SortStableFunc(xs, func(a, b *api.PermissionsType) int {
@@ -749,6 +743,8 @@ func (o *Object) fetchMetadataForCreate(ctx context.Context, src fs.ObjectInfo,
// Fetch metadata and update updateInfo if --metadata is in use
// modtime will still be set when there is no metadata to set
//
// May return info=nil and err=nil if there was no metadata to update.
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *Object) (info *api.Item, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
@@ -768,6 +764,8 @@ func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, opti
}
// updateMetadata calls Get, Set, and Write
//
// May return info=nil and err=nil if there was no metadata to update.
func (o *Object) updateMetadata(ctx context.Context, meta fs.Metadata) (info *api.Item, err error) {
_, err = o.meta.Get(ctx) // refresh permissions
if err != nil {

View File

@@ -56,6 +56,7 @@ const (
driveTypeSharepoint = "documentLibrary"
defaultChunkSize = 10 * fs.Mebi
chunkSizeMultiple = 320 * fs.Kibi
maxSinglePartSize = 4 * fs.Mebi
regionGlobal = "global"
regionUS = "us"
@@ -138,6 +139,21 @@ func init() {
Help: "Azure and Office 365 operated by Vnet Group in China",
},
},
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
This is disabled by default as uploading using single part uploads
causes rclone to use twice the storage on Onedrive business as when
rclone sets the modification time after the upload Onedrive creates a
new version.
See: https://github.com/rclone/rclone/issues/1716
`,
Default: fs.SizeSuffix(-1),
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
@@ -746,6 +762,7 @@ Examples:
// Options defines the configuration for this backend
type Options struct {
Region string `config:"region"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DriveID string `config:"drive_id"`
DriveType string `config:"drive_type"`
@@ -1022,6 +1039,13 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
return
}
func checkUploadCutoff(cs fs.SizeSuffix) error {
if cs > maxSinglePartSize {
return fmt.Errorf("%v is greater than %v", cs, maxSinglePartSize)
}
return nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
@@ -1035,6 +1059,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("onedrive: chunk size: %w", err)
}
err = checkUploadCutoff(opt.UploadCutoff)
if err != nil {
return nil, fmt.Errorf("onedrive: upload cutoff: %w", err)
}
if opt.DriveID == "" || opt.DriveType == "" {
return nil, errors.New("unable to get drive_id and drive_type - if you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
@@ -1349,9 +1377,27 @@ func (f *Fs) itemToDirEntry(ctx context.Context, dir string, info *api.Item) (en
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
return err
}
err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) error {
entry, err := f.itemToDirEntry(ctx, dir, info)
@@ -1361,13 +1407,16 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if entry == nil {
return nil
}
entries = append(entries, entry)
err = list.Add(entry)
if err != nil {
return err
}
return nil
})
if err != nil {
return nil, err
return err
}
return entries, nil
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1754,7 +1803,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if info != nil {
err = dstObj.setMetaData(info)
}
return dstObj, err
}
@@ -1834,7 +1885,9 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if info != nil {
err = dstObj.setMetaData(info)
}
return dstObj, err
}
@@ -2469,6 +2522,10 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
return false, nil
}
return true, fmt.Errorf("retry this chunk skipping %d bytes: %w", skip, err)
} else if err != nil && resp != nil && resp.StatusCode == http.StatusNotFound {
fs.Debugf(o, "Received 404 error: assuming eventual consistency problem with session - retrying chunk: %v", err)
time.Sleep(5 * time.Second) // a little delay to help things along
return true, err
}
if err != nil {
return shouldRetry(ctx, resp, err)
@@ -2563,8 +2620,8 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, src fs.Objec
// This function will set modtime and metadata after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (info *api.Item, err error) {
size := src.Size()
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4 MiB")
if size < 0 || size > int64(maxSinglePartSize) {
return nil, fmt.Errorf("size passed into uploadSinglepart must be >= 0 and <= %v", maxSinglePartSize)
}
fs.Debugf(o, "Starting singlepart upload")
@@ -2597,7 +2654,10 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.Obje
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
if info != nil {
err = o.setMetaData(info)
}
return info, err
}
// Update the object with the contents of the io.Reader, modTime and size
@@ -2617,9 +2677,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
size := src.Size()
var info *api.Item
if size > 0 {
if size > 0 && size >= int64(o.fs.opt.UploadCutoff) {
info, err = o.uploadMultipart(ctx, in, src, options...)
} else if size == 0 {
} else if size >= 0 {
info, err = o.uploadSinglepart(ctx, in, src, options...)
} else {
return errors.New("unknown-sized upload not supported")
@@ -2984,6 +3044,7 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}

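To make the new `upload_cutoff` routing in `Update` concrete: with the default of `-1` every non-empty file takes the chunked path, while a positive cutoff (capped at 4 MiB by `checkUploadCutoff`) sends small files through the single-part upload. A tiny self-contained sketch of that decision (the helper name is made up for illustration):

```go
package main

import "fmt"

// chooseUpload mirrors the size check in Update above; it is a sketch,
// not the backend's actual code.
func chooseUpload(size, uploadCutoff int64) string {
	switch {
	case size > 0 && size >= uploadCutoff:
		return "multipart (chunked) upload"
	case size >= 0:
		return "single-part upload"
	default:
		return "unknown-sized upload: not supported"
	}
}

func main() {
	fmt.Println(chooseUpload(100, -1))          // chunked: cutoff disabled by default
	fmt.Println(chooseUpload(100, 4*1024*1024)) // single-part: below a 4 MiB cutoff
	fmt.Println(chooseUpload(0, -1))            // single-part: empty file
}
```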
View File

@@ -172,8 +172,8 @@ func BenchmarkQuickXorHash(b *testing.B) {
require.NoError(b, err)
require.Equal(b, len(buf), n)
h := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
for b.Loop() {
h.Reset()
h.Write(buf)
h.Sum(nil)

View File

@@ -30,20 +30,25 @@ const (
var commandHelp = []fs.CommandHelp{{
Name: operationRename,
Short: "change the name of an object",
Short: "change the name of an object.",
Long: `This command can be used to rename an object.
Usage Examples:
Usage example:
rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
`,
` + "```console" + `
rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
` + "```",
Opts: nil,
}, {
Name: operationListMultiPart,
Short: "List the unfinished multipart uploads",
Short: "List the unfinished multipart uploads.",
Long: `This command lists the unfinished multipart uploads in JSON format.
rclone backend list-multipart-uploads oos:bucket/path/to/object
Usage example:
` + "```console" + `
rclone backend list-multipart-uploads oos:bucket/path/to/object
` + "```" + `
It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.
@@ -51,70 +56,82 @@ multipart uploads.
You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.
{
"test-bucket": [
{
"namespace": "test-namespace",
"bucket": "test-bucket",
"object": "600m.bin",
"uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
"timeCreated": "2022-07-29T06:21:16.595Z",
"storageTier": "Standard"
}
]
`,
` + "```json" + `
{
"test-bucket": [
{
"namespace": "test-namespace",
"bucket": "test-bucket",
"object": "600m.bin",
"uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
"timeCreated": "2022-07-29T06:21:16.595Z",
"storageTier": "Standard"
}
]
}`,
}, {
Name: operationCleanup,
Short: "Remove unfinished multipart uploads.",
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
Note that you can use --interactive/-i or --dry-run with this command to see
what it would do.
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
Usage examples:
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
` + "```console" + `
rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
` + "```" + `
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
"max-age": "Max age of upload to delete.",
},
}, {
Name: operationRestore,
Short: "Restore objects from Archive to Standard storage",
Long: `This command can be used to restore one or more objects from Archive to Standard storage.
Short: "Restore objects from Archive to Standard storage.",
Long: `This command can be used to restore one or more objects from Archive to
Standard storage.
Usage Examples:
Usage examples:
rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
rclone backend restore oos:bucket -o hours=HOURS
` + "```console" + `
rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
rclone backend restore oos:bucket -o hours=HOURS
` + "```" + `
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```console" + `
rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```" + `
All the objects shown will be marked for restore, then
All the objects shown will be marked for restore, then:
rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```console" + `
rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```" + `
It returns a list of status dictionaries with Object Name and Status
keys. The Status will be "RESTORED"" if it was successful or an error message
if not.
It returns a list of status dictionaries with Object Name and Status keys.
The Status will be "RESTORED"" if it was successful or an error message if not.
[
{
"Object": "test.txt"
"Status": "RESTORED",
},
{
"Object": "test/file4.txt"
"Status": "RESTORED",
}
]
`,
` + "```json" + `
[
{
"Object": "test.txt"
"Status": "RESTORED",
},
{
"Object": "test/file4.txt"
"Status": "RESTORED",
}
]
` + "```",
Opts: map[string]string{
"hours": "The number of hours for which this object will be restored. Default is 24 hrs.",
"hours": `The number of hours for which this object will be restored.
Default is 24 hrs.`,
},
},
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
"time"
"github.com/ncw/swift/v2"
"github.com/oracle/oci-go-sdk/v65/common"
"github.com/oracle/oci-go-sdk/v65/objectstorage"
"github.com/rclone/rclone/fs"
@@ -33,9 +34,46 @@ func init() {
NewFs: NewFs,
CommandHelp: commandHelp,
Options: newOptions(),
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as opc-meta- keys.`,
},
})
}
var systemMetadataInfo = map[string]fs.MetadataHelp{
"opc-meta-mode": {
Help: "File type and mode",
Type: "octal, unix style",
Example: "0100664",
},
"opc-meta-uid": {
Help: "User ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-gid": {
Help: "Group ID of owner",
Type: "decimal number",
Example: "500",
},
"opc-meta-atime": {
Help: "Time of last access",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-mtime": {
Help: "Time of last modification",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
"opc-meta-btime": {
Help: "Time of file birth (creation)",
Type: "ISO 8601",
Example: "2025-06-30T22:27:43-04:00",
},
}
// Fs represents a remote object storage server
type Fs struct {
name string // name of this remote
@@ -82,6 +120,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMetadata: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
@@ -215,15 +254,47 @@ func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
bucketName, directory := f.split(dir)
fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir)
if bucketName == "" {
if directory != "" {
return nil, fs.ErrorListBucketRequired
return fs.ErrorListBucketRequired
}
entries, err := f.listBuckets(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "", list.Add)
if err != nil {
return err
}
return f.listBuckets(ctx)
}
return f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "")
return list.Flush()
}
// listFn is called from list to handle an object.
@@ -372,24 +443,24 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *objectst
}
// listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
return callback(entry)
}
return nil
}
err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn)
if err != nil {
return nil, err
return err
}
// bucket must be present if listing succeeded
f.cache.MarkOK(bucket)
return entries, nil
return nil
}
// listBuckets returns all the buckets to out
@@ -688,12 +759,45 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return list.Flush()
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
err = o.readMetaData(ctx)
if err != nil {
return nil, err
}
metadata = make(fs.Metadata, len(o.meta)+7)
for k, v := range o.meta {
switch k {
case metaMtime:
if modTime, err := swift.FloatStringToTime(v); err == nil {
metadata["mtime"] = modTime.Format(time.RFC3339Nano)
}
case metaMD5Hash:
// don't write hash metadata
default:
metadata[k] = v
}
}
if o.mimeType != "" {
metadata["content-type"] = o.mimeType
}
if !o.lastModified.IsZero() {
metadata["btime"] = o.lastModified.Format(time.RFC3339Nano)
}
return metadata, nil
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Copier = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Commander = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.OpenChunkWriter = &Fs{}

View File

@@ -378,12 +378,20 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
// OpenWriterAt opens with a handle for random access writes
// XOpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
// It truncates any existing object.
//
// OpenWriterAt disabled because it seems to have been disabled at pcloud
// PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1
//
// {
// "result": 2003,
// "error": "Access denied. You do not have permissions to perform this operation."
// }
func (f *Fs) XOpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
client, err := f.newSingleConnClient(ctx)
if err != nil {
return nil, fmt.Errorf("create client: %w", err)
@@ -621,11 +629,31 @@ func (f *Fs) listHelper(ctx context.Context, dir string, recursive bool, callbac
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
list := list.NewHelper(callback)
err = f.listHelper(ctx, dir, false, func(o fs.DirEntry) error {
entries = append(entries, o)
return nil
return list.Add(o)
})
return entries, err
if err != nil {
return err
}
return list.Flush()
}
// ListR lists the objects and directories of the Fs starting
@@ -1369,6 +1397,8 @@ var (
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.ListPer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)

View File

@@ -5,6 +5,7 @@ package api
import (
"fmt"
"net/url"
"reflect"
"strconv"
"time"
@@ -136,8 +137,25 @@ type Link struct {
}
// Valid reports whether l is non-nil, has an URL, and is not expired.
// It primarily checks the URL's expire query parameter, falling back to the Expire field.
func (l *Link) Valid() bool {
return l != nil && l.URL != "" && time.Now().Add(10*time.Second).Before(time.Time(l.Expire))
if l == nil || l.URL == "" {
return false
}
// Primary validation: check URL's expire query parameter
if u, err := url.Parse(l.URL); err == nil {
if expireStr := u.Query().Get("expire"); expireStr != "" {
// Try parsing as Unix timestamp (seconds)
if expireInt, err := strconv.ParseInt(expireStr, 10, 64); err == nil {
expireTime := time.Unix(expireInt, 0)
return time.Now().Add(10 * time.Second).Before(expireTime)
}
}
}
// Fallback validation: use the Expire field if URL parsing didn't work
return time.Now().Add(10 * time.Second).Before(time.Time(l.Expire))
}
// URL is a basic form of URL

View File

@@ -0,0 +1,99 @@
package api
import (
"fmt"
"testing"
"time"
)
// TestLinkValid tests the Link.Valid method for various scenarios
func TestLinkValid(t *testing.T) {
tests := []struct {
name string
link *Link
expected bool
desc string
}{
{
name: "nil link",
link: nil,
expected: false,
desc: "nil link should be invalid",
},
{
name: "empty URL",
link: &Link{URL: ""},
expected: false,
desc: "empty URL should be invalid",
},
{
name: "valid URL with future expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
},
expected: true,
desc: "URL with future expire parameter should be valid",
},
{
name: "expired URL with past expire parameter",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
},
expected: false,
desc: "URL with past expire parameter should be invalid",
},
{
name: "URL expire parameter takes precedence over Expire field",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)), // Fallback is expired
},
expected: true,
desc: "URL expire parameter should take precedence over Expire field",
},
{
name: "URL expire parameter within 10 second buffer should be invalid",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(5*time.Second).Unix()),
},
expected: false,
desc: "URL expire parameter within 10 second buffer should be invalid",
},
{
name: "fallback to Expire field when no URL expire parameter",
link: &Link{
URL: "https://example.com/file",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL has no expire parameter",
},
{
name: "fallback to Expire field when URL expire parameter is invalid",
link: &Link{
URL: "https://example.com/file?expire=invalid",
Expire: Time(time.Now().Add(time.Hour)),
},
expected: true,
desc: "should fallback to Expire field when URL expire parameter is unparsable",
},
{
name: "invalid when both URL expire and Expire field are expired",
link: &Link{
URL: fmt.Sprintf("https://example.com/file?expire=%d", time.Now().Add(-time.Hour).Unix()),
Expire: Time(time.Now().Add(-time.Hour)),
},
expected: false,
desc: "should be invalid when both URL expire and Expire field are expired",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := tt.link.Valid()
if result != tt.expected {
t.Errorf("Link.Valid() = %v, expected %v. %s", result, tt.expected, tt.desc)
}
})
}
}
