mirror of https://github.com/rclone/rclone.git synced 2026-01-08 19:43:58 +00:00

Compare commits


154 Commits

Nick Craig-Wood
1ebbc74f1d nfsmount: compile for all unix oses, add --sudo and fix error/option handling
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount
- improve error reporting
- fix option handling
2023-12-05 10:44:53 +00:00
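A minimal usage sketch for the new flag (the remote name and mount point are illustrative, not from the commit):

```
# Mount via the NFS mechanism, running the mount/umount binaries with sudo.
# remote: and /mnt/nfs are placeholders.
rclone nfsmount remote: /mnt/nfs --sudo
```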
Nick Craig-Wood
aee787d33e serve nfs: Mark as experimental 2023-12-05 10:44:53 +00:00
Anagh Kumar Baranwal
298c13e719 systemd: Fix detection and switch to the coreos package everywhere
rather than having 2 separate libraries

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-12-02 14:17:15 +00:00
Nick Craig-Wood
f0c774156e onedrive: fix error listing: unknown object type <nil>
This error was introduced in this commit when refactoring the list
routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2023-12-02 10:49:15 +00:00
Nick Craig-Wood
08c460dd1a Add ben-ba to contributors 2023-12-02 10:49:15 +00:00
ben-ba
e3d0bff9ca docs: fix typo in docs.md
- OpenChunkedWriter
+ OpenChunkWriter
2023-12-01 20:45:48 +01:00
Nick Craig-Wood
caf5dd9d5e mount: notice daemon dying much quicker
Before this change we waited until the timeout to check whether the
daemon was alive.

Now we check it every 100ms like we do the mount status.

This also fixes compiling on all platforms, which was broken by the
previous change:

9bfbf2a4a mount: fix macOS not noticing errors with --daemon

See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-12-01 09:36:05 +00:00
Nick Craig-Wood
97d7945cef Add halms to contributors 2023-12-01 09:36:05 +00:00
Manoj Ghosh
9061e81850 multipart copy: create bucket if it doesn't exist. 2023-11-29 15:47:56 +00:00
halms
58339845f4 smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library now can pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2023-11-29 15:39:27 +00:00
Nick Craig-Wood
4d4f3de5a5 s3: add --s3-version-deleted to show delete markers in listings when using versions.
See: https://forum.rclone.org/t/s3-object-deletion-times/42781
2023-11-29 09:44:40 +00:00
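A hedged example of how the new flag might be combined with `--s3-versions` (bucket and path are placeholders):

```
# List all versions including delete markers.
rclone lsl --s3-versions --s3-version-deleted s3remote:bucket/path
```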
Nick Craig-Wood
9bfbf2a4ae mount: fix macOS not noticing errors with --daemon
See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-11-28 19:42:00 +00:00
Nick Craig-Wood
96f8b7c827 install.sh: fix harmless error message on install
This was caused by trying to write to a non-existent file, and
changing the order of the cleanup fixed it.

https://forum.rclone.org/t/rclone-v1-65-0-release/43100/18
2023-11-28 19:10:04 +00:00
Nick Craig-Wood
85f142a206 Start v1.66.0-DEV development 2023-11-26 17:14:38 +00:00
Nick Craig-Wood
82b963e372 Version v1.65.0 2023-11-26 16:07:39 +00:00
Nick Craig-Wood
74d5477fad onedrive: add --onedrive-delta flag to enable ListR
Before this change ListR was unconditionally enabled on onedrive.

This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.

Fixes #7362
2023-11-26 16:06:49 +00:00
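A sketch of opting back in to ListR after this change, assuming a remote named `onedrive:`:

```
# Enable the delta-based ListR for this invocation only.
rclone lsf --fast-list --onedrive-delta onedrive:path
```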
Nick Craig-Wood
b5857f0bf8 smb: fix modtime of multithread uploads by setting PartialUploads
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.

Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.

This problem was detected by the new TestMultithreadCopy integration
tests

Fixes #7411
2023-11-25 18:46:48 +00:00
Nick Craig-Wood
edb5ccdd0b smb: fix about size wrong by switching to github.com/cloudsoda/go-smb2/ fork
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.

This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.

The new library passes the integration tests.

Fixes #6733
2023-11-25 18:45:41 +00:00
Nick Craig-Wood
0244caf13a serve s3: fix overwriting files with a 0 length file
Before this change overwriting an existing file with a 0 length file
didn't update the file size.

This change corrects the issue and makes sure the file is truncated
properly.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
aaa897337d serve s3: fix error handling for listing non-existent prefix - fixes #7455
Before this change serve s3 would return NoSuchKey errors when a
non-existent prefix was listed.

This change fixes it to return an empty list like AWS does.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
e7c002adef test_all: make integration test for serve s3 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
9e62a74a23 Add Abhinav Dhiman to contributors 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
a10abf9934 Add 你知道未来吗 to contributors 2023-11-24 20:47:06 +00:00
Abhinav Dhiman
36eb3cd660 imagekit: Added ImageKit backend 2023-11-24 18:18:01 +00:00
你知道未来吗
fd2322cb41 fs/fshttp: fix --contimeout being ignored
The following command will block for 60s (the default) when the network is slow or unavailable:

```
rclone  --contimeout 10s --low-level-retries 0 lsd dropbox:
```

This change will make it timeout after the expected 10s.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-11-24 17:53:33 +00:00
Nick Craig-Wood
4eed3ae99a s3: ensure we can set upload cutoff that we use for Rclone provider
This is a workaround to make the new multipart upload integration
tests pass.
2023-11-24 16:32:06 +00:00
Nick Craig-Wood
d8855b21eb serve s3: document multipart copy doesn't work #7454
This puts in a workaround for the tests also
2023-11-24 15:49:33 +00:00
Nick Craig-Wood
8f47b6746d b2: fix streaming chunked files an exact multiple of chunk size
Before this change, streaming files an exact multiple of the chunk
size would cause rclone to attempt to stream a 0 sized chunk which was
rejected by the b2 servers.

This bug was noticed by the new integration tests for chunked streaming.
2023-11-24 14:32:01 +00:00
Nick Craig-Wood
cc2a4c2e20 fstest: factor chunked streaming tests from b2 and use in all backends 2023-11-24 12:58:40 +00:00
Nick Craig-Wood
fabeb8e44e b2: fix server side chunked copy when file size was exactly --b2-copy-cutoff
Before this change the b2 servers would complain as this was only a
single part transfer.

This was noticed by the new integration tests for server side chunked copy.
2023-11-24 12:37:11 +00:00
Nick Craig-Wood
c27977d4d5 fstest: factor chunked copy tests from b2 and use them in s3 and oos 2023-11-24 12:37:11 +00:00
Nick Craig-Wood
d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.

This caused the partial file to be written which would overwrite any
existing files.

This was fixed by making sure we Abort the transfer before Close-ing
it.

This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.

This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).

Fixes #7071
2023-11-24 11:19:58 +00:00
Nick Craig-Wood
94ccc95515 random: stop using deprecated rand.Seed in go1.20 and later 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
5d5473c8a5 random: speed up String function for generating larger blocks 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
251a8e3c39 hash: allow runtime configuration of supported hashes for testing 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
a259226eb2 Add Alen Šiljak to contributors 2023-11-24 11:19:58 +00:00
Alen Šiljak
5fba502516 http: enable methods used with WebDAV - fixes #7444
Without this, requests like PROPFIND, issued from a browser, fail.
2023-11-23 16:49:03 +00:00
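A rough way to check the fix, assuming a local `rclone serve webdav` instance on port 8080 (addresses are illustrative):

```
# Serve a remote over WebDAV, then issue a PROPFIND as a browser-based client would.
rclone serve webdav remote: --addr :8080 &
curl -X PROPFIND -H "Depth: 1" http://localhost:8080/
```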
Nick Craig-Wood
ba11040d6b s3: detect looping when using gcs and versions
Apparently gcs doesn't return an S3 compatible result when using
versions.

In particular it doesn't return a NextKeyMarker - this means rclone
loops and fetches the same page over and over again.

This patch detects the problem and stops the infinite retries but it
doesn't fix the underlying problem.

See: https://forum.rclone.org/t/list-s3-versions-files-looping-bug/42974
See: https://issuetracker.google.com/u/0/issues/312292516
2023-11-23 09:50:28 +00:00
Nick Craig-Wood
668711e432 dropbox: fix missing encoding for rclone purge again
This commit fixed the problem but made the integration tests fail.

33376bf399 dropbox: fix missing encoding for rclone purge

This fixes the problem properly by making sure we send the encoded or
non encoded root to the right places.
2023-11-21 12:23:28 +00:00
Nick Craig-Wood
a71d181cb0 test_all: limit the Zoho tests to just the backend
The free account has a very ungenerous limit of 1000 API calls per day
and the full integration test suite breaches that, so limit the
integration tests to just the backend.
2023-11-21 12:06:31 +00:00
Nick Craig-Wood
cab42107f7 test_all: remove uptobox from integration tests
The uptobox service hasn't been running since 20 September 2023.

This removes it from the integration tests to save noise.
2023-11-21 11:49:39 +00:00
Nick Craig-Wood
1f9a79ef09 operations: use less memory when doing multithread uploads
For uploads coming from disk, going to disk, or going to a backend
which doesn't need to seek except for retries, this doesn't buffer the
input.

This dramatically reduces rclone's memory usage.

Fixes #7350
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
c0fb9ebfce operations: make Open() return an io.ReadSeekCloser #7350
As part of reducing memory usage in rclone, we need to have a raw
handle to an object we can seek with.
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
e8fcde8de1 fs: add ChunkWriterDoesntSeek feature flag and set it for b2 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
72dfdd97d8 mockobject: fix SetUnknownSize method to obey parameter passed in 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
bb88b8499b box: fix performance problem reading metadata for single files
Before this change the backend used to list the directory to find the
metadata for a single file. For lots of files in a directory this
caused a serious performance problem.

This change uses the preflight check to check for a file's existence
and find its ID.

See: https://forum.rclone.org/t/psa-box-com-has-serious-performance-issues-in-directories-with-thousands-of-files/41128/10
See: https://forum.box.com/t/is-there-an-api-to-find-a-file-by-leaf-name-given-a-folder-id/997/
See: https://developer.box.com/guides/uploads/check/
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4ac5cb07ca gcs: fix 400 Bad request errors when using multi-thread copy
Before this change, on every Open, we added the userProject parameter
to the URL in the object.

This meant it grew and grew until Google returned Error 400 (Bad
Request) errors when the URL became too long.

This fixes the problem by adding the userProject parameter once.

See: https://forum.rclone.org/t/endlessly-repeating-userproject-parameter-in-get-to-google-storage-context-canceled-got-http-response-code-400/42652
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4a3e9bbabf http: implement set backend command to update running backend
See: https://forum.rclone.org/t/updating-the-url-of-http-remote-not-applied-on-mounts/42763
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
33376bf399 dropbox: fix missing encoding for rclone purge
This was causing directories with encodable characters in their names
not to be found on purge.

See: https://forum.rclone.org/t/purge-command-does-not-work-on-directories-with-files/42793
2023-11-20 18:07:05 +00:00
asdffdsazqqq
94b7c49196 Update docs to show the SMB remote supports modtime 2023-11-20 17:50:28 +00:00
albertony
a7faf05393 docs: cleanup backend hashes sections 2023-11-20 17:43:57 +00:00
albertony
98a96596df docs: replace mod-time with modtime 2023-11-20 17:43:57 +00:00
Nick Craig-Wood
88bd80c1fa march: Fix excessive parallelism when using --no-traverse
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.

The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.

This became obvious when using the sftp backend which used too many
connections.

Fixes #5824
2023-11-20 17:36:31 +00:00
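For context, a hedged example of the flags involved; after the fix the NewObject concurrency should be bounded by `--checkers` rather than its square:

```
# With --no-traverse, each potential destination object triggers a NewObject call.
rclone copy --no-traverse --checkers 8 /local/dir remote:dir
```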
Nick Craig-Wood
c6755aa768 Add Mina Galić to contributors 2023-11-20 17:36:31 +00:00
Mina Galić
01be5c75be Makefile: use POSIX compatible install arguments
install -t doesn't exist on BSD.
Flip the arguments since we only have one.
2023-11-20 15:01:26 +00:00
Jacob Hands
20bd17f107 install.sh: Clean up temp files in install script 2023-11-20 15:00:08 +00:00
Nick Craig-Wood
64ec5709fe drive: fix integration tests by enabling metadata support from the context
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.

This patch changes it so the Metadata support is enabled dynamically
if it is set in the context.

This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
1ea8678be2 fstests: make sure Metadata is enabled in the context for metadata tests 2023-11-19 12:48:27 +00:00
Nick Craig-Wood
8341de05c6 Refresh CONTRIBUTING.md
- add dos and don'ts section to writing a new backend
- bring markdown up to modern style
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
47ca0c326e fs: implement --metadata-mapper to transform metadata with a user supplied program 2023-11-18 17:49:35 +00:00
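A minimal sketch of how the flag might be invoked; `./mapper.py` is a hypothetical user-supplied program, not something shipped with rclone:

```
# ./mapper.py is a placeholder program that rewrites the metadata.
rclone copy --metadata --metadata-mapper ./mapper.py source: dest:
```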
Nick Craig-Wood
54196f34e3 drive: fix error updating created time metadata on existing object
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.

This changes skips btime metadata if we are updating an existing
object but allows it otherwise.
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
9fdf3d548a drive: add read/write metadata support
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work controlled by flags
    - `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
2023-11-18 17:49:35 +00:00
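A hedged invocation sketch; the `read,write` values are assumptions about how these Bits-style flags are set:

```
# Copy with metadata, opting in to writing labels and permissions.
rclone copy --metadata --drive-metadata-labels read,write \
    --drive-metadata-permissions read,write drive:src drive:dst
```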
Nick Craig-Wood
10774d297a Add moongdal to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
bf9053705d Add viktor to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
0bd059ec55 Add karan to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
59d363b3c1 Add Oksana Zhykina to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
94a5de58c8 linkbox: pre-merge fixes
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism
2023-11-18 17:14:45 +00:00
viktor
a466ababd0 backend: add Linkbox backend
Add backend for linkbox.io with read and write capabilities

fixes #6960 #6629
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
168d577297 vfs: error out early if can't upload 0 length file
Before this change if a backend can't upload 0 length files and
`--vfs-cache-mode writes` was in use then the writeback logic would
try to upload the 0 length file forever.

This change causes it to exit on the first failure to upload.
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
ddaf01ece9 azurefiles: finish docs and implementation and add optional interfaces
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size`
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
    - About
    - Copy
    - DirMove
    - Move
    - OpenWriterAt
    - PutStream
- finish documentation
- disable build on plan9 and js

Fixes #365
Fixes #7378
2023-11-18 16:48:23 +00:00
karan
b5301e03a6 Implement Azure Files backend
Co-authored-by: moongdal <moongdal@tutanota.com>
2023-11-18 16:42:13 +00:00
Dimitri Papadopoulos
e9763552f7 fs: fix a typo in a comment 2023-11-16 17:15:00 +00:00
Oksana Zhykina
6b60e09ff2 quatrix: overwrite files on conflict during server-side move 2023-11-16 17:14:00 +00:00
Oksana Zhykina
41a52f50df quatrix: add partial upload support 2023-11-16 17:14:00 +00:00
Nick Craig-Wood
93f35c915a serve s3: pre-merge tweaks
- Changes
    - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
    - Enable `Content-MD5` integrity checks
    - Remove locking after code audit
- Documentation
    - Factor out documentation into separate file
    - Add Quickstart to docs
    - Add Bugs section to docs
    - Add experimental tag to docs
    - Add rclone provider to s3 backend docs
- Fixes
    - Correct quirks in s3 backend
    - Change fmt.Printlns into fs.Logs
    - Make metadata storage per backend not global
    - Log on startup if anonymous access is enabled
- Coding style fixes
    - rename fs to vfs to save confusion with the rest of rclone code
    - rename db to b for *s3Backend

Fixes #7062
2023-11-16 16:59:56 +00:00
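A quickstart sketch under the renamed flag, assuming the `access_key,secret_key` pair format (credentials are placeholders):

```
# Serve remote:path as an S3 endpoint; clients authenticate with the given pair.
rclone serve s3 --auth-key ACCESS_KEY,SECRET_KEY remote:path
```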
Nick Craig-Wood
a2c4f07a57 Add Saw-jan to contributors 2023-11-16 16:59:56 +00:00
Saw-jan
d3dcc61154 serve s3: fixes before merge
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
34ef5147aa Add Artur Neumann to contributors 2023-11-16 16:59:56 +00:00
Artur Neumann
aa29742be2 serve s3: fix file name encoding using s3 serve with mc client
Using the mc (minio) client, file encodings were wrong;
see Mikubill/gofakes3#2 for details
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
ef366b47f1 Add Mikubill to contributors 2023-11-16 16:59:55 +00:00
Mikubill
23abac2a59 serve s3: let rclone act as an S3 compatible server 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
d3ba32c43e s3: add --s3-disable-multipart-uploads flag 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
cdf5a97bb6 bin/update_authors.py: add authors from Co-authored-by: lines too 2023-11-16 16:59:55 +00:00
albertony
e1b0417c28 size: don't show duplicate object count when less than 1k 2023-11-14 16:44:12 +00:00
Nick Craig-Wood
acf1e2df84 lib/file: fix MkdirAll after go1.21.4 stdlib update
In this security-related issue the go1.21.4 stdlib changed the parsing
of volume names on Windows.

https://github.com/golang/go/issues/63713

This had the consequences of breaking the MkdirAll tests which were
looking for specific error messages which changed and using invalid
paths.

In particular under go1.21.3:

    filepath.VolumeName(`\\?\C:`) == `\\?\C:`

But under go1.21.4 it is:

    filepath.VolumeName(`\\?\C:`) == `\\?`

The path `\\?\C:` isn't actually a valid Windows path. I reported this
as a FYI bug upstream - I'm not expecting it to be fixed.

See: https://github.com/golang/go/issues/64101
2023-11-14 09:47:46 +00:00
Nick Craig-Wood
831d1df67f docs: factor large docs into separate .md files to make them easier to maintain.
We then use the go embed command to embed them back into the binary.
2023-11-13 16:27:09 +00:00
Nick Craig-Wood
e67157cf46 Add Tayo-pasedaRJ to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
ac012618db Add Adithya Kumar to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
7f09d9c2a0 Add wuxingzhong to contributors 2023-11-13 16:27:09 +00:00
Tayo-pasedaRJ
0548e61910 hdfs: added support for list of namenodes in hdfs remote config
Users can now input a comma-separated list of namenodes when writing
config for hdfs remotes.

This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.

This was available before but wasn't documented and didn't use the
correct rclone interfaces.
2023-11-13 15:55:52 +00:00
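A config sketch with illustrative hostnames:

```
# Comma-separated namenodes cover active/standby failover.
rclone config create myhdfs hdfs namenode "namenode-1:8020,namenode-2:8020"
```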
Adithya Kumar
ad83ff769b webdav: added an rclone vendor to work with rclone serve webdav
Fixes #7160
2023-11-05 12:37:25 +00:00
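A hedged pairing of the two sides (the URL is illustrative):

```
# Point a webdav remote with the new vendor at an rclone serve webdav instance.
rclone config create served webdav url http://localhost:8080/ vendor rclone
```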
albertony
ca14b00b34 docs: show hashsum arguments as optional in usage string 2023-11-03 23:31:00 +01:00
albertony
52d444f4a9 docs: document how to build with version info and icon resources on windows 2023-11-01 12:44:04 +01:00
albertony
4506f35f2e build: refactor version info and icon resource handling on windows
This makes it easier to add resources with any build method, and also when
building librclone.dll.

Goversioninfo is now used as a library, instead of running it as a tool.
2023-11-01 12:44:04 +01:00
wuxingzhong
4ab57eb90b serve dlna: fix crash on graceful exit
Before this change, closing an uninitialised chan would cause a crash.
2023-10-31 16:44:25 +00:00
Nick Craig-Wood
23ab6fa3a0 operations: fix server side copies on partial upload backends after refactor
After the copy refactor:

179f978f75 operations: refactor Copy into methods on a temporary object

There was some confusion in the code about server side copies - should
they or shouldn't they use partials?

This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.

This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
2023-10-30 16:50:19 +00:00
Nick Craig-Wood
af8ba18580 mount: disable mount for freebsd
The upstream library rclone uses for rclone mount no longer supports
freebsd. Not only is it broken, but it no longer compiles.

This patch disables rclone mount for freebsd.

However all is not lost for freebsd users - compiling rclone with the
`cmount` tag, e.g. `go install -tags cmount`, will install a working
`rclone mount` command which uses cgofuse and the libfuse C library
directly.

Note that the binaries from rclone.org will not have mount support as
we don't have a freebsd build machine in CI and it is very hard to
cross compile cmount.

See: https://github.com/bazil/fuse/issues/280
Fixes #5843
2023-10-29 15:46:41 +00:00
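A sketch of the workaround mentioned above; cgo and libfuse are assumed to be installed on the build machine:

```
# Build rclone with the cgofuse-based mount on freebsd.
CGO_ENABLED=1 go install -tags cmount github.com/rclone/rclone@latest
```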
Nick Craig-Wood
0b90dd23c1 build: update all dependencies 2023-10-29 15:46:38 +00:00
Nick Craig-Wood
e64be7652a operations: fix invalid UTF-8 when truncating file names when not using --inplace
Before this change, when not using --inplace, rclone could generate
invalid file names when truncating file names to fit within the
character size limits.

This fixes it by taking care to truncate on UTF-8 character
boundaries.

See: https://forum.rclone.org/t/ssh-fx-failure-when-copying-file-with-nonstandard-characters-to-sftp-remote-with-ntfs-drive/42560/
2023-10-29 14:04:37 +00:00
Nick Craig-Wood
179f978f75 operations: refactor Copy into methods on a temporary object
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
2023-10-29 14:04:37 +00:00
Nick Craig-Wood
17b7ee1f3a operations: factor Copy into its own file 2023-10-29 14:04:37 +00:00
dependabot[bot]
5c73363b16 build(deps): bump google.golang.org/grpc from 1.56.2 to 1.56.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.56.2 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.56.2...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-28 18:31:35 +01:00
Nick Craig-Wood
bf21db0ac4 b2: fix multi-thread upload with copyto going to wrong name
See: https://forum.rclone.org/t/errors-and-failure-with-big-file-upload-to-b2/42522/
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
0180301b3f fstests: add integration test for OpenChunkWriter uploading to the wrong name 2023-10-28 15:18:00 +01:00
Nick Craig-Wood
adfb1f7c7d b2: fix error handler to remove confusing DEBUG messages
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:

    Couldn't decode error response: EOF

This is confusing as it is expected in normal operations and isn't an
error.

This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.

This also upgrades failure to read the body or failure to decode the
JSON to ERROR messages as now we are certain that we should have
something to read and decode.
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
6092fe2aaa s3: emit a debug message if anonymous credentials are in use
This can indicate the user is expecting `env_auth=true` to be the
default so we say that in the debug message.

See: https://forum.rclone.org/t/rclone-with-amazon-s3-access-point/42411
2023-10-27 16:00:47 +01:00
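For reference, a hedged way to make the expectation explicit rather than relying on a default (remote name is a placeholder):

```
# Pick up credentials from the environment/IAM instead of anonymous access.
rclone config create mys3 s3 provider AWS env_auth true
```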
Nick Craig-Wood
53868ef4e1 ncdu: fix crash when re-entering changed directory after rescan
ncdu stores the position that it was in for each directory. However
doing a rescan can cause those positions to be out of range if the
number of files decreased in a directory. When re-entering the
directory, this causes an index out of range error.

This fixes the problem by detecting the index out of range and
flushing the saved directory position.

See: https://forum.rclone.org/t/slice-bounds-out-of-range-during-ncdu/42492/
2023-10-24 14:26:57 +01:00
Nick Craig-Wood
e1ad467009 fs: fix docs for Bits 2023-10-23 15:43:55 +01:00
Nick Craig-Wood
12db7b6935 fs: add IsSet convenience method to Bits 2023-10-23 15:43:42 +01:00
Nick Craig-Wood
7434ad8618 docs: remove third party logos from source tree 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
e4ab59bcc7 docs: update Storj image and link 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
9119c6c76f Add alfish2000 to contributors 2023-10-23 15:35:25 +01:00
alfish2000
9d4d294793 union: fix documentation 2023-10-21 10:37:43 +01:00
Nick Craig-Wood
750ed556a5 build: fix new lint errors with golangci-lint v1.55.0 2023-10-20 18:53:30 +01:00
Nick Craig-Wood
5b0d3d060f selfupdate: make sure we don't run tests if selfupdate is set 2023-10-20 18:14:27 +01:00
Nick Craig-Wood
5b0f9dc4e3 local: fix copying from Windows Volume Shadows
For some files the Windows Volume Shadow Copy Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.

Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.

This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we can now copy it
successfully.

See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
2023-10-19 16:38:10 +01:00
Nick Craig-Wood
b0a87d7cf1 Changelog updates from Version 1.64.2 2023-10-19 12:34:34 +01:00
Nick Craig-Wood
37d786c82a selfupdate: fix "invalid hashsum signature" error
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.

However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and then the
`change-usage` subcommand to remove, save, quit, then re-add, save,
quit the signing capabilities caused the key to work.

This also adds tests for the verification and adds the selfupdate
tests into the integration test harness as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.

Fixes #7373
2023-10-18 17:55:19 +01:00
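A rough outline of the interactive repair described above (KEYID is a placeholder):

```
# Toggle the usage flags off and back on so they are written to the key.
gpg --edit-key KEYID
# at the gpg> prompt: change-usage (remove signing), save, quit,
# then repeat: change-usage (re-add signing), save, quit.
```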
Nick Craig-Wood
56fe12c479 build: add the serve docker tests to the integration tester
These had been disabled on CI for being unreliable, so test them in
the integration tests framework which will retry them.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
9197180610 build: fix docker build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
f4a538371d Add Ivan Yanitra to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
f2ec08cba2 Add Keigo Imai to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
8f25531b7f Add Gabriel Espinoza to contributors 2023-10-18 17:55:10 +01:00
Ivan Yanitra
0ee6d0b4bf azureblob: add support for cold tier 2023-10-18 17:54:25 +01:00
Keigo Imai
4ac4597afb drive: add a note that --drive-scope accepts comma-separated list of scopes 2023-10-18 17:54:08 +01:00
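An illustrative invocation with two scopes (the scope names here are assumptions based on the drive docs):

```
# Multiple scopes may be supplied as a comma-separated list.
rclone config create gdrive drive scope "drive.readonly,drive.metadata.readonly"
```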
Joda Stößer
143df6f6d2 docs: change authors email for SimJoSt 2023-10-18 16:31:15 +01:00
Nick Craig-Wood
8264ba987b Changelog updates from Version 1.64.1 2023-10-17 18:37:04 +01:00
Gabriel Espinoza
7a27d9a192 lib/http: export basic go strings functions
makes the following Go strings functions available to be used in custom templates: contains, hasPrefix, hasSuffix

added documentation for exported funcs
2023-10-16 19:46:19 +01:00
albertony
195ad98311 docs: update documentation for --fast-list adding info about ListR 2023-10-16 18:11:22 +02:00
Nick Craig-Wood
29baa5888f mount: fix automount not detecting drive is ready
With automount the target mount drive appears twice in /proc/self/mountinfo.

    379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
    566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...

Before this fix we only looked for the mount once in
/proc/self/mountinfo. It finds the automount line and since this
doesn't have fs type rclone it concludes the mount isn't ready yet.

This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.

See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
2023-10-16 12:13:20 +01:00
Nick Craig-Wood
c7a2719fac sftp: implement --sftp-copy-is-hardlink to server side copy as hardlink
If the server does not support hardlinks then it falls back to normal
copy.

See: https://forum.rclone.org/t/sftp-remote-server-side-copy/41867
2023-10-16 12:08:22 +01:00
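A hedged example within a single sftp remote (paths are placeholders):

```
# Server-side copy implemented as a hardlink where the server supports it;
# falls back to a normal copy otherwise.
rclone copyto --sftp-copy-is-hardlink sftp:dir/a.bin sftp:dir/b.bin
```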
Nick Craig-Wood
c190b9b14f serve sftp: return not supported error for not supported commands
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.

This changes any unknown operations (including hardlink) to return an
unsupported error.
2023-10-16 12:08:22 +01:00
Nick Craig-Wood
5fa68e9ca5 b2: fix chunked streaming uploads
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.

After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing the first and the second parts up which
was causing corrupted uploads.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

Fixes #7367
2023-10-13 15:46:36 +01:00
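A sketch of the streaming path that was affected (bucket and file names are illustrative):

```
# rcat streams stdin, so inputs that are exact multiples of the chunk size
# exercise the multipart streaming code path fixed here.
tar czf - /data | rclone rcat b2:bucket/backup.tar.gz
```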
Nick Craig-Wood
b9727cc6ab build: upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset
Vulnerability #1: GO-2023-2102

HTTP/2 rapid reset can cause excessive work in net/http

More info: https://pkg.go.dev/vuln/GO-2023-2102
2023-10-12 17:44:16 +01:00
Nick Craig-Wood
d8d76ff647 b2: fix server side copies greater than 4GB
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
2023-10-12 11:19:56 +01:00
Nick Craig-Wood
5afa838457 cmd: Make --progress output logs in the same format as without
See: https://forum.rclone.org/t/using-progress-change-dates-from-2023-10-05-to-2023-10-05/42173
2023-10-11 11:36:31 +01:00
Nick Craig-Wood
2de084944b operations: fix error message on delete to have file name - fixes #7355 2023-10-11 11:34:11 +01:00
Vitor Gomes
48a8bfa6b3 operations: fix OpenOptions ignored in copy if operation was a multiThreadCopy 2023-10-11 11:19:03 +01:00
Nick Craig-Wood
d3ce795c30 build: fix docker beta build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-10 15:59:07 +01:00
Nick Craig-Wood
c04657cd4c Add Volodymyr to contributors 2023-10-10 15:59:07 +01:00
Volodymyr
6255d9dfaa operations: implement --partial-suffix to control extension of temporary file names 2023-10-10 12:27:32 +01:00
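A minimal sketch, assuming the flag replaces the default `.partial` extension:

```
# Temporary files during transfer get the .tmp suffix instead of .partial.
rclone copy --partial-suffix .tmp /src remote:dst
```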
Nick Craig-Wood
f56ea2bee2 s3: fix no error being returned when creating a bucket we don't own
Before this change if you tried to create a bucket that already
existed but someone else owned, rclone did not return an error.

This will now return an error on providers that return the
AlreadyOwnedByYou error code, and no error on creation of an existing
bucket owned by you.

This introduces a new provider quirk and this has been set or cleared
for as many providers as can be tested. This can be overridden by the
--s3-use-already-exists flag.

Fixes #7351
2023-10-09 18:15:02 +01:00
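A hedged example of overriding the new quirk; the `=false` form is an assumption about how the tristate flag is set:

```
# Override the per-provider default for interpreting bucket-creation responses.
rclone mkdir --s3-use-already-exists=false s3remote:bucket
```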
Nick Craig-Wood
d6ba60c04d oracleobjectstorage: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:13:42 +01:00
Vitor Gomes
37eaa3682a s3: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:12:56 +01:00
Nick Craig-Wood
c5f6fc3283 drive: add --drive-show-all-gdocs to allow unexportable gdocs to be server side copied
Before this change, attempting to server side copy a google form would
give this error

    No export formats found for "application/vnd.google-apps.form"

Adding this flag allows the form to be server side copied but not
downloaded.

Fixes #6302
2023-10-09 16:53:03 +01:00
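A sketch of the server-side copy this enables (remote paths are illustrative):

```
# Forms can't be exported or downloaded, but with the flag they can be
# server-side copied within drive.
rclone copy --drive-show-all-gdocs drive:Forms drive:FormsBackup
```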
Nick Craig-Wood
4daf755da0 Add Saleh Dindar to contributors 2023-10-09 16:53:03 +01:00
Nick Craig-Wood
eee8ad5146 Add Beyond Meat to contributors 2023-10-09 16:53:03 +01:00
Saleh Dindar
bcb3289dad nfsmount: documentation for new NFS mount feature for macOS 2023-10-06 14:08:20 +01:00
Saleh Dindar
ef2ef8ef84 nfsmount: New mount command to provide mount mechanism on macOS without FUSE
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it to the target dir.

The NFS server is started on localhost on a random port so it is reasonably secure.

Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```

Added mount tests:
```
go test ./cmd/nfsmount
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
c69cf46f06 serve nfs: new serve nfs command
Summary:
Adding a new command to serve any remote over NFS. This is only useful for new macOS versions where FUSE mounts are not available.
 * Added willscott/go-nfs dependency and updated go.mod and go.sum

Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```

Test that it is serving correctly by mounting the NFS directory.

```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```

Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
25f59b2918 vfs: Add go-billy dependency and make sure vfs.Handle implements billy.File
billy defines a common file system interface that is used in multiple go packages.
vfs.Handle implements billy.File mostly, only two methods needed to be added to
make it compliant.

An interface check is added as well.

This is a preliminary work for adding serve nfs command.
2023-10-06 14:08:20 +01:00
Saleh Dindar
7801b160f2 vfs: [bugfix] Update dir modification time
A subtle bug where dir modification time is not updated when the dir already exists
in the cache. It is only noticeable when some clients use dir modification time to
invalidate cache.
2023-10-06 14:08:20 +01:00
Saleh Dindar
23f8dea182 vfs: [bugfix] Implement Name() method in WriteFileHandle and ReadFileHandle
Name() method was originally left out and defaulted to the base
class which always returns empty. This triggered incorrect behavior
in serve nfs where it relied on the Name() of the interface to figure
out what file it was modifying.

This method is copied from RWFileHandle struct.

Added extra assert in the tests.
2023-10-06 14:08:20 +01:00
Beyond Meat
3337fe31c7 vfs: add --vfs-refresh flag to read all the directories on start
Refreshes the directory listing recursively at VFS start time.
2023-10-06 13:11:09 +01:00
270 changed files with 29094 additions and 14436 deletions


@@ -216,7 +216,6 @@ jobs:
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
if [[ "${{ matrix.os }}" == "windows-latest" ]]; then make release_dep_windows ; fi
make ci_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}


@@ -10,6 +10,15 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
@@ -42,7 +51,10 @@ jobs:
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
with:
@@ -54,8 +66,12 @@ jobs:
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .


@@ -10,6 +10,15 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
@@ -39,6 +48,15 @@ jobs:
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:

.gitignore (vendored)

@@ -14,4 +14,7 @@ fuzz-build.zip
*.rej
Thumbs.db
__pycache__
.DS_Store
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

View File

@@ -1,8 +1,8 @@
# Contributing to rclone #
# Contributing to rclone
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
## Reporting a bug
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
@@ -12,13 +12,13 @@ When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
* Rclone version (e.g. output from `rclone version`)
* Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a new feature or bug fix ##
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
@@ -73,9 +73,9 @@ This is typically enough if you made a simple bug fix, otherwise please read the
Make sure you
* Add [unit tests](#testing) for a new feature.
* Add [documentation](#writing-documentation) for a new feature.
* [Commit your changes](#committing-your-changes) using the [message guideline](#commit-messages).
- Add [unit tests](#testing) for a new feature.
- Add [documentation](#writing-documentation) for a new feature.
- [Commit your changes](#committing-your-changes) using the [commit message guidelines](#commit-messages).
When you are done with that push your changes to GitHub:
@@ -88,9 +88,9 @@ Your changes will then get reviewed and you might get asked to fix some stuff. I
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
## Using Git and GitHub ##
## Using Git and GitHub
### Committing your changes ###
### Committing your changes
Follow the guideline for [commit messages](#commit-messages) and then:
@@ -107,7 +107,7 @@ You can modify the message or changes in the latest commit using:
If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits ###
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
@@ -115,7 +115,7 @@ Your previously pushed commits are replaced by:
git push --force origin my-new-feature
### Basing your changes on the latest master ###
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
@@ -149,13 +149,21 @@ If you squash commits that have been pushed to GitHub, then you will have to [re
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
### GitHub Continuous Integration ###
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
## Testing ##
## Testing
### Quick testing ###
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests as get run in the CI which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding standards. These tests also check for easy mistakes to make (like forgetting to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
@@ -168,7 +176,7 @@ You can also use `make`, if supported by your platform
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
### Backend testing ###
### Backend testing
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
@@ -203,7 +211,7 @@ project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
### Full integration testing ###
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
@@ -218,55 +226,56 @@ The commands may require some extra go packages which you can install with
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##
## Code Organisation
Rclone code is organised into a small number of top level directories
with modules beneath.
* backend - the rclone backends for interfacing to cloud providers -
* all - import this to load all the cloud providers
* ...providers
* bin - scripts for use while building or maintaining rclone
* cmd - the rclone commands
* all - import this to load all the commands
* ...commands
* cmdtest - end-to-end tests of commands, flags, environment variables,...
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto-generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead
* config - manage the config file and flags
* driveletter - detect if a name is a drive letter
* filter - implements include/exclude filtering
* fserrors - rclone specific error handling
* fshttp - http handling for rclone
* fspath - path handling for rclone
* hash - defines rclone's hash types and functions
* list - list a remote
* log - logging facilities
* march - iterates directories in lock step
* object - in memory Fs objects
* operations - primitives for sync, e.g. Copy, Move
* sync - sync directories
* walk - walk a directory
* fstest - provides integration test framework
* fstests - integration tests for the backends
* mockdir - mocks an fs.Directory
* mockobject - mocks an fs.Object
* test_all - Runs integration tests for everything
* graphics - the images used in the website, etc.
* lib - libraries used by the backend
* atexit - register functions to run when rclone exits
* dircache - directory ID to name caching
* oauthutil - helpers for using oauth
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only - everything else is autogenerated
- command - these are auto-generated - edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
## Writing Documentation
If you are adding a new feature then please update the documentation.
@@ -277,22 +286,22 @@ alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
* Start with the most important information about the option,
- Start with the most important information about the option,
as a single sentence on a single line.
* This text will be used for the command-line flag help.
* It will be combined with other information, such as any default value,
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
* It should end with a period/full stop character, which will be shown
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
* Try to keep it below 80 characters, to reduce text wrapping in the terminal.
* More details can be added in a new paragraph, after an empty line (`"\n\n"`).
* Like with docs generated from Markdown, a single line break is ignored
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
* This text will be shown to the user in `rclone config`
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
* To create options of enumeration type use the `Examples:` field.
* Each example value have their own `Help:` field, but they are treated
- To create options of enumeration type use the `Examples:` field.
- Each example value have their own `Help:` field, but they are treated
a bit different than the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of
@@ -312,12 +321,12 @@ combined unmodified with other information (such as any default value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
## Making a release ##
## Making a release
There are separate instructions for making a release in the RELEASE.md
file.
## Commit messages ##
## Commit messages
Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
@@ -358,7 +367,7 @@ error fixing the hang.
Fixes #1498
```
## Adding a dependency ##
## Adding a dependency
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
@@ -370,7 +379,7 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
GO111MODULE=on go get github.com/ncw/new_dependency
go get github.com/ncw/new_dependency
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
@@ -378,15 +387,15 @@ go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.
## Updating a dependency ##
## Updating a dependency
If you need to update a dependency then run
GO111MODULE=on go get -u golang.org/x/crypto
go get golang.org/x/crypto
Check in a single commit as above.
## Updating all the dependencies ##
## Updating all the dependencies
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
@@ -395,7 +404,7 @@ stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
## Updating a backend ##
## Updating a backend
If you update a backend then please run the unit tests and the
integration tests for that backend.
@@ -410,76 +419,82 @@ integration tests.
The next section goes into more detail about the tests.
## Writing a new backend ##
## Writing a new backend
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
### Research
* Look at the interfaces defined in `fs/types.go`
* Study one or more of the existing remotes
- Look at the interfaces defined in `fs/types.go`
- Study one or more of the existing remotes
Getting going
### Getting going
* Create `backend/remote/remote.go` (copy this from a similar remote)
* box is a good one to start from if you have a directory-based remote
* b2 is a good one to start from if you have a bucket-based remote
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
Important:
### Guidelines for a speedy merge
* Please use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend. It makes maintenance much easier.
* If your backend is HTTP based then please use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST-like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP-based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything! (See the sketch after this list.)
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **Don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
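
As a sketch of the **Do** items above (the API root and the `/about` endpoint are invented for illustration), this is roughly how a backend wires up `fs/fshttp` and `lib/rest`:

```go
// Sketch only: api.example.com and /about are made up.
package remote

import (
	"context"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

// Fs holds the REST client used for all API calls.
type Fs struct {
	srv *rest.Client
}

// newSrv builds a rest.Client on top of rclone's HTTP client, which
// honours --dump bodies, --tpslimit, --user-agent and friends for free.
func newSrv(ctx context.Context) *rest.Client {
	return rest.NewClient(fshttp.NewClient(ctx)).SetRoot("https://api.example.com/v1")
}

// about shows the usual pattern: fill in rest.Opts and decode the JSON reply.
func (f *Fs) about(ctx context.Context) (result map[string]interface{}, err error) {
	opts := rest.Opts{
		Method: "GET",
		Path:   "/about",
	}
	_, err = f.srv.CallJSON(ctx, &opts, nil, &result)
	return result, err
}
```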
Unit tests
### Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Create a `backend/remote/remote_test.go` - copy and adjust your example remote
* Make sure all tests pass with `go test -v`
- Create a config entry called `TestRemote` for the unit tests to use
- Create a `backend/remote/remote_test.go` - copy and adjust your example remote (see the sketch after this list)
- Make sure all tests pass with `go test -v`
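
A sketch of the test file, following the pattern the existing backends use (compare the `azurefiles` test later in this diff):

```go
// Test Remote filesystem interface - a sketch of backend/remote/remote_test.go
package remote

import (
	"testing"

	"github.com/rclone/rclone/fstest/fstests"
)

// TestIntegration runs the whole fstests suite against TestRemote:
func TestIntegration(t *testing.T) {
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestRemote:",
		NilObject:  (*Object)(nil),
	})
}
```

Note that `(*Object)(nil)` assumes your backend's object type is called `Object`, as it is in most backends.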
Integration tests
### Integration tests
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backends remote
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
- go install ./...
- test_all -backends remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from
### Backend documentation
Add your backend to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* make sure this has the `autogenerated options` comments in (see your reference backend docs)
* update them in your backend with `bin/make_backend_docs.py remote`
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/_index.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
- `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
@@ -524,13 +539,13 @@ in the names so if these fail and the provider doesn't support
For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone/rclone/commit/eb3082a1ebdb76d5625f14cedec3f5154a5e7b10).
## Writing a plugin ##
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
Usage
### Usage
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
@@ -545,7 +560,7 @@ Usage
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
Building
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
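
A hedged sketch of the layout (the module path and backend name are invented):

```go
// Built with: go build -buildmode=plugin -o librcloneplugin_backend_myremote.so
// The plugin is just a main package that pulls in the backend, whose
// init() registers it with fs.Register.
package main

import (
	_ "example.com/you/myremote" // hypothetical out-of-tree backend package
)

func main() {} // never run - plugins are loaded, not executed
```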

MANUAL.html (generated, 5974 lines): diff suppressed because it is too large
MANUAL.md (generated, 5553 lines): diff suppressed because it is too large
MANUAL.txt (generated, 5500 lines): diff suppressed because it is too large


@@ -30,6 +30,7 @@ ifdef RELEASE_TAG
TAG := $(RELEASE_TAG)
endif
GO_VERSION := $(shell go version)
GO_OS := $(shell go env GOOS)
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
@@ -46,7 +47,13 @@ endif
.PHONY: rclone test_all vars version
rclone:
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
@@ -102,10 +109,6 @@ build_dep:
release_dep_linux:
go install github.com/goreleaser/nfpm/v2/cmd/nfpm@latest
# Get the release dependencies we only install on Windows
release_dep_windows:
GOOS="" GOARCH="" go install github.com/josephspurrier/goversioninfo/cmd/goversioninfo@latest
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@@ -150,7 +153,7 @@ rcdocs: rclone
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
install ${GOPATH}/bin/rclone ${DESTDIR}/usr/bin
clean:
go clean ./...


@@ -53,12 +53,14 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)


@@ -41,12 +41,15 @@ Early in the next release cycle update the dependencies
* Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make
* make GOTAGS=cmount
* make compiletest
* git commit -a -v
* make update
* make
* make GOTAGS=cmount
* make compiletest
* roll back any updates which didn't compile
* git commit -a -v --amend
* **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -90,6 +93,13 @@ Now
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
## Sponsor logos
If updating the website note that the sponsor logos have been moved out of the main repository.
You will need to checkout `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release


@@ -1 +1 @@
v1.65.0
v1.66.0


@@ -6,6 +6,7 @@ import (
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
@@ -24,9 +25,11 @@ import (
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/imagekit"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/linkbox"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"


@@ -295,10 +295,10 @@ avoid the time out.`,
Advanced: true,
}, {
Name: "access_tier",
Help: `Access tier of blob: hot, cool or archive.
Help: `Access tier of blob: hot, cool, cold or archive.
Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
Archived blobs can be restored by setting access tier to hot, cool or
cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
@@ -306,7 +306,7 @@ rclone performs "Set Tier" operation on blobs while uploading, if objects
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".`,
tiering blob to "Hot", "Cool" or "Cold".`,
Advanced: true,
}, {
Name: "archive_tier_delete",
@@ -520,6 +520,7 @@ func (o *Object) split() (container, containerPath string) {
func validateAccessTier(tier string) bool {
return strings.EqualFold(tier, string(blob.AccessTierHot)) ||
strings.EqualFold(tier, string(blob.AccessTierCool)) ||
strings.EqualFold(tier, string(blob.AccessTierCold)) ||
strings.EqualFold(tier, string(blob.AccessTierArchive))
}
@@ -649,8 +650,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.AccessTier == "" {
opt.AccessTier = string(defaultAccessTier)
} else if !validateAccessTier(opt.AccessTier) {
return nil, fmt.Errorf("supported access tiers are %s, %s and %s",
string(blob.AccessTierHot), string(blob.AccessTierCool), string(blob.AccessTierArchive))
return nil, fmt.Errorf("supported access tiers are %s, %s, %s and %s",
string(blob.AccessTierHot), string(blob.AccessTierCool), string(blob.AccessTierCold), string(blob.AccessTierArchive))
}
if !validatePublicAccess((opt.PublicAccess)) {
@@ -1899,7 +1900,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var offset int64
var count int64
if o.AccessTier() == blob.AccessTierArchive {
return nil, fmt.Errorf("blob in archive tier, you need to set tier to hot or cool first")
return nil, fmt.Errorf("blob in archive tier, you need to set tier to hot, cool, cold first")
}
fs.FixRangeOption(options, o.size)
for _, option := range options {


@@ -19,7 +19,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
@@ -35,7 +35,7 @@ func TestIntegration2(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
@@ -62,6 +62,7 @@ func TestValidateAccessTier(t *testing.T) {
"HOT": {"HOT", true},
"Hot": {"Hot", true},
"cool": {"cool", true},
"cold": {"cold", true},
"archive": {"archive", true},
"empty": {"", false},
"unknown": {"unknown", false},

File diff suppressed because it is too large


@@ -0,0 +1,70 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles
import (
"context"
"math/rand"
"strings"
"testing"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Authentication", f.InternalTestAuth)
}
var _ fstests.InternalTester = (*Fs)(nil)
func (f *Fs) InternalTestAuth(t *testing.T) {
t.Skip("skipping since this requires authentication credentials which are not part of repo")
shareName := "test-rclone-oct-2023"
testCases := []struct {
name string
options *Options
}{
{
name: "ConnectionString",
options: &Options{
ShareName: shareName,
ConnectionString: "",
},
},
{
name: "AccountAndKey",
options: &Options{
ShareName: shareName,
Account: "",
Key: "",
}},
{
name: "SASUrl",
options: &Options{
ShareName: shareName,
SASURL: "",
}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fs, err := newFsFromOptions(context.TODO(), "TestAzureFiles", "", tc.options)
assert.NoError(t, err)
dirName := randomString(10)
assert.NoError(t, fs.Mkdir(context.TODO(), dirName))
})
}
}
const chars = "abcdefghijklmnopqrstuvwzyxABCDEFGHIJKLMNOPQRSTUVWZYX"
func randomString(charCount int) string {
strBldr := strings.Builder{}
for i := 0; i < charCount; i++ {
randPos := rand.Int63n(52)
strBldr.WriteByte(chars[randPos])
}
return strBldr.String()
}


@@ -0,0 +1,18 @@
//go:build !plan9 && !js
// +build !plan9,!js
package azurefiles
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
var objPtr *Object
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureFiles:",
NilObject: objPtr,
})
}


@@ -0,0 +1,7 @@
// Build for azurefiles for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
package azurefiles


@@ -9,6 +9,7 @@ import (
"bytes"
"context"
"crypto/sha1"
"encoding/json"
"errors"
"fmt"
gohash "hash"
@@ -399,11 +400,18 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeJSON(resp, &errResponse)
body, err := rest.ReadBody(resp)
if err != nil {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
fs.Errorf(nil, "Couldn't read error out of body: %v", err)
body = nil
}
// Decode error response if there was one - they can be blank
errResponse := new(api.Error)
if len(body) > 0 {
err = json.Unmarshal(body, errResponse)
if err != nil {
fs.Errorf(nil, "Couldn't decode error response: %v", err)
}
}
if errResponse.Code == "" {
errResponse.Code = "unknown"
@@ -447,6 +455,14 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
func (f *Fs) setCopyCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.CopyCutoff = f.opt.CopyCutoff, cs
}
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
@@ -497,10 +513,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
ChunkWriterDoesntSeek: true,
}).Fill(ctx, f)
// Set the test flag if required
if opt.TestMode != "" {
@@ -1321,7 +1338,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
// If newInfo is nil then the metadata will be copied otherwise it
// will be replaced with newInfo
func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *api.File) (err error) {
if srcObj.size >= int64(f.opt.CopyCutoff) {
if srcObj.size > int64(f.opt.CopyCutoff) {
if newInfo == nil {
newInfo, err = srcObj.getMetaData(ctx)
if err != nil {
@@ -1332,7 +1349,11 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
if err != nil {
return err
}
return up.Copy(ctx)
err = up.Copy(ctx)
if err != nil {
return err
}
return dstObj.decodeMetaDataFileInfo(up.info)
}
dstBucket, dstPath := dstObj.split()
@@ -1919,7 +1940,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// NB Stream returns the buffer and token
return up.Stream(ctx, rw)
err = up.Stream(ctx, rw)
if err != nil {
return err
}
return o.decodeMetaDataFileInfo(up.info)
} else if err == io.EOF {
fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n)
defer o.fs.putRW(rw)
@@ -2063,7 +2088,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
// Temporary Object under construction
o := &Object{
fs: f,
remote: src.Remote(),
remote: remote,
}
bucket, _ := o.split()


@@ -5,6 +5,7 @@ import (
"time"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// Test b2 string encoding
@@ -168,3 +169,10 @@ func TestParseTimeString(t *testing.T) {
}
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
// Internal tests go here
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -28,7 +28,12 @@ func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)


@@ -393,10 +393,11 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
hasMoreParts = true
)
up.size = initialUploadBlock.Size()
up.parts = 0
for part := 0; hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var rw *pool.RW
if part == 1 {
if part == 0 {
rw = initialUploadBlock
} else {
rw = up.f.getRW(false)
@@ -411,12 +412,18 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
// Read the chunk
var n int64
if part == 1 {
if part == 0 {
n = rw.Size()
} else {
n, err = io.CopyN(rw, up.in, up.chunkSize)
if err == io.EOF {
fs.Debugf(up.o, "Read less than a full chunk, making this the last one.")
if n == 0 {
fs.Debugf(up.o, "Not sending empty chunk after EOF - ending.")
up.f.putRW(rw)
break
} else {
fs.Debugf(up.o, "Read less than a full chunk %d, making this the last one.", n)
}
hasMoreParts = false
} else if err != nil {
// other kinds of errors indicate failure
@@ -426,7 +433,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
}
// Keep stats up to date
up.parts = part
up.parts += 1
up.size += n
if part > maxParts {
up.f.putRW(rw)
@@ -456,7 +463,7 @@ func (up *largeUpload) Copy(ctx context.Context) (err error) {
remaining = up.size
)
g.SetLimit(up.f.opt.UploadConcurrency)
for part := 0; part <= up.parts; part++ {
for part := 0; part < up.parts; part++ {
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in copying all the other parts.
if gCtx.Err() != nil {


@@ -167,19 +167,7 @@ type PreUploadCheckResponse struct {
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
Conflicts ItemMini `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info


@@ -380,7 +380,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
// defer log.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -389,20 +389,30 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(ctx, directoryID, false, true, true, func(item *api.Item) bool {
if strings.EqualFold(item.Name, leaf) {
info = item
return true
}
return false
// Use preupload to find the ID
itemMini, err := f.preUploadCheck(ctx, leaf, directoryID, -1)
if err != nil {
return nil, err
}
if itemMini == nil {
return nil, fs.ErrorObjectNotFound
}
// Now we have the ID we can look up the object proper
opts := rest.Opts{
Method: "GET",
Path: "/files/" + itemMini.ID,
Parameters: fieldsValue(),
}
var item api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
if !found {
return nil, fs.ErrorObjectNotFound
}
return info, nil
return &item, nil
}
// errorHandler parses a non 2xx error response into an error
@@ -762,7 +772,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (item *api.ItemMini, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
@@ -787,16 +797,16 @@ func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
return nil, fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", fmt.Errorf("pre-upload check: can't overwrite non file with file: %w", err)
return nil, fs.ErrorIsDir
}
return conflict.Conflicts.ID, nil
return &conflict.Conflicts, nil
}
return "", fmt.Errorf("pre-upload check: %w", err)
return nil, fmt.Errorf("pre-upload check: %w", err)
}
return "", nil
return nil, nil
}
// Put the object
@@ -817,11 +827,11 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
if item == nil {
return f.PutUnchecked(ctx, in, src, options...)
}
@@ -829,7 +839,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
o := &Object{
fs: f,
remote: remote,
id: ID,
id: item.ID,
}
return o, o.Update(ctx, in, src, options...)
}


@@ -143,6 +143,41 @@ var (
_linkTemplates map[string]*template.Template // available link types
)
// rwChoices type for fs.Bits
type rwChoices struct{}
func (rwChoices) Choices() []fs.BitsChoicesInfo {
return []fs.BitsChoicesInfo{
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
}
}
// rwChoice type alias
type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwOff rwChoice = 0
)
// Examples for the options
var rwExamples = fs.OptionExamples{{
Value: rwOff.String(),
Help: "Do not read or write the value",
}, {
Value: rwRead.String(),
Help: "Read the value only",
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
}}
// Parse the scopes option returning a slice of scopes
func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
@@ -250,9 +285,13 @@ func init() {
}
return nil, fmt.Errorf("unknown state %q", config.State)
},
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored in the properties field of the drive object.`,
},
Options: append(driveOAuthOptions(), []fs.Option{{
Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.",
Help: "Comma separated list of scopes that rclone should use when requesting access from drive.",
Examples: []fs.OptionExample{{
Value: "drive",
Help: "Full access all files, excluding Application Data Folder.",
@@ -320,6 +359,25 @@ rather than shortcuts themselves when doing server side copies.`,
Default: false,
Help: "Skip google documents in all listings.\n\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "show_all_gdocs",
Default: false,
Help: `Show all Google Docs including non-exportable ones in listings.
If you try a server side copy on a Google Form without this flag, you
will get this error:
No export formats found for "application/vnd.google-apps.form"
However adding this flag will allow the form to be server side copied.
Note that rclone doesn't add extensions to the Google Docs file names
in this mode.
Do **not** use this flag when trying to download Google Docs - rclone
will fail to download them.
`,
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
@@ -620,6 +678,56 @@ having trouble with like many empty directories.
`,
Advanced: true,
Default: true,
}, {
Name: "metadata_owner",
Help: `Control whether owner should be read or written in metadata.
Owner is a standard part of the file metadata so is easy to read. But it
isn't always desirable to set the owner from the metadata.
Note that you can't set the owner on Shared Drives, and that setting
ownership will generate an email to the new owner (this can't be
disabled), and you can't transfer ownership to someone outside your
organization.
`,
Advanced: true,
Default: rwRead,
Examples: rwExamples,
}, {
Name: "metadata_permissions",
Help: `Control whether permissions should be read or written in metadata.
Reading permissions metadata from files can be done quickly, but it
isn't always desirable to set the permissions from the metadata.
Note that rclone drops any inherited permissions on Shared Drives and
any owner permission on My Drives as these are duplicated in the owner
metadata.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: "metadata_labels",
Help: `Control whether labels should be read or written in metadata.
Reading labels metadata from files takes an extra API transaction and
will slow down listings. It isn't always desirable to set the labels
from the metadata.
The format of labels is documented in the drive API documentation at
https://developers.google.com/drive/api/reference/rest/v3/Label -
rclone just provides a JSON dump of this format.
When setting labels, the label and fields must already exist - rclone
will not create them. This means that if you are transferring labels
between two different accounts you will have to create the labels in
advance and use the metadata mapper to translate the IDs between the
two accounts.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -667,6 +775,7 @@ type Options struct {
UseTrash bool `config:"use_trash"`
CopyShortcutContent bool `config:"copy_shortcut_content"`
SkipGdocs bool `config:"skip_gdocs"`
ShowAllGdocs bool `config:"show_all_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
@@ -695,6 +804,9 @@ type Options struct {
SkipDanglingShortcuts bool `config:"skip_dangling_shortcuts"`
ResourceKey string `config:"resource_key"`
FastListBugFix bool `config:"fast_list_bug_fix"`
MetadataOwner rwChoice `config:"metadata_owner"`
MetadataPermissions rwChoice `config:"metadata_permissions"`
MetadataLabels rwChoice `config:"metadata_labels"`
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
}
@@ -716,23 +828,25 @@ type Fs struct {
exportExtensions []string // preferred extensions to download docs
importMimeTypes []string // MIME types to convert to docs
isTeamDrive bool // true if this is a team drive
fileFields googleapi.Field // fields to fetch file info with
m configmap.Mapper
grouping int32 // number of IDs to search at once in ListR - read with atomic
listRmu *sync.Mutex // protects listRempties
listRempties map[string]struct{} // IDs of supposedly empty directories which triggered grouping disable
dirResourceKeys *sync.Map // map directory ID to resource key
grouping int32 // number of IDs to search at once in ListR - read with atomic
listRmu *sync.Mutex // protects listRempties
listRempties map[string]struct{} // IDs of supposedly empty directories which triggered grouping disable
dirResourceKeys *sync.Map // map directory ID to resource key
permissionsMu *sync.Mutex // protect the below
permissions map[string]*drive.Permission // map permission IDs to Permissions
}
type baseObject struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents []string // IDs of the parent directories
resourceKey *string // resourceKey is needed for link shared objects
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents []string // IDs of the parent directories
resourceKey *string // resourceKey is needed for link shared objects
metadata *fs.Metadata // metadata if known
}
type documentObject struct {
baseObject
@@ -981,7 +1095,7 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
list.Header().Add("X-Goog-Drive-Resource-Keys", resourceKeysHeader)
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.fileFields)
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
OUTER:
for {
@@ -1255,9 +1369,10 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
listRmu: new(sync.Mutex),
listRempties: make(map[string]struct{}),
dirResourceKeys: new(sync.Map),
permissionsMu: new(sync.Mutex),
permissions: make(map[string]*drive.Permission),
}
f.isTeamDrive = opt.TeamDriveID != ""
f.fileFields = f.getFileFields()
f.features = (&fs.Features{
DuplicateFiles: true,
ReadMimeType: true,
@@ -1265,6 +1380,9 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
}).Fill(ctx, f)
// Create a new authorized Drive client.
@@ -1369,7 +1487,7 @@ func NewFs(ctx context.Context, name, path string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
func (f *Fs) newBaseObject(ctx context.Context, remote string, info *drive.File) (o baseObject, err error) {
modifiedDate := info.ModifiedTime
if f.opt.UseCreatedDate {
modifiedDate = info.CreatedTime
@@ -1380,7 +1498,7 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
if f.opt.SizeAsQuota {
size = info.QuotaBytesUsed
}
return baseObject{
o = baseObject{
fs: f,
remote: remote,
id: info.Id,
@@ -1389,10 +1507,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
bytes: size,
parents: info.Parents,
}
err = nil
if fs.GetConfig(ctx).Metadata {
err = o.parseMetadata(ctx, info)
}
return o, err
}
// getFileFields gets the fields for a normal file Get or List
func (f *Fs) getFileFields() (fields googleapi.Field) {
func (f *Fs) getFileFields(ctx context.Context) (fields googleapi.Field) {
fields = partialFields
if f.opt.AuthOwnerOnly {
fields += ",owners"
@@ -1406,11 +1529,14 @@ func (f *Fs) getFileFields() (fields googleapi.Field) {
if f.opt.SizeAsQuota {
fields += ",quotaBytesUsed"
}
if fs.GetConfig(ctx).Metadata {
fields += "," + metadataFields
}
return fields
}
// newRegularObject creates an fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
func (f *Fs) newRegularObject(ctx context.Context, remote string, info *drive.File) (obj fs.Object, err error) {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
@@ -1423,27 +1549,33 @@ func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
}
}
o := &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, actualID(info.Id)),
md5sum: strings.ToLower(info.Md5Checksum),
sha1sum: strings.ToLower(info.Sha1Checksum),
sha256sum: strings.ToLower(info.Sha256Checksum),
v2Download: f.opt.V2DownloadMinSize != -1 && info.Size >= int64(f.opt.V2DownloadMinSize),
}
o.baseObject, err = f.newBaseObject(ctx, remote, info)
if err != nil {
return nil, err
}
if info.ResourceKey != "" {
o.resourceKey = &info.ResourceKey
}
return o
return o, nil
}
// newDocumentObject creates an fs.Object for a google docs drive.File
func (f *Fs) newDocumentObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
func (f *Fs) newDocumentObject(ctx context.Context, remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
mediaType, _, err := mime.ParseMediaType(exportMimeType)
if err != nil {
return nil, err
}
url := info.ExportLinks[mediaType]
baseObject := f.newBaseObject(remote+extension, info)
baseObject, err := f.newBaseObject(ctx, remote+extension, info)
if err != nil {
return nil, err
}
baseObject.bytes = -1
baseObject.mimeType = exportMimeType
return &documentObject{
@@ -1455,7 +1587,7 @@ func (f *Fs) newDocumentObject(remote string, info *drive.File, extension, expor
}
// newLinkObject creates an fs.Object that represents a link a google docs drive.File
func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
func (f *Fs) newLinkObject(ctx context.Context, remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
t := linkTemplate(exportMimeType)
if t == nil {
return nil, fmt.Errorf("unsupported link type %s", exportMimeType)
@@ -1474,7 +1606,10 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
return nil, fmt.Errorf("executing template failed: %w", err)
}
baseObject := f.newBaseObject(remote+extension, info)
baseObject, err := f.newBaseObject(ctx, remote+extension, info)
if err != nil {
return nil, err
}
baseObject.bytes = int64(buf.Len())
baseObject.mimeType = exportMimeType
return &linkObject{
@@ -1490,7 +1625,7 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.File) (fs.Object, error) {
// If item has MD5 sum it is a file stored on drive
if info.Md5Checksum != "" {
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
}
extension, exportName, exportMimeType, isDocument := f.findExportFormat(ctx, info)
@@ -1521,13 +1656,15 @@ func (f *Fs) newObjectWithExportInfo(
case info.MimeType == shortcutMimeTypeDangling:
// Pretend a dangling shortcut is a regular object
// It will error if used, but appear in listings so it can be deleted
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
case info.Md5Checksum != "":
// If item has MD5 sum it is a file stored on drive
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
case f.opt.SkipGdocs:
fs.Debugf(remote, "Skipping google document type %q", info.MimeType)
return nil, fs.ErrorObjectNotFound
case f.opt.ShowAllGdocs:
return f.newDocumentObject(ctx, remote, info, "", info.MimeType)
default:
// If item MimeType is in the ExportFormats then it is a google doc
if !isDocument {
@@ -1539,9 +1676,9 @@ func (f *Fs) newObjectWithExportInfo(
return nil, fs.ErrorObjectNotFound
}
if isLinkMimeType(exportMimeType) {
return f.newLinkObject(remote, info, extension, exportMimeType)
return f.newLinkObject(ctx, remote, info, extension, exportMimeType)
}
return f.newDocumentObject(remote, info, extension, exportMimeType)
return f.newDocumentObject(ctx, remote, info, extension, exportMimeType)
}
}
@@ -2169,7 +2306,7 @@ func (f *Fs) resolveShortcut(ctx context.Context, item *drive.File) (newItem *dr
fs.Errorf(nil, "Expecting shortcutDetails in %v", item)
return item, nil
}
newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.fileFields)
newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.getFileFields(ctx))
if err != nil {
var gerr *googleapi.Error
if errors.As(err, &gerr) && gerr.Code == 404 {
@@ -2301,6 +2438,10 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
} else {
createInfo.MimeType = fs.MimeTypeFromName(remote)
}
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, options, createInfo, false)
if err != nil {
return nil, err
}
var info *drive.File
if size >= 0 && size < int64(f.opt.UploadCutoff) {
@@ -2325,6 +2466,10 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err
}
}
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
@@ -3242,7 +3387,7 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
info, err := f.getFile(ctx, id, f.fileFields)
info, err := f.getFile(ctx, id, f.getFileFields(ctx))
if err != nil {
return fmt.Errorf("couldn't find id: %w", err)
}
@@ -3922,10 +4067,20 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
MimeType: srcMimeType,
ModifiedTime: src.ModTime(ctx).Format(timeFormatOut),
}
updateMetadata, err := o.fs.fetchAndUpdateMetadata(ctx, src, options, updateInfo, true)
if err != nil {
return err
}
info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src)
if err != nil {
return err
}
err = updateMetadata(ctx, info)
if err != nil {
return err
}
newO, err := o.fs.newObjectWithInfo(ctx, o.remote, info)
if err != nil {
return err
@@ -4011,6 +4166,26 @@ func (o *baseObject) ParentID() string {
return ""
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
if o.metadata != nil {
return *o.metadata, nil
}
fs.Debugf(o, "Fetching metadata")
id := actualID(o.id)
info, err := o.fs.getFile(ctx, id, o.fs.getFileFields(ctx))
if err != nil {
return nil, err
}
err = o.parseMetadata(ctx, info)
if err != nil {
return nil, err
}
return *o.metadata, nil
}
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
@@ -4072,6 +4247,7 @@ var (
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)
_ fs.Object = (*documentObject)(nil)
_ fs.MimeTyper = (*documentObject)(nil)
_ fs.IDer = (*documentObject)(nil)

backend/drive/metadata.go (new file, 608 lines)

@@ -0,0 +1,608 @@
package drive
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
"github.com/rclone/rclone/fs"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"content-type": {
Help: "The MIME type of the file.",
Type: "string",
Example: "text/plain",
},
"mtime": {
Help: "Time of last modification with mS accuracy.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"btime": {
Help: "Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"copy-requires-writer-permission": {
Help: "Whether the options to copy, print, or download this file, should be disabled for readers and commenters.",
Type: "boolean",
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives.",
Type: "boolean",
Example: "false",
},
"viewed-by-me": {
Help: "Whether the file has been viewed by this user.",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"owner": {
Help: "The owner of the file. Usually an email address. Enable with --drive-metadata-owner.",
Type: "string",
Example: "user@example.com",
},
"permissions": {
Help: "Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions.",
Type: "JSON",
Example: "{}",
},
"folder-color-rgb": {
Help: "The color for a folder or a shortcut to a folder as an RGB hex string.",
Type: "string",
Example: "881133",
},
"description": {
Help: "A short description of the file.",
Type: "string",
Example: "Contract for signing",
},
"starred": {
Help: "Whether the user has starred the file.",
Type: "boolean",
Example: "false",
},
"labels": {
Help: "Labels attached to this file in a JSON dump of Googled drive format. Enable with --drive-metadata-labels.",
Type: "JSON",
Example: "[]",
},
}
// Extra fields we need to fetch to implement the system metadata above
var metadataFields = googleapi.Field(strings.Join([]string{
"copyRequiresWriterPermission",
"description",
"folderColorRgb",
"hasAugmentedPermissions",
"owners",
"permissionIds",
"permissions",
"properties",
"starred",
"viewedByMe",
"viewedByMeTime",
"writersCanShare",
}, ","))
// Fields we need to read from permissions
var permissionsFields = googleapi.Field(strings.Join([]string{
"*",
"permissionDetails/*",
}, ","))
// getPermission returns permissions for the fileID and permissionID passed in
func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, useCache bool) (perm *drive.Permission, inherited bool, err error) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
if useCache {
perm = f.permissions[permissionID]
if perm != nil {
return perm, false, nil
}
}
fs.Debugf(f, "Fetching permission %q", permissionID)
err = f.pacer.Call(func() (bool, error) {
perm, err = f.svc.Permissions.Get(fileID, permissionID).
Fields(permissionsFields).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, false, err
}
inherited = len(perm.PermissionDetails) > 0 && perm.PermissionDetails[0].Inherited
cleanPermission(perm)
// cache the permission
f.permissions[permissionID] = perm
return perm, inherited, err
}
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set permission: %w", err)
}
}
return nil
}
// Clean attributes from permissions which we can't write
func cleanPermissionForWrite(perm *drive.Permission) {
perm.Deleted = false
perm.DisplayName = ""
perm.Id = ""
perm.Kind = ""
perm.PermissionDetails = nil
perm.TeamDrivePermissionDetails = nil
}
// Clean and cache the permission if not already cached
func (f *Fs) cleanAndCachePermission(perm *drive.Permission) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
cleanPermission(perm)
if _, found := f.permissions[perm.Id]; !found {
f.permissions[perm.Id] = perm
}
}
// Clean fields we don't need to keep from the permission
func cleanPermission(perm *drive.Permission) {
// DisplayName: Output only. The "pretty" name of the value of the
// permission. The following is a list of examples for each type of
// permission: * `user` - User's full name, as defined for their Google
// account, such as "Joe Smith." * `group` - Name of the Google Group,
// such as "The Company Administrators." * `domain` - String domain
// name, such as "thecompany.com." * `anyone` - No `displayName` is
// present.
perm.DisplayName = ""
// Kind: Output only. Identifies what kind of resource this is. Value:
// the fixed string "drive#permission".
perm.Kind = ""
// PermissionDetails: Output only. Details of whether the permissions on
// this shared drive item are inherited or directly on this item. This
// is an output-only field which is present only for shared drive items.
perm.PermissionDetails = nil
// PhotoLink: Output only. A link to the user's profile photo, if
// available.
perm.PhotoLink = ""
// TeamDrivePermissionDetails: Output only. Deprecated: Output only. Use
// `permissionDetails` instead.
perm.TeamDrivePermissionDetails = nil
}
// Fields we need to read from labels
var labelsFields = googleapi.Field(strings.Join([]string{
"*",
}, ","))
// getLabels returns labels for the fileID passed in
func (f *Fs) getLabels(ctx context.Context, fileID string) (labels []*drive.Label, err error) {
fs.Debugf(f, "Fetching labels for %q", fileID)
listLabels := f.svc.Files.ListLabels(fileID).
Fields(labelsFields).
Context(ctx)
for {
var info *drive.LabelList
err = f.pacer.Call(func() (bool, error) {
info, err = listLabels.Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
labels = append(labels, info.Labels...)
if info.NextPageToken == "" {
break
}
listLabels.PageToken(info.NextPageToken)
}
for _, label := range labels {
cleanLabel(label)
}
return labels, nil
}
// Set the labels on the info
func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.Label) (err error) {
if len(labels) == 0 {
return nil
}
req := drive.ModifyLabelsRequest{}
for _, label := range labels {
req.LabelModifications = append(req.LabelModifications, &drive.LabelModification{
FieldModifications: labelFieldsToFieldModifications(label.Fields),
LabelId: label.Id,
})
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.ModifyLabels(info.Id, &req).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
}
return nil
}
// Convert label fields into something which can set the fields
func labelFieldsToFieldModifications(fields map[string]drive.LabelField) (out []*drive.LabelFieldModification) {
for id, field := range fields {
var emails []string
for _, user := range field.User {
emails = append(emails, user.EmailAddress)
}
out = append(out, &drive.LabelFieldModification{
// FieldId: The ID of the field to be modified.
FieldId: id,
// SetDateValues: Replaces the value of a dateString Field with these
// new values. The string must be in the RFC 3339 full-date format:
// YYYY-MM-DD.
SetDateValues: field.DateString,
// SetIntegerValues: Replaces the value of an `integer` field with these
// new values.
SetIntegerValues: field.Integer,
// SetSelectionValues: Replaces a `selection` field with these new
// values.
SetSelectionValues: field.Selection,
// SetTextValues: Sets the value of a `text` field.
SetTextValues: field.Text,
// SetUserValues: Replaces a `user` field with these new values. The
// values must be valid email addresses.
SetUserValues: emails,
})
}
return out
}
// Clean fields we don't need to keep from the label
func cleanLabel(label *drive.Label) {
// Kind: This is always drive#label
label.Kind = ""
for name, field := range label.Fields {
// Kind: This is always drive#labelField.
field.Kind = ""
// Note the fields are copies so we need to write them
// back to the map
label.Fields[name] = field
}
}
// Parse the metadata from drive item
//
// It should return nil if there is no Metadata
func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err error) {
metadata := make(fs.Metadata, 16)
// Dump user metadata first as it overrides system metadata
for k, v := range info.Properties {
metadata[k] = v
}
// System metadata
metadata["copy-requires-writer-permission"] = fmt.Sprint(info.CopyRequiresWriterPermission)
metadata["writers-can-share"] = fmt.Sprint(info.WritersCanShare)
metadata["viewed-by-me"] = fmt.Sprint(info.ViewedByMe)
metadata["content-type"] = info.MimeType
// Owners: Output only. The owner of this file. Only certain legacy
// files may have more than one owner. This field isn't populated for
// items in shared drives.
if o.fs.opt.MetadataOwner.IsSet(rwRead) && len(info.Owners) > 0 {
user := info.Owners[0]
if len(info.Owners) > 1 {
fs.Logf(o, "Ignoring more than 1 owner")
}
if user != nil {
id := user.EmailAddress
if id == "" {
id = user.DisplayName
}
metadata["owner"] = id
}
}
if o.fs.opt.MetadataPermissions.IsSet(rwRead) {
// We only write permissions out if they are not inherited.
//
// On My Drives permissions seem to be attached to every item
// so they will always be written out.
//
// On Shared Drives only non-inherited permissions will be
// written out.
// To read the inherited permissions flag will mean we need to
// read the permissions for each object and the cache will be
// useless. However shared drives don't return permissions
// only permissionIds so will need to fetch them for each
// object. We use HasAugmentedPermissions to see if there are
// special permissions before fetching them to save transactions.
// HasAugmentedPermissions: Output only. Whether there are permissions
// directly on this file. This field is only populated for items in
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
info.Permissions = nil
info.PermissionIds = nil
}
// PermissionIds: Output only. List of permission IDs for users with
// access to this file.
//
// Only process these if we have no Permissions
if len(info.PermissionIds) > 0 && len(info.Permissions) == 0 {
info.Permissions = make([]*drive.Permission, 0, len(info.PermissionIds))
g, gCtx := errgroup.WithContext(ctx)
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
permissionID := permissionID
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
if err != nil {
return fmt.Errorf("failed to read permission: %w", err)
}
// Don't write inherited permissions out
if inherited {
return nil
}
// Don't write owner role out - these are covered by the owner metadata
if perm.Role == "owner" {
return nil
}
mu.Lock()
info.Permissions = append(info.Permissions, perm)
mu.Unlock()
return nil
})
}
err = g.Wait()
if err != nil {
return err
}
} else {
// Clean the fetched permissions
for _, perm := range info.Permissions {
o.fs.cleanAndCachePermission(perm)
}
}
// Permissions: Output only. The full list of permissions for the file.
// This is only available if the requesting user can share the file. Not
// populated for items in shared drives.
if len(info.Permissions) > 0 {
buf, err := json.Marshal(info.Permissions)
if err != nil {
return fmt.Errorf("failed to marshal permissions: %w", err)
}
metadata["permissions"] = string(buf)
}
// Permission propagation
// https://developers.google.com/drive/api/guides/manage-sharing#permission-propagation
// Leads me to believe that in non shared drives, permissions
// are added to each item when you set permissions for a
// folder whereas in shared drives they are inherited and
// placed on the item directly.
}
if info.FolderColorRgb != "" {
metadata["folder-color-rgb"] = info.FolderColorRgb
}
if info.Description != "" {
metadata["description"] = info.Description
}
metadata["starred"] = fmt.Sprint(info.Starred)
metadata["btime"] = info.CreatedTime
metadata["mtime"] = info.ModifiedTime
if o.fs.opt.MetadataLabels.IsSet(rwRead) {
// FIXME would be really nice if we knew if files had labels
// before listing but we need to know all possible label IDs
// to get it in the listing.
labels, err := o.fs.getLabels(ctx, actualID(info.Id))
if err != nil {
return fmt.Errorf("failed to fetch labels: %w", err)
}
buf, err := json.Marshal(labels)
if err != nil {
return fmt.Errorf("failed to marshal labels: %w", err)
}
metadata["labels"] = string(buf)
}
o.metadata = &metadata
return nil
}
// Set the owner on the info
func (f *Fs) setOwner(ctx context.Context, info *drive.File, owner string) (err error) {
perm := drive.Permission{
Role: "owner",
EmailAddress: owner,
// Type: The type of the grantee. Valid values are: * `user` * `group` *
// `domain` * `anyone` When creating a permission, if `type` is `user`
// or `group`, you must provide an `emailAddress` for the user or group.
// When `type` is `domain`, you must provide a `domain`. There isn't
// extra information required for an `anyone` type.
Type: "user",
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, &perm).
SupportsAllDrives(true).
TransferOwnership(true).
// SendNotificationEmail(false). - required apparently!
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
}
return nil
}
// Call back to set metadata that can't be set on the upload/update
//
// The *drive.File passed in holds the current state of the drive.File
// and this should update it with any modifications.
type updateMetadataFn func(context.Context, *drive.File) error
// read the metadata from meta and write it into updateInfo
//
// update should be true if this is being used to create metadata for
// an update/PATCH call as the rules on what can be updated are
// slightly different there.
//
// It returns a callback which should be called to finish the updates
// after the data is uploaded.
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update bool) (callback updateMetadataFn, err error) {
callbackFns := []updateMetadataFn{}
callback = func(ctx context.Context, info *drive.File) error {
for _, fn := range callbackFns {
err := fn(ctx, info)
if err != nil {
return err
}
}
return nil
}
// merge metadata into request and user metadata
for k, v := range meta {
k, v := k, v
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)
if err != nil {
return fmt.Errorf("can't parse metadata %q = %q: %w", k, v, err)
}
*out = b
return nil
}
switch k {
case "copy-requires-writer-permission":
if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
return nil, err
}
case "writers-can-share":
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
case "viewed-by-me":
// Can't write this
case "content-type":
updateInfo.MimeType = v
case "owner":
if !f.opt.MetadataOwner.IsSet(rwWrite) {
continue
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setOwner(ctx, info, v)
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
continue
}
var perms []*drive.Permission
err := json.Unmarshal([]byte(v), &perms)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal permissions: %w", err)
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setPermissions(ctx, info, perms)
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
continue
}
var labels []*drive.Label
err := json.Unmarshal([]byte(v), &labels)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal labels: %w", err)
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
return f.setLabels(ctx, info, labels)
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v
case "description":
updateInfo.Description = v
case "starred":
if err := parseBool(&updateInfo.Starred); err != nil {
return nil, err
}
case "btime":
if update {
fs.Debugf(f, "Skipping btime metadata as can't update it on an existing file: %v", v)
} else {
updateInfo.CreatedTime = v
}
case "mtime":
updateInfo.ModifiedTime = v
default:
if updateInfo.Properties == nil {
updateInfo.Properties = make(map[string]string, 1)
}
updateInfo.Properties[k] = v
}
}
return callback, nil
}
// Fetch metadata and update updateInfo if --metadata is in use
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *drive.File, update bool) (callback updateMetadataFn, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
callback, err = f.updateMetadata(ctx, updateInfo, meta, update)
if err != nil {
return nil, fmt.Errorf("failed to update metadata from source object: %w", err)
}
return callback, nil
}
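As a worked illustration of the two-phase flow described above: the caller obtains the callback before uploading and invokes it with the freshly uploaded file's state. A minimal sketch, assuming f, ctx and a populated fs.Metadata value meta are in scope; uploadedInfo stands in for whatever *drive.File the upload returns:

updateInfo := &drive.File{}
callback, err := f.updateMetadata(ctx, updateInfo, meta, false)
if err != nil {
    return err
}
// ... perform the upload using updateInfo, yielding uploadedInfo ...
// Second phase: owner, permissions and labels can't be set at upload
// time, so the collected callback functions apply them now.
if err := callback(ctx, uploadedInfo); err != nil {
    return err
}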

View File

@@ -946,6 +946,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
if root == "/" {
return errors.New("can't remove root directory")
}
encRoot := f.opt.Enc.FromStandardPath(root)
if check {
// check directory exists
@@ -954,10 +955,9 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
return fmt.Errorf("Rmdir: %w", err)
}
root = f.opt.Enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
Path: encRoot,
Recursive: false,
}
if root == "/" {
@@ -978,7 +978,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
// remove it
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: root})
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: encRoot})
return shouldRetry(ctx, err)
})
return err

View File

@@ -1310,10 +1310,11 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.url
if o.fs.opt.UserProject != "" {
o.url = o.url + "&userProject=" + o.fs.opt.UserProject
url += "&userProject=" + o.fs.opt.UserProject
}
req, err := http.NewRequestWithContext(ctx, "GET", o.url, nil)
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, err
}
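The point of this change: o.url was previously mutated on every call, so opening the same object twice appended a second &userProject=... to the stored URL. Building the query string in a local variable leaves the object's state untouched.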

View File

@@ -93,7 +93,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
options := hdfs.ClientOptions{
Addresses: []string{opt.Namenode},
Addresses: opt.Namenode,
UseDatanodeHostname: false,
}

View File

@@ -20,9 +20,10 @@ func init() {
NewFs: NewFs,
Options: []fs.Option{{
Name: "namenode",
Help: "Hadoop name node and port.\n\nE.g. \"namenode:8020\" to connect to host namenode at port 8020.",
Help: "Hadoop name nodes and ports.\n\nE.g. \"namenode-1:8020,namenode-2:8020,...\" to connect to host namenodes at port 8020.",
Required: true,
Sensitive: true,
Default: fs.CommaSepList{},
}, {
Name: "username",
Help: "Hadoop user name.",
@@ -65,7 +66,7 @@ and 'privacy'. Used only with KERBEROS enabled.`,
// Options for this backend
type Options struct {
Namenode string `config:"namenode"`
Namenode fs.CommaSepList `config:"namenode"`
Username string `config:"username"`
ServicePrincipalName string `config:"service_principal_name"`
DataTransferProtection string `config:"data_transfer_protection"`

View File

@@ -36,6 +36,7 @@ func init() {
Name: "http",
Description: "HTTP",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "url",
Help: "URL of HTTP host to connect to.\n\nE.g. \"https://example.com\", or \"https://user:pass@example.com\" to use a username and password.",
@@ -210,6 +211,42 @@ func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Op
return createFileResult()
}
// Make the http connection with opt
func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err error) {
if len(opt.Headers)%2 != 0 {
return false, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(opt.Endpoint)
if err != nil {
return false, err
}
u, err := rest.URLJoin(base, rest.URLPathEscape(f.root))
if err != nil {
return false, err
}
client := fshttp.NewClient(ctx)
endpoint, isFile := getFsEndpoint(ctx, client, u.String(), opt)
fs.Debugf(nil, "Root: %s", endpoint)
u, err = url.Parse(endpoint)
if err != nil {
return false, err
}
// Update f with the new parameters
f.httpClient = client
f.endpoint = u
f.endpointURL = u.String()
return isFile, nil
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
@@ -220,47 +257,23 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err
}
if len(opt.Headers)%2 != 0 {
return nil, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(opt.Endpoint)
if err != nil {
return nil, err
}
u, err := rest.URLJoin(base, rest.URLPathEscape(root))
if err != nil {
return nil, err
}
client := fshttp.NewClient(ctx)
endpoint, isFile := getFsEndpoint(ctx, client, u.String(), opt)
fs.Debugf(nil, "Root: %s", endpoint)
u, err = url.Parse(endpoint)
if err != nil {
return nil, err
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: ci,
httpClient: client,
endpoint: u,
endpointURL: u.String(),
name: name,
root: root,
opt: *opt,
ci: ci,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
// Make the http connection
isFile, err := f.httpConnection(ctx, opt)
if err != nil {
return nil, err
}
if isFile {
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
@@ -685,10 +698,66 @@ func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}
var commandHelp = []fs.CommandHelp{{
Name: "set",
Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters
for a running http backend.
Usage Examples:
rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: -o url=https://example.com
The option keys are named as they are in the config file.
This rebuilds the connection to the http backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
`,
}}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
switch name {
case "set":
newOpt := f.opt
err := configstruct.Set(configmap.Simple(opt), &newOpt)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
_, err = f.httpConnection(ctx, &newOpt)
if err != nil {
return nil, fmt.Errorf("updating session: %w", err)
}
f.opt = newOpt
keys := []string{}
for k := range opt {
keys = append(keys, k)
}
fs.Logf(f, "Updated config values: %s", strings.Join(keys, ", "))
return nil, nil
default:
return nil, fs.ErrorCommandNotFound
}
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.Commander = &Fs{}
)

View File

@@ -0,0 +1,66 @@
// Package client provides a client for interacting with the ImageKit API.
package client
import (
"context"
"fmt"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
// ImageKit main struct
type ImageKit struct {
Prefix string
UploadPrefix string
Timeout int64
UploadTimeout int64
PrivateKey string
PublicKey string
URLEndpoint string
HTTPClient *rest.Client
}
// NewParams is a struct to define parameters to imagekit
type NewParams struct {
PrivateKey string
PublicKey string
URLEndpoint string
}
// New returns an ImageKit client constructed from the supplied parameters
func New(ctx context.Context, params NewParams) (*ImageKit, error) {
privateKey := params.PrivateKey
publicKey := params.PublicKey
endpointURL := params.URLEndpoint
switch {
case privateKey == "":
return nil, fmt.Errorf("ImageKit.io private key is required")
case publicKey == "":
return nil, fmt.Errorf("ImageKit.io public key is required")
case endpointURL == "":
return nil, fmt.Errorf("ImageKit.io URL endpoint is required")
}
cliCtx, cliCfg := fs.AddConfig(ctx)
cliCfg.UserAgent = "rclone/imagekit"
client := rest.NewClient(fshttp.NewClient(cliCtx))
client.SetUserPass(privateKey, "")
client.SetHeader("Accept", "application/json")
return &ImageKit{
Prefix: "https://api.imagekit.io/v2",
UploadPrefix: "https://upload.imagekit.io/api/v2",
Timeout: 60,
UploadTimeout: 3600,
PrivateKey: params.PrivateKey,
PublicKey: params.PublicKey,
URLEndpoint: params.URLEndpoint,
HTTPClient: client,
}, nil
}
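A minimal sketch of constructing the client, with placeholder credentials and endpoint (error handling abbreviated):

ik, err := client.New(ctx, client.NewParams{
    PrivateKey:  "private_xxx",                             // placeholder
    PublicKey:   "public_xxx",                              // placeholder
    URLEndpoint: "https://ik.imagekit.io/your_imagekit_id", // placeholder
})
if err != nil {
    return err
}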

View File

@@ -0,0 +1,252 @@
package client
import (
"context"
"errors"
"fmt"
"net/http"
"net/url"
"time"
"github.com/rclone/rclone/lib/rest"
"gopkg.in/validator.v2"
)
// FilesOrFolderParam struct is a parameter type to ListFiles() function to search / list media library files.
type FilesOrFolderParam struct {
Path string `json:"path,omitempty"`
Limit int `json:"limit,omitempty"`
Skip int `json:"skip,omitempty"`
SearchQuery string `json:"searchQuery,omitempty"`
}
// AITag represents an AI tag for a media library file.
type AITag struct {
Name string `json:"name"`
Confidence float32 `json:"confidence"`
Source string `json:"source"`
}
// File represents media library File details.
type File struct {
FileID string `json:"fileId"`
Name string `json:"name"`
FilePath string `json:"filePath"`
Type string `json:"type"`
VersionInfo map[string]string `json:"versionInfo"`
IsPrivateFile *bool `json:"isPrivateFile"`
CustomCoordinates *string `json:"customCoordinates"`
URL string `json:"url"`
Thumbnail string `json:"thumbnail"`
FileType string `json:"fileType"`
Mime string `json:"mime"`
Height int `json:"height"`
Width int `json:"width"`
Size uint64 `json:"size"`
HasAlpha bool `json:"hasAlpha"`
CustomMetadata map[string]any `json:"customMetadata,omitempty"`
EmbeddedMetadata map[string]any `json:"embeddedMetadata"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
Tags []string `json:"tags"`
AITags []AITag `json:"AITags"`
}
// Folder represents media library Folder details.
type Folder struct {
*File
FolderPath string `json:"folderPath"`
}
// CreateFolderParam represents parameter to create folder api
type CreateFolderParam struct {
FolderName string `validate:"nonzero" json:"folderName"`
ParentFolderPath string `validate:"nonzero" json:"parentFolderPath"`
}
// DeleteFolderParam represents parameter to delete folder api
type DeleteFolderParam struct {
FolderPath string `validate:"nonzero" json:"folderPath"`
}
// MoveFolderParam represents parameter to move folder api
type MoveFolderParam struct {
SourceFolderPath string `validate:"nonzero" json:"sourceFolderPath"`
DestinationPath string `validate:"nonzero" json:"destinationPath"`
}
// JobIDResponse represents response struct with JobID for folder operations
type JobIDResponse struct {
JobID string `json:"jobId"`
}
// JobStatus represents response Data to job status api
type JobStatus struct {
JobID string `json:"jobId"`
Type string `json:"type"`
Status string `json:"status"`
}
// File represents media library File details.
func (ik *ImageKit) File(ctx context.Context, fileID string) (*http.Response, *File, error) {
data := &File{}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: fmt.Sprintf("/files/%s/details", fileID),
RootURL: ik.Prefix,
IgnoreStatus: true,
}, nil, data)
return response, data, err
}
// Files retrieves media library files. Filter options can be supplied as FilesOrFolderParam.
func (ik *ImageKit) Files(ctx context.Context, params FilesOrFolderParam, includeVersion bool) (*http.Response, *[]File, error) {
var SearchQuery = `type = "file"`
if includeVersion {
SearchQuery = `type IN ["file", "file-version"]`
}
if params.SearchQuery != "" {
SearchQuery = params.SearchQuery
}
parameters := url.Values{}
parameters.Set("skip", fmt.Sprintf("%d", params.Skip))
parameters.Set("limit", fmt.Sprintf("%d", params.Limit))
parameters.Set("path", params.Path)
parameters.Set("searchQuery", SearchQuery)
data := &[]File{}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "/files",
RootURL: ik.Prefix,
Parameters: parameters,
}, nil, data)
return response, data, err
}
// DeleteFile removes file by FileID from media library
func (ik *ImageKit) DeleteFile(ctx context.Context, fileID string) (*http.Response, error) {
var err error
if fileID == "" {
return nil, errors.New("fileID can not be empty")
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "DELETE",
Path: fmt.Sprintf("/files/%s", fileID),
RootURL: ik.Prefix,
NoResponse: true,
}, nil, nil)
return response, err
}
// Folders retrieves media library folders. Filter options can be supplied as FilesOrFolderParam.
func (ik *ImageKit) Folders(ctx context.Context, params FilesOrFolderParam) (*http.Response, *[]Folder, error) {
var SearchQuery = `type = "folder"`
if params.SearchQuery != "" {
SearchQuery = params.SearchQuery
}
parameters := url.Values{}
parameters.Set("skip", fmt.Sprintf("%d", params.Skip))
parameters.Set("limit", fmt.Sprintf("%d", params.Limit))
parameters.Set("path", params.Path)
parameters.Set("searchQuery", SearchQuery)
data := &[]Folder{}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "/files",
RootURL: ik.Prefix,
Parameters: parameters,
}, nil, data)
return resp, data, err
}
// CreateFolder creates a new folder in media library
func (ik *ImageKit) CreateFolder(ctx context.Context, param CreateFolderParam) (*http.Response, error) {
var err error
if err = validator.Validate(&param); err != nil {
return nil, err
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "POST",
Path: "/folder",
RootURL: ik.Prefix,
NoResponse: true,
}, param, nil)
return response, err
}
// DeleteFolder removes the folder from media library
func (ik *ImageKit) DeleteFolder(ctx context.Context, param DeleteFolderParam) (*http.Response, error) {
var err error
if err = validator.Validate(&param); err != nil {
return nil, err
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "DELETE",
Path: "/folder",
RootURL: ik.Prefix,
NoResponse: true,
}, param, nil)
return response, err
}
// MoveFolder moves given folder to new path in media library
func (ik *ImageKit) MoveFolder(ctx context.Context, param MoveFolderParam) (*http.Response, *JobIDResponse, error) {
var err error
var response = &JobIDResponse{}
if err = validator.Validate(&param); err != nil {
return nil, nil, err
}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "PUT",
Path: "bulkJobs/moveFolder",
RootURL: ik.Prefix,
}, param, response)
return resp, response, err
}
// BulkJobStatus retrieves the status of a bulk job by job ID.
func (ik *ImageKit) BulkJobStatus(ctx context.Context, jobID string) (*http.Response, *JobStatus, error) {
var err error
var response = &JobStatus{}
if jobID == "" {
return nil, nil, errors.New("jobId can not be blank")
}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "bulkJobs/" + jobID,
RootURL: ik.Prefix,
}, nil, response)
return resp, response, err
}

View File

@@ -0,0 +1,96 @@
package client
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"github.com/rclone/rclone/lib/rest"
)
// UploadParam defines upload parameters
type UploadParam struct {
FileName string `json:"fileName"`
Folder string `json:"folder,omitempty"` // default value: /
Tags string `json:"tags,omitempty"`
IsPrivateFile *bool `json:"isPrivateFile,omitempty"` // default: false
}
// UploadResult defines the response structure for the upload API
type UploadResult struct {
FileID string `json:"fileId"`
Name string `json:"name"`
URL string `json:"url"`
ThumbnailURL string `json:"thumbnailUrl"`
Height int `json:"height"`
Width int `json:"width"`
Size uint64 `json:"size"`
FilePath string `json:"filePath"`
AITags []map[string]any `json:"AITags"`
VersionInfo map[string]string `json:"versionInfo"`
}
// Upload uploads an asset to an ImageKit account.
//
// The asset can be:
// - the actual data (io.Reader)
// - the Data URI (Base64 encoded), max ~60 MB (62,910,000 chars)
// - the remote FTP, HTTP or HTTPS URL address of an existing file
//
// https://docs.imagekit.io/api-reference/upload-file-api/server-side-file-upload
func (ik *ImageKit) Upload(ctx context.Context, file io.Reader, param UploadParam) (*http.Response, *UploadResult, error) {
var err error
if param.FileName == "" {
return nil, nil, errors.New("Upload: Filename is required")
}
// Initialize URL values
formParams := url.Values{}
formParams.Add("useUniqueFileName", fmt.Sprint(false))
// Add individual fields to URL values
if param.FileName != "" {
formParams.Add("fileName", param.FileName)
}
if param.Tags != "" {
formParams.Add("tags", param.Tags)
}
if param.Folder != "" {
formParams.Add("folder", param.Folder)
}
if param.IsPrivateFile != nil {
formParams.Add("isPrivateFile", fmt.Sprintf("%v", *param.IsPrivateFile))
}
response := &UploadResult{}
formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName)
if err != nil {
return nil, nil, fmt.Errorf("failed to make multipart upload: %w", err)
}
opts := rest.Opts{
Method: "POST",
Path: "/files/upload",
RootURL: ik.UploadPrefix,
Body: formReader,
ContentType: contentType,
}
resp, err := ik.HTTPClient.CallJSON(ctx, &opts, nil, response)
return resp, response, err
}
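For illustration, uploading a local file through this wrapper might look like the sketch below; the path, file name and folder are placeholders, and imports (os, fmt) are elided:

fd, err := os.Open("/tmp/cat.png")
if err != nil {
    return err
}
defer func() { _ = fd.Close() }()
resp, result, err := ik.Upload(ctx, fd, client.UploadParam{
    FileName: "cat.png",
    Folder:   "/pets/",
})
if err != nil {
    return err
}
fmt.Println(resp.StatusCode, result.FileID, result.URL)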

View File

@@ -0,0 +1,72 @@
package client
import (
"crypto/hmac"
"crypto/sha1"
"encoding/hex"
"fmt"
neturl "net/url"
"strconv"
"strings"
"time"
)
// URLParam represents parameters for generating url
type URLParam struct {
Path string
Src string
URLEndpoint string
Signed bool
ExpireSeconds int64
QueryParameters map[string]string
}
// URL generates url from URLParam
func (ik *ImageKit) URL(params URLParam) (string, error) {
var resultURL string
var url *neturl.URL
var err error
var endpoint = params.URLEndpoint
if endpoint == "" {
endpoint = ik.URLEndpoint
}
endpoint = strings.TrimRight(endpoint, "/") + "/"
if params.QueryParameters == nil {
params.QueryParameters = make(map[string]string)
}
if url, err = neturl.Parse(params.Src); err != nil {
return "", err
}
query := url.Query()
for k, v := range params.QueryParameters {
query.Set(k, v)
}
url.RawQuery = query.Encode()
resultURL = url.String()
if params.Signed {
now := time.Now().Unix()
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path = path + expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))
if strings.Contains(resultURL, "?") {
resultURL = resultURL + "&" + fmt.Sprintf("ik-t=%s&ik-s=%s", expires, signature)
} else {
resultURL = resultURL + "?" + fmt.Sprintf("ik-t=%s&ik-s=%s", expires, signature)
}
}
return resultURL, nil
}
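A sketch of generating a signed link that is valid for ten minutes; the Src URL is a placeholder:

link, err := ik.URL(client.URLParam{
    Src:           "https://ik.imagekit.io/your_imagekit_id/pets/cat.png",
    Signed:        true,
    ExpireSeconds: 600,
})
if err != nil {
    return err
}
// link now carries ik-t (expiry) and ik-s (hex HMAC-SHA1 signature) query parameters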

View File

@@ -0,0 +1,828 @@
// Package imagekit provides an interface to the ImageKit.io media library.
package imagekit
import (
"context"
"errors"
"fmt"
"io"
"math"
"net/http"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/backend/imagekit/client"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/version"
)
const (
minSleep = 1 * time.Millisecond
maxSleep = 100 * time.Millisecond
decayConstant = 2
)
var systemMetadataInfo = map[string]fs.MetadataHelp{
"btime": {
Help: "Time of file birth (creation) read from Last-Modified header",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999999999Z07:00",
ReadOnly: true,
},
"size": {
Help: "Size of the object in bytes",
Type: "int64",
ReadOnly: true,
},
"file-type": {
Help: "Type of the file",
Type: "string",
Example: "image",
ReadOnly: true,
},
"height": {
Help: "Height of the image or video in pixels",
Type: "int",
ReadOnly: true,
},
"width": {
Help: "Width of the image or video in pixels",
Type: "int",
ReadOnly: true,
},
"has-alpha": {
Help: "Whether the image has alpha channel or not",
Type: "bool",
ReadOnly: true,
},
"tags": {
Help: "Tags associated with the file",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"google-tags": {
Help: "AI generated tags by Google Cloud Vision associated with the image",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"aws-tags": {
Help: "AI generated tags by AWS Rekognition associated with the image",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"is-private-file": {
Help: "Whether the file is private or not",
Type: "bool",
ReadOnly: true,
},
"custom-coordinates": {
Help: "Custom coordinates of the file",
Type: "string",
Example: "0,0,100,100",
ReadOnly: true,
},
}
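These system metadata items surface through rclone's normal metadata machinery, so they can be inspected with, for example (remote name and path are placeholders):

rclone lsjson --metadata remote:path/to/image.jpg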
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "imagekit",
Description: "ImageKit.io",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{
{
Name: "endpoint",
Help: "You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
},
{
Name: "public_key",
Help: "You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
Sensitive: true,
},
{
Name: "private_key",
Help: "You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
Sensitive: true,
},
{
Name: "only_signed",
Help: "If you have configured `Restrict unsigned image URLs` in your dashboard settings, set this to true.",
Default: false,
Advanced: true,
},
{
Name: "versions",
Help: "Include old versions in directory listings.",
Default: false,
Advanced: true,
},
{
Name: "upload_tags",
Help: "Tags to add to the uploaded files, e.g. \"tag1,tag2\".",
Default: "",
Advanced: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.EncodeZero |
encoder.EncodeSlash |
encoder.EncodeQuestion |
encoder.EncodeHashPercent |
encoder.EncodeCtl |
encoder.EncodeDel |
encoder.EncodeDot |
encoder.EncodeDoubleQuote |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeDollar |
encoder.EncodeLtGt |
encoder.EncodeSquareBracket |
encoder.EncodeInvalidUtf8),
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"endpoint"`
PublicKey string `config:"public_key"`
PrivateKey string `config:"private_key"`
OnlySigned bool `config:"only_signed"`
Versions bool `config:"versions"`
UploadTags string `config:"upload_tags"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote to ImageKit
type Fs struct {
name string // name of remote
root string // root path
opt Options // parsed options
features *fs.Features // optional features
ik *client.ImageKit // ImageKit client
pacer *fs.Pacer // pacer for API calls
}
// Object describes a ImageKit file
type Object struct {
fs *Fs // The Fs this object is part of
remote string // The remote path
filePath string // The path to the file
contentType string // The content type of the object if known - may be ""
timestamp time.Time // The timestamp of the object if known - may be zero
file client.File // The media file if known - may be nil
versionID string // If present this points to an object version
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
ik, err := client.New(ctx, client.NewParams{
URLEndpoint: opt.Endpoint,
PublicKey: opt.PublicKey,
PrivateKey: opt.PrivateKey,
})
if err != nil {
return nil, err
}
f := &Fs{
name: name,
opt: *opt,
ik: ik,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.root = path.Join("/", root)
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: false,
CanHaveEmptyDirectories: true,
BucketBased: false,
ServerSideAcrossConfigs: false,
IsLocal: false,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: false,
UserMetadata: false,
FilterAware: true,
PartialUploads: false,
NoMultiThreading: false,
}).Fill(ctx, f)
if f.root != "/" {
r := f.root
folderPath := f.EncodePath(r[:strings.LastIndex(r, "/")+1])
fileName := f.EncodeFileName(r[strings.LastIndex(r, "/")+1:])
file := f.getFileByName(ctx, folderPath, fileName)
if file != nil {
newRoot := path.Dir(f.root)
f.root = newRoot
return f, fs.ErrorIsFile
}
}
return f, nil
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return strings.TrimLeft(f.root, "/")
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("FS imagekit: %s", f.root)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash types of the filesystem.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet()
}
// Features returns the optional features of this Fs.
func (f *Fs) Features() *fs.Features {
return f.features
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
remote := path.Join(f.root, dir)
remote = f.EncodePath(remote)
if remote != "/" {
parentFolderPath, folderName := path.Split(remote)
folderExists, err := f.getFolderByName(ctx, parentFolderPath, folderName)
if err != nil {
return make(fs.DirEntries, 0), err
}
if folderExists == nil {
return make(fs.DirEntries, 0), fs.ErrorDirNotFound
}
}
folders, folderError := f.getFolders(ctx, remote)
if folderError != nil {
return make(fs.DirEntries, 0), folderError
}
files, fileError := f.getFiles(ctx, remote, f.opt.Versions)
if fileError != nil {
return make(fs.DirEntries, 0), fileError
}
res := make([]fs.DirEntry, 0, len(folders)+len(files))
for _, folder := range folders {
folderPath := f.DecodePath(strings.TrimLeft(strings.Replace(folder.FolderPath, f.EncodePath(f.root), "", 1), "/"))
res = append(res, fs.NewDir(folderPath, folder.UpdatedAt))
}
for _, file := range files {
res = append(res, f.newObject(ctx, remote, file))
}
return res, nil
}
func (f *Fs) newObject(ctx context.Context, remote string, file client.File) *Object {
remoteFile := strings.TrimLeft(strings.Replace(file.FilePath, f.EncodePath(f.root), "", 1), "/")
folderPath, fileName := path.Split(remoteFile)
folderPath = f.DecodePath(folderPath)
fileName = f.DecodeFileName(fileName)
remoteFile = path.Join(folderPath, fileName)
if file.Type == "file-version" {
remoteFile = version.Add(remoteFile, file.UpdatedAt)
return &Object{
fs: f,
remote: remoteFile,
filePath: file.FilePath,
contentType: file.Mime,
timestamp: file.UpdatedAt,
file: file,
versionID: file.VersionInfo["id"],
}
}
return &Object{
fs: f,
remote: remoteFile,
filePath: file.FilePath,
contentType: file.Mime,
timestamp: file.UpdatedAt,
file: file,
}
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
//
// If remote points to a directory then it should return
// ErrorIsDir if possible without doing any extra work,
// otherwise ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
r := path.Join(f.root, remote)
folderPath, fileName := path.Split(r)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
isFolder, err := f.getFolderByName(ctx, folderPath, fileName)
if err != nil {
return nil, err
}
if isFolder != nil {
return nil, fs.ErrorIsDir
}
file := f.getFileByName(ctx, folderPath, fileName)
if file == nil {
return nil, fs.ErrorObjectNotFound
}
return f.newObject(ctx, r, *file), nil
}
// Put in to the remote path with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
// return an error or upload it properly (rather than e.g. calling panic).
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return uploadFile(ctx, f, in, src.Remote(), options...)
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
remote := path.Join(f.root, dir)
parentFolderPath, folderName := path.Split(remote)
parentFolderPath = f.EncodePath(parentFolderPath)
folderName = f.EncodeFileName(folderName)
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.CreateFolder(ctx, client.CreateFolderParam{
ParentFolderPath: parentFolderPath,
FolderName: folderName,
})
return f.shouldRetry(ctx, res, err)
})
return err
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
if len(entries) > 0 {
return errors.New("directory is not empty")
}
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.DeleteFolder(ctx, client.DeleteFolderParam{
FolderPath: f.EncodePath(path.Join(f.root, dir)),
})
if res != nil && res.StatusCode == http.StatusNotFound {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(ctx, res, err)
})
return err
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
remote := path.Join(f.root, dir)
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.DeleteFolder(ctx, client.DeleteFolderParam{
FolderPath: f.EncodePath(remote),
})
if res != nil && res.StatusCode == http.StatusNotFound {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(ctx, res, err)
})
return err
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
duration := time.Duration(math.Abs(float64(expire)))
expireSeconds := duration.Seconds()
fileRemote := path.Join(f.root, remote)
folderPath, fileName := path.Split(fileRemote)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
file := f.getFileByName(ctx, folderPath, fileName)
if file == nil {
return "", fs.ErrorObjectNotFound
}
// Pacer not needed as this doesn't use the API
url, err := f.ik.URL(client.URLParam{
Src: file.URL,
Signed: *file.IsPrivateFile || f.opt.OnlySigned,
ExpireSeconds: int64(expireSeconds),
QueryParameters: map[string]string{
"updatedAt": file.UpdatedAt.String(),
},
})
if err != nil {
return "", err
}
return url, nil
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Storable says whether this object can be stored
func (o *Object) Storable() bool {
return true
}
// String returns a description of the Object
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.file.Name
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (o *Object) ModTime(context.Context) time.Time {
return o.file.UpdatedAt
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.file.Size)
}
// MimeType returns the MIME type of the file
func (o *Object) MimeType(context.Context) string {
return o.contentType
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
// Offset and Count for range download
var offset int64
var count int64
fs.FixRangeOption(options, -1)
partialContent := false
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(-1)
partialContent = true
case *fs.SeekOption:
offset = x.Offset
partialContent = true
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
// Pacer not needed as this doesn't use the API
url, err := o.fs.ik.URL(client.URLParam{
Src: o.file.URL,
Signed: *o.file.IsPrivateFile || o.fs.opt.OnlySigned,
QueryParameters: map[string]string{
"tr": "orig-true",
"updatedAt": o.file.UpdatedAt.String(),
},
})
if err != nil {
return nil, err
}
client := &http.Client{}
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, err
}
req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+count-1))
resp, err := client.Do(req)
if err != nil {
return nil, err
}
end := resp.ContentLength
if partialContent && resp.StatusCode == http.StatusOK {
skip := offset
if offset < 0 {
skip = end + offset + 1
}
_, err = io.CopyN(io.Discard, resp.Body, skip)
if err != nil {
if resp != nil {
_ = resp.Body.Close()
}
return nil, err
}
return readers.NewLimitedReadCloser(resp.Body, end-skip), nil
}
return resp.Body, nil
}
// Update in to the object with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
srcRemote := o.Remote()
remote := path.Join(o.fs.root, srcRemote)
folderPath, fileName := path.Split(remote)
folderPath = o.fs.EncodePath(folderPath)
fileName = o.fs.EncodeFileName(fileName)
var resp *client.UploadResult
err = o.fs.pacer.Call(func() (bool, error) {
var res *http.Response
res, resp, err = o.fs.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
Folder: folderPath,
IsPrivateFile: o.file.IsPrivateFile,
})
return o.fs.shouldRetry(ctx, res, err)
})
if err != nil {
return err
}
fileID := resp.FileID
_, file, err := o.fs.ik.File(ctx, fileID)
if err != nil {
return err
}
o.file = *file
return nil
}
// Remove this object
func (o *Object) Remove(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = o.fs.ik.DeleteFile(ctx, o.file.FileID)
return o.fs.shouldRetry(ctx, res, err)
})
return err
}
// SetModTime sets the metadata on the object to set the modification date
func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
return fs.ErrorCantSetModTime
}
func uploadFile(ctx context.Context, f *Fs, in io.Reader, srcRemote string, options ...fs.OpenOption) (fs.Object, error) {
remote := path.Join(f.root, srcRemote)
folderPath, fileName := path.Split(remote)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
err := f.pacer.Call(func() (bool, error) {
var res *http.Response
var err error
res, _, err = f.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
Folder: folderPath,
Tags: f.opt.UploadTags,
IsPrivateFile: &f.opt.OnlySigned,
})
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, srcRemote)
}
// Metadata returns the metadata for the object
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
metadata.Set("btime", o.file.CreatedAt.Format(time.RFC3339))
metadata.Set("size", strconv.FormatUint(o.file.Size, 10))
metadata.Set("file-type", o.file.FileType)
metadata.Set("height", strconv.Itoa(o.file.Height))
metadata.Set("width", strconv.Itoa(o.file.Width))
metadata.Set("has-alpha", strconv.FormatBool(o.file.HasAlpha))
for k, v := range o.file.EmbeddedMetadata {
metadata.Set(k, fmt.Sprint(v))
}
if o.file.Tags != nil {
metadata.Set("tags", strings.Join(o.file.Tags, ","))
}
if o.file.CustomCoordinates != nil {
metadata.Set("custom-coordinates", *o.file.CustomCoordinates)
}
if o.file.IsPrivateFile != nil {
metadata.Set("is-private-file", strconv.FormatBool(*o.file.IsPrivateFile))
}
if o.file.AITags != nil {
googleTags := []string{}
awsTags := []string{}
for _, tag := range o.file.AITags {
if tag.Source == "google-auto-tagging" {
googleTags = append(googleTags, tag.Name)
} else if tag.Source == "aws-auto-tagging" {
awsTags = append(awsTags, tag.Name)
}
}
if len(googleTags) > 0 {
metadata.Set("google-tags", strings.Join(googleTags, ","))
}
if len(awsTags) > 0 {
metadata.Set("aws-tags", strings.Join(awsTags, ","))
}
}
return metadata, nil
}
// Copy src to this remote by downloading src and re-uploading it to the destination.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
file, err := srcObj.Open(ctx)
if err != nil {
return nil, err
}
defer func() { _ = file.Close() }()
return uploadFile(ctx, f, file, remote)
}
// Check the interfaces are satisfied.
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.Object = &Object{}
_ fs.Copier = &Fs{}
)

View File

@@ -0,0 +1,18 @@
package imagekit
import (
"testing"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
debug := true
fstest.Verbose = &debug
fstests.Run(t, &fstests.Opt{
RemoteName: "TestImageKit:",
NilObject: (*Object)(nil),
SkipFsCheckWrap: true,
})
}

backend/imagekit/util.go
View File

@@ -0,0 +1,193 @@
package imagekit
import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/rclone/rclone/backend/imagekit/client"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
)
func (f *Fs) getFiles(ctx context.Context, path string, includeVersions bool) (files []client.File, err error) {
files = make([]client.File, 0)
var hasMore = true
for hasMore {
err = f.pacer.Call(func() (bool, error) {
var data *[]client.File
var res *http.Response
res, data, err = f.ik.Files(ctx, client.FilesOrFolderParam{
Skip: len(files),
Limit: 100,
Path: path,
}, includeVersions)
hasMore = len(*data) >= 100 // a full page means there may be more
if len(*data) > 0 {
files = append(files, *data...)
}
return f.shouldRetry(ctx, res, err)
})
}
if err != nil {
return make([]client.File, 0), err
}
return files, nil
}
func (f *Fs) getFolders(ctx context.Context, path string) (folders []client.Folder, err error) {
folders = make([]client.Folder, 0)
var hasMore = true
for hasMore {
err = f.pacer.Call(func() (bool, error) {
var data *[]client.Folder
var res *http.Response
res, data, err = f.ik.Folders(ctx, client.FilesOrFolderParam{
Skip: len(folders),
Limit: 100,
Path: path,
})
hasMore = len(*data) >= 100 // a full page means there may be more
if len(*data) > 0 {
folders = append(folders, *data...)
}
return f.shouldRetry(ctx, res, err)
})
}
if err != nil {
return make([]client.Folder, 0), err
}
return folders, nil
}
func (f *Fs) getFileByName(ctx context.Context, path string, name string) (file *client.File) {
err := f.pacer.Call(func() (bool, error) {
res, data, err := f.ik.Files(ctx, client.FilesOrFolderParam{
Limit: 1,
Path: path,
SearchQuery: fmt.Sprintf(`type = "file" AND name = %s`, strconv.Quote(name)),
}, false)
if len(*data) == 0 {
file = nil
} else {
file = &(*data)[0]
}
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil
}
return file
}
func (f *Fs) getFolderByName(ctx context.Context, path string, name string) (folder *client.Folder, err error) {
err = f.pacer.Call(func() (bool, error) {
res, data, err := f.ik.Folders(ctx, client.FilesOrFolderParam{
Limit: 1,
Path: path,
SearchQuery: fmt.Sprintf(`type = "folder" AND name = %s`, strconv.Quote(name)),
})
if len(*data) == 0 {
folder = nil
} else {
folder = &(*data)[0]
}
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
return folder, nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (e.g. "Token has expired")
408, // Request Timeout
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
503, // Service Unavailable
504, // Gateway Time-out
}
func shouldRetryHTTP(resp *http.Response, retryErrorCodes []int) bool {
if resp == nil {
return false
}
for _, e := range retryErrorCodes {
if resp.StatusCode == e {
return true
}
}
return false
}
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if resp != nil && (resp.StatusCode == 429 || resp.StatusCode == 503) {
var retryAfter = 1
retryAfterString := resp.Header.Get("X-RateLimit-Reset")
if retryAfterString != "" {
var err error
retryAfter, err = strconv.Atoi(retryAfterString)
if err != nil {
fs.Errorf(f, "Malformed %s header %q: %v", "X-RateLimit-Reset", retryAfterString, err)
}
}
return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Millisecond)
}
return fserrors.ShouldRetry(err) || shouldRetryHTTP(resp, retryErrorCodes), err
}
// EncodePath encapsulates the logic for encoding a path
func (f *Fs) EncodePath(str string) string {
return f.opt.Enc.FromStandardPath(str)
}
// DecodePath encapsulates the logic for decoding a path
func (f *Fs) DecodePath(str string) string {
return f.opt.Enc.ToStandardPath(str)
}
// EncodeFileName encapsulates the logic for encoding a file name
func (f *Fs) EncodeFileName(str string) string {
return f.opt.Enc.FromStandardName(str)
}
// DecodeFileName encapsulates the logic for decoding a file name
func (f *Fs) DecodeFileName(str string) string {
return f.opt.Enc.ToStandardName(str)
}

View File

@@ -802,7 +802,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
headers["x-archive-size-hint"] = fmt.Sprintf("%d", size)
}
var mdata fs.Metadata
mdata, err = fs.GetMetadataOptions(ctx, src, options)
mdata, err = fs.GetMetadataOptions(ctx, o.fs, src, options)
if err == nil && mdata != nil {
for mk, mv := range mdata {
mk = strings.ToLower(mk)

View File

@@ -1944,7 +1944,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
in = wrap(in)
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, src, options)
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return fmt.Errorf("failed to read metadata from source object: %w", err)
}
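Both of these hunks adapt callers to the changed fs.GetMetadataOptions signature, which now takes the destination Fs as its second argument in addition to the source ObjectInfo and the open options.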

backend/linkbox/linkbox.go
View File

@@ -0,0 +1,897 @@
// Package linkbox provides an interface to the linkbox.to Cloud storage system.
//
// API docs: https://www.linkbox.to/api-docs
package linkbox
/*
Extras
- PublicLink - NO - sharing doesn't share the actual file, only a page with it on
- Move - YES - have Move and Rename file APIs so is possible
- MoveDir - NO - probably not possible - have Move but no Rename
*/
import (
"bytes"
"context"
"crypto/md5"
"fmt"
"io"
"net/http"
"net/url"
"path"
"regexp"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
)
const (
maxEntitiesPerPage = 1024
minSleep = 200 * time.Millisecond
maxSleep = 2 * time.Second
pacerBurst = 1
linkboxAPIURL = "https://www.linkbox.to/api/open/"
rootID = "0" // ID of root directory
)
func init() {
fsi := &fs.RegInfo{
Name: "linkbox",
Description: "Linkbox",
NewFs: NewFs,
Options: []fs.Option{{
Name: "token",
Help: "Token from https://www.linkbox.to/admin/account",
Sensitive: true,
Required: true,
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Token string `config:"token"`
}
// Fs stores the interface to the remote Linkbox files
type Fs struct {
name string
root string
opt Options // options for this backend
features *fs.Features // optional features
ci *fs.ConfigInfo // global config
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer
}
// Object is a remote object that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
contentType string
fullURL string
dirID int64
itemID string // this ID applies to files
id int64 // this ID appears to apply to directories
isDir bool
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
root = strings.Trim(root, "/")
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
opt: *opt,
root: root,
ci: ci,
srv: rest.NewClient(fshttp.NewClient(ctx)),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep))),
}
f.dirCache = dircache.New(root, rootID, f)
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
CaseInsensitive: true,
}).Fill(ctx, f)
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)
if err != nil {
// No root so return old f
return f, nil
}
_, err := tempF.NewObject(ctx, remote)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
return f, nil
}
return nil, err
}
f.features.Fill(ctx, &tempF)
// XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182
f.dirCache = tempF.dirCache
f.root = tempF.root
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
return f, nil
}
type entity struct {
Type string `json:"type"`
Name string `json:"name"`
URL string `json:"url"`
Ctime int64 `json:"ctime"`
Size int64 `json:"size"`
ID int64 `json:"id"`
Pid int64 `json:"pid"`
ItemID string `json:"item_id"`
}
// Return true if the entity is a directory
func (e *entity) isDir() bool {
return e.Type == "dir" || e.Type == "sdir"
}
type data struct {
Entities []entity `json:"list"`
}
type fileSearchRes struct {
response
SearchData data `json:"data"`
}
// Set an object info from an entity
func (o *Object) set(e *entity) {
o.modTime = time.Unix(e.Ctime, 0)
o.contentType = e.Type
o.size = e.Size
o.fullURL = e.URL
o.isDir = e.isDir()
o.id = e.ID
o.itemID = e.ItemID
o.dirID = e.Pid
}
// Call linkbox with the query in opts and return result
//
// This will be checked for error and an error will be returned if Status != 1
func getUnmarshaledResponse(ctx context.Context, f *Fs, opts *rest.Opts, result interface{}) error {
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
responser := result.(responser)
if responser.IsError() {
return responser
}
return nil
}
// listAllFn is the user function supplied to listAll. It is called on
// each entity found and should return true to finish processing early.
type listAllFn func(*entity) bool
// Search is a bit fussy about which characters match
//
// If the name doesn't match this then do a dir list instead
var searchOK = regexp.MustCompile(`^[a-zA-Z0-9_ .]+$`)
// Lists the directory required, calling the user function on each item found
//
// If the user fn ever returns true then listAll exits early with found = true
//
// If you set name then the search ignores dirID. The search is also a
// substring match, so name="dir" matches "sub dir" too. The loop below
// therefore filters the results so that only items in dirID are returned.
func (f *Fs) listAll(ctx context.Context, dirID string, name string, fn listAllFn) (found bool, err error) {
var (
pageNumber = 0
numberOfEntities = maxEntitiesPerPage
)
name = strings.TrimSpace(name) // search doesn't like spaces
if !searchOK.MatchString(name) {
// If name isn't good then do an unbounded search
name = ""
}
OUTER:
for numberOfEntities == maxEntitiesPerPage {
pageNumber++
opts := &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "file_search",
Parameters: url.Values{
"token": {f.opt.Token},
"name": {name},
"pid": {dirID},
"pageNo": {itoa(pageNumber)},
"pageSize": {itoa64(maxEntitiesPerPage)},
},
}
var responseResult fileSearchRes
err = getUnmarshaledResponse(ctx, f, opts, &responseResult)
if err != nil {
return false, fmt.Errorf("getting files failed: %w", err)
}
numberOfEntities = len(responseResult.SearchData.Entities)
for _, entity := range responseResult.SearchData.Entities {
if itoa64(entity.Pid) != dirID {
// when name != "" the search returns results from all directories, so skip entries that aren't in this one
continue
}
if fn(&entity) {
found = true
break OUTER
}
}
if pageNumber > 100000 {
return false, fmt.Errorf("too many results")
}
}
return found, nil
}
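FindLeaf below uses exactly this pattern; as a further illustration, a sketch of locating a single file by name directly under dirID, inside a method returning an error (the file name is a placeholder):

var match *entity
found, err := f.listAll(ctx, dirID, "report.pdf", func(e *entity) bool {
    if !e.isDir() && strings.EqualFold(e.Name, "report.pdf") {
        match = e
        return true // stop the listing early
    }
    return false
})
if err != nil {
    return err
}
if !found {
    return fs.ErrorObjectNotFound
}
// match now points at the entity for report.pdf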
// Turn 64 bit int to string
func itoa64(i int64) string {
return strconv.FormatInt(i, 10)
}
// Turn int to string
func itoa(i int) string {
return itoa64(int64(i))
}
func splitDirAndName(remote string) (dir string, name string) {
lastSlashPosition := strings.LastIndex(remote, "/")
if lastSlashPosition == -1 {
dir = ""
name = remote
} else {
dir = remote[:lastSlashPosition]
name = remote[lastSlashPosition+1:]
}
// fs.Debugf(nil, "splitDirAndName remote = {%s}, dir = {%s}, name = {%s}", remote, dir, name)
return dir, name
}
// FindLeaf finds a directory of name leaf in the folder with ID directoryID
func (f *Fs) FindLeaf(ctx context.Context, directoryID, leaf string) (directoryIDOut string, found bool, err error) {
// Find the leaf in directoryID
found, err = f.listAll(ctx, directoryID, leaf, func(entity *entity) bool {
if entity.isDir() && strings.EqualFold(entity.Name, leaf) {
directoryIDOut = itoa64(entity.ID)
return true
}
return false
})
return directoryIDOut, found, err
}
// Returned from "folder_create"
type folderCreateRes struct {
response
Data struct {
DirID int64 `json:"dirId"`
} `json:"data"`
}
// CreateDir makes a directory with dirID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, err error) {
// fs.Debugf(f, "CreateDir(%q, %q)\n", dirID, leaf)
opts := &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "folder_create",
Parameters: url.Values{
"token": {f.opt.Token},
"name": {leaf},
"pid": {dirID},
"isShare": {"0"},
"canInvite": {"1"},
"canShare": {"1"},
"withBodyImg": {"1"},
"desc": {""},
},
}
response := folderCreateRes{}
err = getUnmarshaledResponse(ctx, f, opts, &response)
if err != nil {
// response status 1501 means that directory already exists
if response.Status == 1501 {
return newID, fmt.Errorf("couldn't find already created directory: %w", fs.ErrorDirNotFound)
}
return newID, fmt.Errorf("CreateDir failed: %w", err)
}
if response.Data.DirID == 0 {
return newID, fmt.Errorf("API returned 0 for ID of newly created directory")
}
return itoa64(response.Data.DirID), nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// fs.Debugf(f, "List method dir = {%s}", dir)
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
_, err = f.listAll(ctx, directoryID, "", func(entity *entity) bool {
remote := path.Join(dir, entity.Name)
if entity.isDir() {
id := itoa64(entity.ID)
modTime := time.Unix(entity.Ctime, 0)
d := fs.NewDir(remote, modTime).SetID(id).SetParentID(itoa64(entity.Pid))
entries = append(entries, d)
// cache the directory ID for later lookups
f.dirCache.Put(remote, id)
} else {
o := &Object{
fs: f,
remote: remote,
}
o.set(entity)
entries = append(entries, o)
}
return false
})
if err != nil {
return nil, err
}
return entries, nil
}
// get an entity with leaf from dirID
func getEntity(ctx context.Context, f *Fs, leaf string, directoryID string, token string) (*entity, error) {
var result *entity
var resultErr = fs.ErrorObjectNotFound
_, err := f.listAll(ctx, directoryID, leaf, func(entity *entity) bool {
if strings.EqualFold(entity.Name, leaf) {
// fs.Debugf(f, "getObject found entity.Name {%s} name {%s}", entity.Name, name)
if entity.isDir() {
result = nil
resultErr = fs.ErrorIsDir
} else {
result = entity
resultErr = nil
}
return true
}
return false
})
if err != nil {
return nil, err
}
return result, resultErr
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
//
// If remote points to a directory then it should return
// ErrorIsDir if possible without doing any extra work,
// otherwise ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
leaf, dirID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return nil, fs.ErrorObjectNotFound
}
return nil, err
}
entity, err := getEntity(ctx, f, leaf, dirID, f.opt.Token)
if err != nil {
return nil, err
}
o := &Object{
fs: f,
remote: remote,
}
o.set(entity)
return o, nil
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
_, err := f.dirCache.FindDir(ctx, dir, true)
return err
}
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if check {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
if len(entries) != 0 {
return fs.ErrorDirectoryNotEmpty
}
}
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
opts := &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "folder_del",
Parameters: url.Values{
"token": {f.opt.Token},
"dirIds": {directoryID},
},
}
response := response{}
err = getUnmarshaledResponse(ctx, f, opts, &response)
if err != nil {
// Linkbox has some odd error returns here
if response.Status == 403 || response.Status == 500 {
return fs.ErrorDirNotFound
}
return fmt.Errorf("purge error: %w", err)
}
f.dirCache.FlushDir(dir)
return nil
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// SetModTime sets modTime on a particular file
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return fs.ErrorCantSetModTime
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
var res *http.Response
downloadURL := o.fullURL
if downloadURL == "" {
_, name := splitDirAndName(o.Remote())
newObject, err := getEntity(ctx, o.fs, name, itoa64(o.dirID), o.fs.opt.Token)
if err != nil {
return nil, err
}
if newObject == nil {
// fs.Debugf(o.fs, "Open entity is empty: name = {%s}", name)
return nil, fs.ErrorObjectNotFound
}
downloadURL = newObject.URL
}
opts := &rest.Opts{
Method: "GET",
RootURL: downloadURL,
Options: options,
}
err := o.fs.pacer.Call(func() (bool, error) {
var err error
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if err != nil {
return nil, fmt.Errorf("Open failed: %w", err)
}
return res.Body, nil
}
// Update the object with the contents of the io.Reader, with the given modTime and size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Update should either
// return an error or update the object properly (rather than e.g. calling panic).
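// Linkbox uploads are a two step process: get_upload_url is called first to
// obtain a signed URL (or to learn the content is already known), and then
// folder_upload_file registers the file under its parent directory.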
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
size := src.Size()
if size == 0 {
return fs.ErrorCantUploadEmptyFiles
} else if size < 0 {
return fmt.Errorf("can't upload files of unknown length")
}
remote := o.Remote()
// remove the file if it exists
if o.itemID != "" {
fs.Debugf(o, "Update: removing old file")
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Update: failed to remove existing file: %v", err)
}
o.itemID = ""
} else {
tmpObject, err := o.fs.NewObject(ctx, remote)
if err == nil {
fs.Debugf(o, "Update: removing old file")
err = tmpObject.Remove(ctx)
if err != nil {
fs.Errorf(o, "Update: failed to remove existing file: %v", err)
}
}
}
first10m := io.LimitReader(in, 10_485_760)
first10mBytes, err := io.ReadAll(first10m)
if err != nil {
return fmt.Errorf("Update err in reading file: %w", err)
}
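// Linkbox identifies content by the MD5 of the first 10 MiB together with the
// total size, so that much of the stream is buffered here and replayed below
// with io.MultiReader if the body actually needs uploading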
// get upload authorization (step 1)
opts := &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "get_upload_url",
Options: options,
Parameters: url.Values{
"token": {o.fs.opt.Token},
"fileMd5ofPre10m": {fmt.Sprintf("%x", md5.Sum(first10mBytes))},
"fileSize": {itoa64(size)},
},
}
getFirstStepResult := getUploadURLResponse{}
err = getUnmarshaledResponse(ctx, o.fs, opts, &getFirstStepResult)
if err != nil {
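// a status of 600 is not a real failure - it means Linkbox already has
// this content, so the upload body can be skipped (handled below)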
if getFirstStepResult.Status != 600 {
return fmt.Errorf("Update err in unmarshaling response: %w", err)
}
}
switch getFirstStepResult.Status {
case 1:
// upload file using link from first step
var res *http.Response
file := io.MultiReader(bytes.NewReader(first10mBytes), in)
opts := &rest.Opts{
Method: "PUT",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
Body: file,
ContentLength: &size,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if err != nil {
return fmt.Errorf("update err in uploading file: %w", err)
}
_, err = io.ReadAll(res.Body)
if err != nil {
return fmt.Errorf("update err in reading response: %w", err)
}
case 600:
// status 600 means Linkbox already has this content, so the upload
// can be skipped and only the second step is needed
default:
return fmt.Errorf("got unexpected message from Linkbox: %s", getFirstStepResult.Message)
}
leaf, dirID, err := o.fs.dirCache.FindPath(ctx, remote, false)
if err != nil {
return err
}
// create file item at Linkbox (second step)
opts = &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "folder_upload_file",
Options: options,
Parameters: url.Values{
"token": {o.fs.opt.Token},
"fileMd5ofPre10m": {fmt.Sprintf("%x", md5.Sum(first10mBytes))},
"fileSize": {itoa64(size)},
"pid": {dirID},
"diyName": {leaf},
},
}
getSecondStepResult := getUploadURLResponse{}
err = getUnmarshaledResponse(ctx, o.fs, opts, &getSecondStepResult)
if err != nil {
return fmt.Errorf("Update second step failed: %w", err)
}
// Try a few times to read the object after upload for eventual consistency
const maxTries = 10
var sleepTime = 100 * time.Millisecond
var entity *entity
for try := 1; try <= maxTries; try++ {
entity, err = getEntity(ctx, o.fs, leaf, dirID, o.fs.opt.Token)
if err == nil {
break
}
if err != fs.ErrorObjectNotFound {
return fmt.Errorf("Update failed to read object: %w", err)
}
fs.Debugf(o, "Trying to read object after upload: try again in %v (%d/%d)", sleepTime, try, maxTries)
time.Sleep(sleepTime)
sleepTime *= 2
}
if err != nil {
return err
}
o.set(entity)
return nil
}
// Remove this object
func (o *Object) Remove(ctx context.Context) error {
opts := &rest.Opts{
Method: "GET",
RootURL: linkboxAPIURL,
Path: "file_del",
Parameters: url.Values{
"token": {o.fs.opt.Token},
"itemIds": {o.itemID},
},
}
requestResult := getUploadURLResponse{}
err := getUnmarshaledResponse(ctx, o.fs, opts, &requestResult)
if err != nil {
return fmt.Errorf("could not Remove: %w", err)
}
return nil
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// Remote returns the remote path of the object, relative to the fs root
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of the object in bytes
func (o *Object) Size() int64 {
return o.size
}
// String returns a description of the Object
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns "" since HTTP (in Go or OpenSSH) doesn't support remote calculation of hashes
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Storable returns whether this object is storable (always true for Linkbox)
func (o *Object) Storable() bool {
return true
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Name returns the configured name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("Linkbox root '%s'", f.root)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns hash.HashNone to indicate remote hashing is unavailable
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
/*
{
"data": {
"signUrl": "http://xx -- Then CURL PUT your file with sign url "
},
"msg": "please use this url to upload (PUT method)",
"status": 1
}
*/
// All messages have these items
type response struct {
Message string `json:"msg"`
Status int `json:"status"`
}
// IsError returns whether response represents an error
func (r *response) IsError() bool {
return r.Status != 1
}
// Error returns the error state of this response
func (r *response) Error() string {
return fmt.Sprintf("Linkbox error %d: %s", r.Status, r.Message)
}
// responser is an interface covering the response so we can use it when it is embedded.
type responser interface {
IsError() bool
Error() string
}
type getUploadURLData struct {
SignURL string `json:"signUrl"`
}
type getUploadURLResponse struct {
response
Data getUploadURLData `json:"data"`
}
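// Decoding the example response above into getUploadURLResponse yields
// Status == 1 and Data.SignURL set to the URL to PUT the file body to.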
// Put uploads the contents of the io.Reader to the remote path with the given modTime and size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
// return an error or upload it properly (rather than e.g. calling panic).
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o := &Object{
fs: f,
remote: src.Remote(),
size: src.Size(),
}
dir, _ := splitDirAndName(src.Remote())
err := f.Mkdir(ctx, dir)
if err != nil {
return nil, err
}
err = o.Update(ctx, in, src, options...)
return o, err
}
// Purge all files in the directory specified
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry determines whether a given err deserves to be retried
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// DirCacheFlush resets the directory cache - used in testing as an
// optional interface
func (f *Fs) DirCacheFlush() {
f.dirCache.ResetRoot()
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.DirCacheFlusher = &Fs{}
_ fs.Object = &Object{}
)


@@ -0,0 +1,17 @@
// Test Linkbox filesystem interface
package linkbox_test
import (
"testing"
"github.com/rclone/rclone/backend/linkbox"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestLinkbox:",
NilObject: (*linkbox.Object)(nil),
})
}


@@ -146,6 +146,11 @@ time we:
- Only checksum the size that stat gave
- Don't update the stat info for the file
**NB** do not use this flag on a Windows Volume Shadow Copy (VSS). For some
unknown reason, files in a VSS sometimes show different sizes from the
directory listing (where the initial stat value comes from on Windows)
and when stat is called on them directly. Other copy tools always use
the direct stat value and setting this flag will disable that.
`,
Default: false,
Advanced: true,
@@ -1123,6 +1128,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
}
// Update the file info before we start reading
err = o.lstat()
if err != nil {
return nil, err
}
// If not checking updated then limit to current size. This means if
// file is being extended, readers will read o.Size() bytes rather
// than the new size making for a consistent upload.
@@ -1287,7 +1298,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Fetch and set metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, src, options)
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return fmt.Errorf("failed to read metadata from source object: %w", err)
}


@@ -324,6 +324,37 @@ the --onedrive-av-override flag, or av_override = true in the config
file.
`,
Advanced: true,
}, {
Name: "delta",
Default: false,
Help: strings.ReplaceAll(`If set rclone will use delta listing to implement recursive listings.
If this flag is set then the onedrive backend will advertise |ListR|
support for recursive listings.
Setting this flag speeds up these things greatly:
rclone lsf -R onedrive:
rclone size onedrive:
rclone rc vfs/refresh recursive=true
**However** the delta listing API **only** works at the root of the
drive. If you use it anywhere other than the root then it recurses from the root
and discards all the data that is not under the directory you asked
for. So it will be correct but may not be very efficient.
This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the |root/directory| in |onedrive:root/directory|) then
using this flag will be a big performance win. If your data is
mostly not under the root then using this flag will be a big
performance loss.
It is recommended if you are mounting your onedrive at the root
(or near the root when using crypt) and using rclone |rc vfs/refresh|.
`, "|", "`"),
Advanced: true,
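// An illustrative config stanza with delta listings enabled (remote name assumed):
//
//	[onedrive]
//	type = onedrive
//	delta = true
//
// after which "rclone lsf -R onedrive:" is served by the delta API.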
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -645,6 +676,7 @@ type Options struct {
LinkPassword string `config:"link_password"`
HashType string `config:"hash_type"`
AVOverride bool `config:"av_override"`
Delta bool `config:"delta"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -976,6 +1008,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.dirCache = dircache.New(root, rootID, f)
// ListR only supported if delta set
if !f.opt.Delta {
f.features.ListR = nil
}
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
@@ -1204,10 +1241,14 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) error {
entry, err := f.itemToDirEntry(ctx, dir, info)
-if err == nil {
-entries = append(entries, entry)
+if err != nil {
+return err
}
-return err
+if entry == nil {
+return nil
+}
+entries = append(entries, entry)
+return nil
})
if err != nil {
return nil, err
@@ -1302,6 +1343,9 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
if err != nil {
return err
}
if entry == nil {
return nil
}
err = list.Add(entry)
if err != nil {
return err


@@ -295,7 +295,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
// Set the mtime in the metadata
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, src, options)
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return ui, fmt.Errorf("failed to read metadata from source object: %w", err)
}
@@ -399,13 +399,17 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
func (o *Object) createMultipartUpload(ctx context.Context, putReq *objectstorage.PutObjectRequest) (
uploadID string, existingParts map[int]objectstorage.MultipartUploadPartSummary, err error) {
bucketName, bucketPath := o.split()
-f := o.fs
-if f.opt.AttemptResumeUpload {
+err = o.fs.makeBucket(ctx, bucketName)
+if err != nil {
+fs.Errorf(o, "failed to create bucket: %v, err: %v", bucketName, err)
+return uploadID, existingParts, err
+}
+if o.fs.opt.AttemptResumeUpload {
fs.Debugf(o, "attempting to resume upload for %v (if any)", o.remote)
resumeUploads, err := o.fs.findLatestMultipartUpload(ctx, bucketName, bucketPath)
if err == nil && len(resumeUploads) > 0 {
uploadID = *resumeUploads[0].UploadId
-existingParts, err = f.listMultipartUploadParts(ctx, bucketName, bucketPath, uploadID)
+existingParts, err = o.fs.listMultipartUploadParts(ctx, bucketName, bucketPath, uploadID)
if err == nil {
fs.Debugf(o, "resuming with existing upload id: %v", uploadID)
return uploadID, existingParts, err


@@ -401,7 +401,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
multipart = false
}
if multipart {
err = o.uploadMultipart(ctx, src, in)
err = o.uploadMultipart(ctx, src, in, options...)
if err != nil {
return err
}


@@ -138,6 +138,14 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
func (f *Fs) setCopyCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.CopyCutoff = f.opt.CopyCutoff, cs
}
return
}
// ------------------------------------------------------------
// Implement backend that represents a remote object storage server
// Fs is the interface a cloud storage system must provide


@@ -30,4 +30,12 @@ func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)


@@ -6,8 +6,8 @@ import (
"time"
)
// OverwriteOnCopyMode is a conflict resolve mode during copy. Files with conflicting names will be overwritten
const OverwriteOnCopyMode = "overwrite"
// OverwriteMode is a conflict resolve mode during copy or move. Files with conflicting names will be overwritten
const OverwriteMode = "overwrite"
// ProfileInfo is a profile info about quota
type ProfileInfo struct {


@@ -193,6 +193,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
CaseInsensitive: false,
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
if f.opt.APIKey != "" {
@@ -728,7 +729,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Resolve: true,
MTime: api.JSONTime(srcObj.ModTime(ctx)),
Name: dstLeaf,
ResolveMode: api.OverwriteOnCopyMode,
ResolveMode: api.OverwriteMode,
}
result := &api.File{}
@@ -788,11 +789,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
params := &api.FileCopyMoveOneParams{
-ID: srcObj.id,
-Target: directoryID,
-Resolve: false,
-MTime: api.JSONTime(srcObj.ModTime(ctx)),
-Name: dstLeaf,
+ID: srcObj.id,
+Target: directoryID,
+Resolve: true,
+MTime: api.JSONTime(srcObj.ModTime(ctx)),
+Name: dstLeaf,
+ResolveMode: api.OverwriteMode,
}
var resp *http.Response


@@ -15,6 +15,7 @@ import (
"errors"
"fmt"
"io"
"math"
"net/http"
"net/url"
"path"
@@ -139,6 +140,9 @@ var providerOption = fs.Option{
}, {
Value: "RackCorp",
Help: "RackCorp Object Storage",
}, {
Value: "Rclone",
Help: "Rclone S3 Server",
}, {
Value: "Scaleway",
Help: "Scaleway Object Storage",
@@ -2422,6 +2426,19 @@ See [the time option docs](/docs/#time-option) for valid formats.
`,
Default: fs.Time{},
Advanced: true,
}, {
Name: "version_deleted",
Help: `Show deleted file markers when using versions.
This shows deleted file markers in the listing when using versions. These will appear
as 0 size files. The only operation which can be performed on them is deletion.
Deleting a delete marker will reveal the previous version.
Deleted files will always show with a timestamp.
`,
Default: false,
Advanced: true,
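// An illustrative listing that shows delete markers (flag names derived
// from the option names):
//
//	rclone lsl --s3-versions --s3-version-deleted remote:bucket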
}, {
Name: "decompress",
Help: `If set this will decompress gzip encoded objects.
@@ -2488,6 +2505,45 @@ In this case, you might want to try disabling this option.
Help: "Endpoint for STS.\n\nLeave blank if using AWS to use the default endpoint for the region.",
Provider: "AWS",
Advanced: true,
}, {
Name: "use_already_exists",
Help: strings.ReplaceAll(`Set if rclone should report BucketAlreadyExists errors on bucket creation.
At some point during the evolution of the s3 protocol, AWS started
returning an |AlreadyOwnedByYou| error when attempting to create a
bucket that the user already owned, rather than a
|BucketAlreadyExists| error.
Unfortunately exactly what has been implemented by s3 clones is a
little inconsistent: some return |AlreadyOwnedByYou|, some return
|BucketAlreadyExists|, and some return no error at all.
This is important to rclone because it ensures the bucket exists by
creating it on quite a lot of operations (unless
|--s3-no-check-bucket| is used).
If rclone knows the provider can return |AlreadyOwnedByYou| or returns
no error then it can report |BucketAlreadyExists| errors when the user
attempts to create a bucket not owned by them. Otherwise rclone
ignores the |BucketAlreadyExists| error which can lead to confusion.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
`, "|", "`"),
Default: fs.Tristate{},
Advanced: true,
}, {
Name: "use_multipart_uploads",
Help: `Set if rclone should use multipart uploads.
You can change this if you want to disable the use of multipart uploads.
This shouldn't be necessary in normal operation.
This should be automatically set correctly for all providers rclone
knows about - please make a bug report if not.
`,
Default: fs.Tristate{},
Advanced: true,
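// When use_multipart_uploads is forced off, setQuirks below raises the
// upload cutoff to math.MaxInt64 so every object is uploaded in a single PUT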
},
}})
}
@@ -2610,10 +2666,13 @@ type Options struct {
UsePresignedRequest bool `config:"use_presigned_request"`
Versions bool `config:"versions"`
VersionAt fs.Time `config:"version_at"`
VersionDeleted bool `config:"version_deleted"`
Decompress bool `config:"decompress"`
MightGzip fs.Tristate `config:"might_gzip"`
UseAcceptEncodingGzip fs.Tristate `config:"use_accept_encoding_gzip"`
NoSystemMetadata bool `config:"no_system_metadata"`
UseAlreadyExists fs.Tristate `config:"use_already_exists"`
UseMultipartUploads fs.Tristate `config:"use_multipart_uploads"`
}
// Fs represents a remote s3 server
@@ -2868,6 +2927,7 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
case v.AccessKeyID == "" && v.SecretAccessKey == "":
// if no access key/secret and iam is explicitly disabled then fall back to anon interaction
cred = credentials.AnonymousCredentials
fs.Debugf(nil, "Using anonymous credentials - did you mean to set env_auth=true?")
case v.AccessKeyID == "":
return nil, nil, errors.New("access_key_id not found")
case v.SecretAccessKey == "":
@@ -2958,13 +3018,23 @@ func checkUploadCutoff(cs fs.SizeSuffix) error {
}
func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
-err = checkUploadCutoff(cs)
+if f.opt.Provider != "Rclone" {
+err = checkUploadCutoff(cs)
+}
if err == nil {
old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
}
return
}
func (f *Fs) setCopyCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.CopyCutoff = f.opt.CopyCutoff, cs
}
return
}
// setEndpointValueForIDriveE2 gets user region endpoint against the Access Key details by calling the API
func setEndpointValueForIDriveE2(m configmap.Mapper) (err error) {
value, ok := m.Get(fs.ConfigProvider)
@@ -3012,6 +3082,8 @@ func setQuirks(opt *Options) {
useMultipartEtag = true // Set if Etags for multipart uploads are compatible with AWS
useAcceptEncodingGzip = true // Set Accept-Encoding: gzip
mightGzip = true // assume all providers might use content encoding gzip until proven otherwise
useAlreadyExists = true // Set if provider returns AlreadyOwnedByYou or no error if you try to remake your own bucket
useMultipartUploads = true // Set if provider supports multipart uploads
)
switch opt.Provider {
case "AWS":
@@ -3019,18 +3091,22 @@ func setQuirks(opt *Options) {
mightGzip = false // Never auto gzips objects
case "Alibaba":
useMultipartEtag = false // Alibaba seems to calculate multipart Etags differently from AWS
useAlreadyExists = true // returns 200 OK
case "HuaweiOBS":
// Huawei OBS PFS does not support listObjectsV2, and if urlEncodeListings is turned on the marker does not work and the listing keeps returning the same page forever.
urlEncodeListings = false
listObjectsV2 = false
useAlreadyExists = false // untested
case "Ceph":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useAlreadyExists = false // untested
case "ChinaMobile":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useAlreadyExists = false // untested
case "Cloudflare":
virtualHostStyle = false
useMultipartEtag = false // currently multipart Etags are random
@@ -3038,88 +3114,111 @@ func setQuirks(opt *Options) {
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useAlreadyExists = false // untested
case "DigitalOcean":
urlEncodeListings = false
useAlreadyExists = false // untested
case "Dreamhost":
urlEncodeListings = false
useAlreadyExists = false // untested
case "IBMCOS":
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
useAlreadyExists = false // returns BucketAlreadyExists
case "IDrive":
virtualHostStyle = false
useAlreadyExists = false // untested
case "IONOS":
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false
urlEncodeListings = false
useAlreadyExists = false // untested
case "Petabox":
// No quirks
useAlreadyExists = false // untested
case "Liara":
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
useAlreadyExists = false // untested
case "Linode":
// No quirks
useAlreadyExists = true // returns 200 OK
case "LyveCloud":
useMultipartEtag = false // LyveCloud seems to calculate multipart Etags differently from AWS
useAlreadyExists = false // untested
case "Minio":
virtualHostStyle = false
case "Netease":
listObjectsV2 = false // untested
urlEncodeListings = false
useMultipartEtag = false // untested
useAlreadyExists = false // untested
case "RackCorp":
// No quirks
useMultipartEtag = false // untested
useAlreadyExists = false // untested
case "Rclone":
listObjectsV2 = true
urlEncodeListings = true
virtualHostStyle = false
useMultipartEtag = false
useAlreadyExists = false
// useMultipartUploads = false - set this manually
case "Scaleway":
// Scaleway can only have 1000 parts in an upload
if opt.MaxUploadParts > 1000 {
opt.MaxUploadParts = 1000
}
urlEncodeListings = false
useAlreadyExists = false // untested
case "SeaweedFS":
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
useAlreadyExists = false // untested
case "StackPath":
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useAlreadyExists = false // untested
case "Storj":
// Force chunk size to >= 64 MiB
if opt.ChunkSize < 64*fs.Mebi {
opt.ChunkSize = 64 * fs.Mebi
}
useAlreadyExists = false // returns BucketAlreadyExists
case "Synology":
useMultipartEtag = false
useAlreadyExists = false // untested
case "TencentCOS":
listObjectsV2 = false // untested
useMultipartEtag = false // untested
useAlreadyExists = false // untested
case "Wasabi":
// No quirks
useAlreadyExists = true // returns 200 OK
case "Leviia":
// No quirks
useAlreadyExists = false // untested
case "Qiniu":
useMultipartEtag = false
urlEncodeListings = false
virtualHostStyle = false
useAlreadyExists = false // untested
case "GCS":
// Google breaks the request signature by mutating the Accept-Encoding HTTP header
// https://github.com/rclone/rclone/issues/6670
useAcceptEncodingGzip = false
useAlreadyExists = true // returns BucketNameUnavailable instead of BucketAlreadyExists but good enough!
-default:
-fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
-fallthrough
case "Other":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
+useAlreadyExists = false
+default:
+fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
+listObjectsV2 = false
+virtualHostStyle = false
+urlEncodeListings = false
+useMultipartEtag = false
+useAlreadyExists = false
}
// Path Style vs Virtual Host style
@@ -3159,6 +3258,22 @@ func setQuirks(opt *Options) {
opt.UseAcceptEncodingGzip.Valid = true
opt.UseAcceptEncodingGzip.Value = useAcceptEncodingGzip
}
// Has the provider got AlreadyOwnedByYou error?
if !opt.UseAlreadyExists.Valid {
opt.UseAlreadyExists.Valid = true
opt.UseAlreadyExists.Value = useAlreadyExists
}
// Set the correct use multipart uploads if not manually set
if !opt.UseMultipartUploads.Valid {
opt.UseMultipartUploads.Valid = true
opt.UseMultipartUploads.Value = useMultipartUploads
}
if !opt.UseMultipartUploads.Value {
opt.UploadCutoff = math.MaxInt64
}
}
// setRoot changes the root of the Fs
@@ -3271,6 +3386,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.CanHaveEmptyDirectories = true
}
// f.listMultipartUploads()
if !opt.UseMultipartUploads.Value {
fs.Debugf(f, "Disabling multipart uploads")
f.features.OpenChunkWriter = nil
}
if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject && !strings.HasSuffix(root, "/") {
// Check to see if the (bucket,directory) is actually an existing file
@@ -3315,6 +3434,7 @@ func (f *Fs) getMetaDataListing(ctx context.Context, wantRemote string) (info *s
withVersions: f.opt.Versions,
findFile: true,
versionAt: f.opt.VersionAt,
hidden: f.opt.VersionDeleted,
}, func(gotRemote string, object *s3.Object, objectVersionID *string, isDirectory bool) error {
if isDirectory {
return nil
@@ -3376,6 +3496,10 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
o.bytes = aws.Int64Value(info.Size)
o.storageClass = stringClonePointer(info.StorageClass)
o.versionID = stringClonePointer(versionID)
// If this is a delete marker, mark the metadata as already read since there is none to read
if info.Size == isDeleteMarker {
o.meta = map[string]string{}
}
} else if !o.fs.opt.NoHeadObject {
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
@@ -3626,6 +3750,9 @@ func (ls *versionsList) List(ctx context.Context) (resp *s3.ListObjectsV2Output,
// Set up the request for next time
ls.req.KeyMarker = respVersions.NextKeyMarker
ls.req.VersionIdMarker = respVersions.NextVersionIdMarker
if aws.BoolValue(respVersions.IsTruncated) && ls.req.KeyMarker == nil {
return nil, nil, errors.New("s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker")
}
// If we are URL encoding then must decode the marker
if ls.req.KeyMarker != nil && ls.req.EncodingType != nil {
@@ -3670,7 +3797,7 @@ func (ls *versionsList) List(ctx context.Context) (resp *s3.ListObjectsV2Output,
//structs.SetFrom(obj, objVersion)
setFrom_s3Object_s3ObjectVersion(obj, objVersion)
// Adjust the file names
if !ls.usingVersionAt && !aws.BoolValue(objVersion.IsLatest) {
if !ls.usingVersionAt && (!aws.BoolValue(objVersion.IsLatest) || objVersion.Size == isDeleteMarker) {
if obj.Key != nil && objVersion.LastModified != nil {
*obj.Key = version.Add(*obj.Key, *objVersion.LastModified)
}
@@ -3938,6 +4065,7 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
addBucket: addBucket,
withVersions: f.opt.Versions,
versionAt: f.opt.VersionAt,
hidden: f.opt.VersionDeleted,
}, func(remote string, object *s3.Object, versionID *string, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, versionID, isDirectory)
if err != nil {
@@ -4024,6 +4152,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
recurse: true,
withVersions: f.opt.Versions,
versionAt: f.opt.VersionAt,
hidden: f.opt.VersionDeleted,
}, func(remote string, object *s3.Object, versionID *string, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, versionID, isDirectory)
if err != nil {
@@ -4187,8 +4316,17 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
fs.Infof(f, "Bucket %q created with ACL %q", bucket, f.opt.BucketACL)
}
if awsErr, ok := err.(awserr.Error); ok {
-if code := awsErr.Code(); code == "BucketAlreadyOwnedByYou" || code == "BucketAlreadyExists" {
+switch awsErr.Code() {
+case "BucketAlreadyOwnedByYou":
err = nil
+case "BucketAlreadyExists", "BucketNameUnavailable":
+if f.opt.UseAlreadyExists.Value {
+// We can trust BucketAlreadyExists to mean not owned by us, so make it non retriable
+err = fserrors.NoRetryError(err)
+} else {
+// We can't trust BucketAlreadyExists to mean not owned by us, so ignore it
+err = nil
+}
}
}
return err
@@ -4781,6 +4919,7 @@ func (f *Fs) restoreStatus(ctx context.Context, all bool) (out []restoreStatusOu
recurse: true,
withVersions: f.opt.Versions,
versionAt: f.opt.VersionAt,
hidden: f.opt.VersionDeleted,
restoreStatus: true,
}, func(remote string, object *s3.Object, versionID *string, isDirectory bool) error {
entry, err := f.itemToDirEntry(ctx, remote, object, versionID, isDirectory)
@@ -5904,7 +6043,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, src, options)
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return ui, fmt.Errorf("failed to read metadata from source object: %w", err)
}
@@ -6070,7 +6209,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var err error
var ui uploadInfo
if multipart {
wantETag, gotETag, versionID, ui, err = o.uploadMultipart(ctx, src, in)
wantETag, gotETag, versionID, ui, err = o.uploadMultipart(ctx, src, in, options...)
} else {
ui, err = o.prepareUpload(ctx, src, options)
if err != nil {


@@ -12,6 +12,7 @@ import (
"time"
"github.com/aws/aws-sdk-go/aws"
"github.com/aws/aws-sdk-go/aws/awserr"
"github.com/aws/aws-sdk-go/service/s3"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
@@ -393,6 +394,41 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
}
})
t.Run("Mkdir", func(t *testing.T) {
// Test what happens when we create a bucket we already own and see whether the
// quirk is set correctly
req := s3.CreateBucketInput{
Bucket: &f.rootBucket,
ACL: stringPointerOrNil(f.opt.BucketACL),
}
if f.opt.LocationConstraint != "" {
req.CreateBucketConfiguration = &s3.CreateBucketConfiguration{
LocationConstraint: &f.opt.LocationConstraint,
}
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
return f.shouldRetry(ctx, err)
})
var errString string
if err == nil {
errString = "No Error"
} else if awsErr, ok := err.(awserr.Error); ok {
errString = awsErr.Code()
} else {
assert.Fail(t, "Unknown error %T %v", err, err)
}
t.Logf("Creating a bucket we already have created returned code: %s", errString)
switch errString {
case "BucketAlreadyExists":
assert.False(t, f.opt.UseAlreadyExists.Value, "Need to clear UseAlreadyExists quirk")
case "No Error", "BucketAlreadyOwnedByYou":
assert.True(t, f.opt.UseAlreadyExists.Value, "Need to set UseAlreadyExists quirk")
default:
assert.Fail(t, "Unknown error string %q", errString)
}
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.CleanUpHidden(ctx))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)


@@ -47,4 +47,12 @@ func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var _ fstests.SetUploadChunkSizer = (*Fs)(nil)
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)


@@ -449,6 +449,26 @@ Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "copy_is_hardlink",
Default: false,
Help: `Set to enable server side copies using hardlinks.
The SFTP protocol does not define a copy command so normally server
side copies are not allowed with the sftp backend.
However the SFTP protocol does support hardlinking, and if you enable
this flag then the sftp backend will support server side copies. These
will be implemented by doing a hardlink from the source to the
destination.
Not all sftp servers support this.
Note that hardlinking two files together will use no additional space
as the source and the destination will be the same file.
This feature may be useful for backups made with --copy-dest.`,
Advanced: true,
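// Illustrative use, assuming the flag name derived from the option:
//
//	rclone copyto --sftp-copy-is-hardlink remote:src.bin remote:dst.bin
//
// which hardlinks dst.bin to src.bin on the server (see Copy below)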
}},
}
fs.Register(fsi)
@@ -490,6 +510,7 @@ type Options struct {
HostKeyAlgorithms fs.SpaceSepList `config:"host_key_algorithms"`
SSH fs.SpaceSepList `config:"ssh"`
SocksProxy string `config:"socks_proxy"`
CopyIsHardlink bool `config:"copy_is_hardlink"`
}
// Fs stores the interface to the remote SFTP files
@@ -1049,6 +1070,10 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
SlowHash: true,
PartialUploads: true,
}).Fill(ctx, f)
if !opt.CopyIsHardlink {
// Disable server side copy unless --sftp-copy-is-hardlink is set
f.features.Copy = nil
}
// Make a connection and pool it to return errors early
c, err := f.getSftpConnection(ctx)
if err != nil {
@@ -1401,6 +1426,43 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// Copy server side copies a remote sftp file object using hardlinks
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
if !f.opt.CopyIsHardlink {
return nil, fs.ErrorCantCopy
}
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err := f.mkParentDir(ctx, remote)
if err != nil {
return nil, fmt.Errorf("Copy mkParentDir failed: %w", err)
}
c, err := f.getSftpConnection(ctx)
if err != nil {
return nil, fmt.Errorf("Copy: %w", err)
}
srcPath, dstPath := srcObj.path(), path.Join(f.absRoot, remote)
err = c.sftpClient.Link(srcPath, dstPath)
f.putSftpConnection(&c, err)
if err != nil {
if sftpErr, ok := err.(*sftp.StatusError); ok {
if sftpErr.FxCode() == sftp.ErrSSHFxOpUnsupported {
// Remote doesn't support Link
return nil, fs.ErrorCantCopy
}
}
return nil, fmt.Errorf("Copy failed: %w", err)
}
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, fmt.Errorf("Copy NewObject failed: %w", err)
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
@@ -2120,6 +2182,7 @@ var (
_ fs.Fs = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
_ fs.Copier = &Fs{}
_ fs.DirMover = &Fs{}
_ fs.Abouter = &Fs{}
_ fs.Shutdowner = &Fs{}


@@ -6,7 +6,7 @@ import (
"net"
"time"
smb2 "github.com/hirochachacha/go-smb2"
smb2 "github.com/cloudsoda/go-smb2"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config/obscure"
@@ -40,7 +40,7 @@ func (f *Fs) dial(ctx context.Context, network, addr string) (*conn, error) {
},
}
session, err := d.DialContext(ctx, tconn)
session, err := d.DialConn(ctx, tconn, addr)
if err != nil {
return nil, err
}


@@ -12,7 +12,6 @@ import (
"sync/atomic"
"time"
smb2 "github.com/hirochachacha/go-smb2"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
@@ -178,6 +177,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
CaseInsensitive: opt.CaseInsensitive,
CanHaveEmptyDirectories: true,
BucketBased: true,
PartialUploads: true,
}).Fill(ctx, f)
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
@@ -477,26 +477,6 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return usage, nil
}
// Wrap a smb2.File with a custom Close method
type closeSession struct {
*smb2.File
close func() error
closed bool
}
// Close the handle and call the custom code
func (c *closeSession) Close() error {
err := c.File.Close()
if !c.closed {
err2 := c.close()
if err == nil {
err = err2
}
c.closed = true
}
return err
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
@@ -530,19 +510,10 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
if err != nil {
o.fs.putConnection(&cn)
return nil, fmt.Errorf("failed to open: %w", err)
}
-// Connection is returned in the closeSession.Close method
-c := &closeSession{
-File: fl,
-close: func() error {
-o.fs.putConnection(&cn)
-return nil
-},
-}
-return c, nil
+return fl, nil
}
// Shutdown the backend, closing any background tasks and any


@@ -121,9 +121,8 @@ func (p *Prop) Hashes() (hashes map[hash.Type]string) {
hashes = make(map[hash.Type]string)
hashes[hash.SHA1] = *p.MESha1Hex
return hashes
-} else {
-return nil
}
+return nil
}
// PropValue is a tagged name and value


@@ -91,6 +91,9 @@ func init() {
}, {
Value: "sharepoint-ntlm",
Help: "Sharepoint with NTLM authentication, usually self-hosted or on-premises",
}, {
Value: "rclone",
Help: "rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol",
}, {
Value: "other",
Help: "Other site/service or software",
@@ -644,6 +647,10 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
// so we must perform an extra check to detect this
// condition and return a proper error code.
f.checkBeforePurge = true
case "rclone":
f.canStream = true
f.precision = time.Second
f.useOCMtime = true
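// rclone serve webdav can stream uploads and keeps modtimes to second
// precision; useOCMtime makes the client send the ownCloud style
// X-OC-Mtime header on uploads, which this vendor is expected to honour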
case "other":
default:
fs.Debugf(f, "Unknown vendor %q", vendor)


@@ -7,3 +7,6 @@
<ankur0493@gmail.com>
<agupta@egnyte.com>
<ricci@disroot.org>
<stoesser@yay-digital.de>
<services+github@simjo.st>
<seb•ɑƬ•chezwam•ɖɵʈ•org>


@@ -6,7 +6,6 @@
package main
import (
"encoding/json"
"flag"
"fmt"
"log"
@@ -21,23 +20,21 @@ import (
"sync"
"text/template"
"time"
"github.com/coreos/go-semver/semver"
)
var (
// Flags
debug = flag.Bool("d", false, "Print commands instead of running them.")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.")
debug = flag.Bool("d", false, "Print commands instead of running them")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel")
copyAs = flag.String("release", "", "Make copies of the releases with this name")
gitLog = flag.String("git-log", "", "git log to include as well")
include = flag.String("include", "^.*$", "os/arch regexp to include")
exclude = flag.String("exclude", "^$", "os/arch regexp to exclude")
cgo = flag.Bool("cgo", false, "Use cgo for the build")
noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.")
noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running")
tags = flag.String("tags", "", "Space separated list of build tags")
buildmode = flag.String("buildmode", "", "Passed to go build -buildmode flag")
compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.")
compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip")
extraEnv = flag.String("env", "", "comma separated list of VAR=VALUE env vars to set")
macOSSDK = flag.String("macos-sdk", "", "macOS SDK to use")
macOSArch = flag.String("macos-arch", "", "macOS arch to use")
@@ -140,21 +137,21 @@ func chdir(dir string) {
func substitute(inFile, outFile string, data interface{}) {
t, err := template.ParseFiles(inFile)
if err != nil {
log.Fatalf("Failed to read template file %q: %v %v", inFile, err)
log.Fatalf("Failed to read template file %q: %v", inFile, err)
}
out, err := os.Create(outFile)
if err != nil {
log.Fatalf("Failed to create output file %q: %v %v", outFile, err)
log.Fatalf("Failed to create output file %q: %v", outFile, err)
}
defer func() {
err := out.Close()
if err != nil {
log.Fatalf("Failed to close output file %q: %v %v", outFile, err)
log.Fatalf("Failed to close output file %q: %v", outFile, err)
}
}()
err = t.Execute(out, data)
if err != nil {
log.Fatalf("Failed to substitute template file %q: %v %v", inFile, err)
log.Fatalf("Failed to substitute template file %q: %v", inFile, err)
}
}
@@ -202,101 +199,6 @@ func buildDebAndRpm(dir, version, goarch string) []string {
return artifacts
}
// generate system object (syso) file to be picked up by a following go build for embedding icon and version info resources into windows executable
func buildWindowsResourceSyso(goarch string, versionTag string) string {
type M map[string]interface{}
version := strings.TrimPrefix(versionTag, "v")
semanticVersion := semver.New(version)
// Build json input to goversioninfo utility
bs, err := json.Marshal(M{
"FixedFileInfo": M{
"FileVersion": M{
"Major": semanticVersion.Major,
"Minor": semanticVersion.Minor,
"Patch": semanticVersion.Patch,
},
"ProductVersion": M{
"Major": semanticVersion.Major,
"Minor": semanticVersion.Minor,
"Patch": semanticVersion.Patch,
},
},
"StringFileInfo": M{
"CompanyName": "https://rclone.org",
"ProductName": "Rclone",
"FileDescription": "Rclone",
"InternalName": "rclone",
"OriginalFilename": "rclone.exe",
"LegalCopyright": "The Rclone Authors",
"FileVersion": version,
"ProductVersion": version,
},
"IconPath": "../graphics/logo/ico/logo_symbol_color.ico",
})
if err != nil {
log.Printf("Failed to build version info json: %v", err)
return ""
}
// Write json to temporary file that will only be used by the goversioninfo command executed below.
jsonPath, err := filepath.Abs("versioninfo_windows_" + goarch + ".json") // Appending goos and goarch as suffix to avoid any race conditions
if err != nil {
log.Printf("Failed to resolve path: %v", err)
return ""
}
err = os.WriteFile(jsonPath, bs, 0644)
if err != nil {
log.Printf("Failed to write %s: %v", jsonPath, err)
return ""
}
defer func() {
if err := os.Remove(jsonPath); err != nil {
if !os.IsNotExist(err) {
log.Printf("Warning: Couldn't remove generated %s: %v. Please remove it manually.", jsonPath, err)
}
}
}()
// Execute goversioninfo utility using the json file as input.
// It will produce a system object (syso) file that a following go build should pick up.
sysoPath, err := filepath.Abs("../resource_windows_" + goarch + ".syso") // Appending goos and goarch as suffix to avoid any race conditions, and also it is recognized by go build and avoids any builds for other systems considering it
if err != nil {
log.Printf("Failed to resolve path: %v", err)
return ""
}
args := []string{
"goversioninfo",
"-o",
sysoPath,
}
if strings.Contains(goarch, "64") {
args = append(args, "-64") // Make the syso a 64-bit coff file
}
if strings.Contains(goarch, "arm") {
args = append(args, "-arm") // Make the syso an arm binary
}
args = append(args, jsonPath)
err = runEnv(args, nil)
if err != nil {
return ""
}
return sysoPath
}
// delete generated system object (syso) resource file
func cleanupResourceSyso(sysoFilePath string) {
if sysoFilePath == "" {
return
}
if err := os.Remove(sysoFilePath); err != nil {
if !os.IsNotExist(err) {
log.Printf("Warning: Couldn't remove generated %s: %v. Please remove it manually.", sysoFilePath, err)
}
}
}
// Strip a version suffix off the arch if present
func stripVersion(goarch string) string {
i := strings.Index(goarch, "-")
@@ -315,17 +217,41 @@ func runOut(command ...string) string {
return strings.TrimSpace(string(out))
}
// Generate Windows resource system object file (.syso), which can be picked
// up by the following go build for embedding version information and icon
// resources into the executable.
func generateResourceWindows(version, arch string) func() {
sysoPath := fmt.Sprintf("../resource_windows_%s.syso", arch) // Use explicit destination filename, even though it should be same as default, so that we are sure we have the correct reference to it
if err := os.Remove(sysoPath); err != nil && !os.IsNotExist(err) {
// Note: This one we choose to treat as fatal, to avoid any risk of picking up an old .syso file without noticing.
log.Fatalf("Failed to remove existing Windows %s resource system object file %s: %v", arch, sysoPath, err)
}
args := []string{"go", "run", "../bin/resource_windows.go", "-arch", arch, "-version", version, "-syso", sysoPath}
if err := runEnv(args, nil); err != nil {
log.Printf("Warning: Couldn't generate Windows %s resource system object file, binaries will not have version information or icon embedded", arch)
return nil
}
if _, err := os.Stat(sysoPath); err != nil {
log.Printf("Warning: Couldn't find generated Windows %s resource system object file, binaries will not have version information or icon embedded", arch)
return nil
}
return func() {
if err := os.Remove(sysoPath); err != nil && !os.IsNotExist(err) {
log.Printf("Warning: Couldn't remove generated Windows %s resource system object file %s: %v. Please remove it manually.", arch, sysoPath, err)
}
}
}
// build the binary in dir returning success or failure
func compileArch(version, goos, goarch, dir string) bool {
log.Printf("Compiling %s/%s into %s", goos, goarch, dir)
goarchBase := stripVersion(goarch)
output := filepath.Join(dir, "rclone")
if goos == "windows" {
output += ".exe"
-sysoPath := buildWindowsResourceSyso(goarch, version)
-if sysoPath == "" {
-log.Printf("Warning: Windows binaries will not have file information embedded")
+if cleanupFn := generateResourceWindows(version, goarchBase); cleanupFn != nil {
+defer cleanupFn()
}
-defer cleanupResourceSyso(sysoPath)
}
err := os.MkdirAll(dir, 0777)
if err != nil {
@@ -348,7 +274,7 @@ func compileArch(version, goos, goarch, dir string) bool {
)
env := []string{
"GOOS=" + goos,
"GOARCH=" + stripVersion(goarch),
"GOARCH=" + goarchBase,
}
if *extraEnv != "" {
env = append(env, strings.Split(*extraEnv, ",")...)


@@ -50,14 +50,17 @@ docs = [
"hdfs.md",
"hidrive.md",
"http.md",
"imagekit.md",
"internetarchive.md",
"jottacloud.md",
"koofr.md",
"linkbox.md",
"mailru.md",
"mega.md",
"memory.md",
"netstorage.md",
"azureblob.md",
"azurefiles.md",
"onedrive.md",
"opendrive.md",
"oracleobjectstorage.md",

bin/resource_windows.go (new file, 122 lines)

@@ -0,0 +1,122 @@
// Utility program to generate Rclone-specific Windows resource system object
// file (.syso), that can be picked up by a following go build for embedding
// version information and icon resources into a rclone binary.
//
// Run it with "go generate", or "go run" to be able to customize with
// command-line flags. Note that this program is intended to be run directly
// from its original location in the source tree: Default paths are absolute
// within the current source tree, which is convenient because it makes it
// oblivious to the working directory, and it gives identical result whether
// run by "go generate" or "go run", but it will not make sense if this
// program's source is moved out from the source tree.
//
// Can be used for rclone.exe (default), and other binaries such as
// librclone.dll (must be specified with flag -binary).
//
//go:generate go run resource_windows.go
//go:build tools
// +build tools
package main
import (
"flag"
"fmt"
"log"
"path"
"runtime"
"strings"
"github.com/coreos/go-semver/semver"
"github.com/josephspurrier/goversioninfo"
"github.com/rclone/rclone/fs"
)
func main() {
// Get path of directory containing the current source file to use for absolute path references within the code tree (as described above)
projectDir := ""
_, sourceFile, _, ok := runtime.Caller(0)
if ok {
projectDir = path.Dir(path.Dir(sourceFile)) // Root of the current project working directory
}
// Define flags
binary := flag.String("binary", "rclone.exe", `The name of the binary to generate resource for, e.g. "rclone.exe" or "librclone.dll"`)
arch := flag.String("arch", runtime.GOARCH, `Architecture of resource file, or the target GOARCH, "386", "amd64", "arm", or "arm64"`)
version := flag.String("version", fs.Version, "Version number or tag name")
icon := flag.String("icon", path.Join(projectDir, "graphics/logo/ico/logo_symbol_color.ico"), "Path to icon file to embed in an .exe binary")
dir := flag.String("dir", projectDir, "Path to output directory where to write the resulting system object file (.syso), with a default name according to -arch (resource_windows_<arch>.syso), only considered if -syso is not specified")
syso := flag.String("syso", "", "Path to output resource system object file (.syso) to be created/overwritten, ignores -dir")
// Parse command-line flags
flag.Parse()
// Handle default value for -syso, which depends on optional -dir and -arch
if *syso == "" {
// Use default filename, which includes target GOOS (hardcoded "windows")
// and GOARCH (from argument -arch) as suffix, to avoid any race conditions,
// and also this will be recognized by go build when it is consuming the
// .syso file and will only be used for builds with matching os/arch.
*syso = path.Join(*dir, fmt.Sprintf("resource_windows_%s.syso", *arch))
}
// Parse version/tag string argument as a SemVer
stringVersion := strings.TrimPrefix(*version, "v")
semanticVersion, err := semver.NewVersion(stringVersion)
if err != nil {
log.Fatalf("Invalid version number: %v", err)
}
// Extract binary extension
binaryExt := path.Ext(*binary)
// Create the version info configuration container
vi := &goversioninfo.VersionInfo{}
// FixedFileInfo
vi.FixedFileInfo.FileOS = "040004" // VOS_NT_WINDOWS32
if strings.EqualFold(binaryExt, ".exe") {
vi.FixedFileInfo.FileType = "01" // VFT_APP
} else if strings.EqualFold(binaryExt, ".dll") {
vi.FixedFileInfo.FileType = "02" // VFT_DLL
} else {
log.Fatalf("Specified binary must have extension .exe or .dll")
}
// FixedFileInfo.FileVersion
vi.FixedFileInfo.FileVersion.Major = int(semanticVersion.Major)
vi.FixedFileInfo.FileVersion.Minor = int(semanticVersion.Minor)
vi.FixedFileInfo.FileVersion.Patch = int(semanticVersion.Patch)
vi.FixedFileInfo.FileVersion.Build = 0
// FixedFileInfo.ProductVersion
vi.FixedFileInfo.ProductVersion.Major = int(semanticVersion.Major)
vi.FixedFileInfo.ProductVersion.Minor = int(semanticVersion.Minor)
vi.FixedFileInfo.ProductVersion.Patch = int(semanticVersion.Patch)
vi.FixedFileInfo.ProductVersion.Build = 0
// StringFileInfo
vi.StringFileInfo.CompanyName = "https://rclone.org"
vi.StringFileInfo.ProductName = "Rclone"
vi.StringFileInfo.FileDescription = "Rclone"
vi.StringFileInfo.InternalName = (*binary)[:len(*binary)-len(binaryExt)]
vi.StringFileInfo.OriginalFilename = *binary
vi.StringFileInfo.LegalCopyright = "The Rclone Authors"
vi.StringFileInfo.FileVersion = stringVersion
vi.StringFileInfo.ProductVersion = stringVersion
// Icon (only relevant for .exe, not .dll)
if *icon != "" && strings.EqualFold(binaryExt, ".exe") {
vi.IconPath = *icon
}
// Build native structures from the configuration data
vi.Build()
// Write the native structures as binary data to a buffer
vi.Walk()
// Write the binary data buffer to file
if err := vi.WriteSyso(*syso, *arch); err != nil {
log.Fatalf(`Failed to generate Windows %s resource system object file for %v with path "%v": %v`, *arch, *binary, *syso, err)
}
}
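// An illustrative invocation from the repository root:
//
//	go run bin/resource_windows.go -arch amd64 -version v1.65.0
//
// which writes resource_windows_amd64.syso into the project root so that a
// following "go build" embeds the version info and icon.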

bin/test_metadata_mapper.py (new executable file, 24 lines)

@@ -0,0 +1,24 @@
#!/usr/bin/env python3
"""
A demo metadata mapper
"""
import sys
import json
def main():
i = json.load(sys.stdin)
# Add tag to description
metadata = i["Metadata"]
if "description" in metadata:
metadata["description"] += " [migrated from domain1]"
else:
metadata["description"] = "[migrated from domain1]"
# Modify owner
if "owner" in metadata:
metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
o = { "Metadata": metadata }
json.dump(o, sys.stdout, indent="\t")
if __name__ == "__main__":
main()


@@ -27,6 +27,7 @@ def add_email(name, email):
subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS])
def main():
# Add emails from authors
out = subprocess.check_output(["git", "log", '--reverse', '--format=%an|%ae', "master"])
out = out.decode("utf-8")
@@ -43,5 +44,23 @@ def main():
previous.add(email)
add_email(name, email)
# Add emails from Co-authored-by: lines
out = subprocess.check_output(["git", "log", '-i', '--grep', 'Co-authored-by:', "master"])
out = out.decode("utf-8")
co_authored_by = re.compile(r"(?i)Co-authored-by:\s+(.*?)\s+<([^>]+)>$")
for line in out.split("\n"):
line = line.strip()
m = co_authored_by.search(line)
if not m:
continue
name, email = m.group(1), m.group(2)
name = name.strip()
email = email.strip()
if email in previous:
continue
previous.add(email)
add_email(name, email)
if __name__ == "__main__":
main()


@@ -40,6 +40,7 @@ import (
_ "github.com/rclone/rclone/cmd/move"
_ "github.com/rclone/rclone/cmd/moveto"
_ "github.com/rclone/rclone/cmd/ncdu"
_ "github.com/rclone/rclone/cmd/nfsmount"
_ "github.com/rclone/rclone/cmd/obscure"
_ "github.com/rclone/rclone/cmd/purge"
_ "github.com/rclone/rclone/cmd/rc"


@@ -25,7 +25,7 @@ import (
func init() {
name := "cmount"
cmountOnly := ProvidedBy(runtime.GOOS)
cmountOnly := runtime.GOOS != "linux" // rclone mount only works for linux
if cmountOnly {
name = "mount"
}


@@ -15,6 +15,7 @@ import (
"testing"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfstest"
)
@@ -23,5 +24,5 @@ func TestMount(t *testing.T) {
if runtime.GOOS == "darwin" {
testy.SkipUnreliable(t)
}
vfstest.RunTests(t, false, mount)
vfstest.RunTests(t, false, vfscommon.CacheModeOff, true, mount)
}


@@ -82,7 +82,7 @@ func CreateFromStdinArg(ht hash.Type, args []string, startArg int) (bool, error)
}
var commandDefinition = &cobra.Command{
Use: "hashsum <hash> remote:path",
Use: "hashsum [<hash> remote:path]",
Short: `Produces a hashsum file for all the objects in the path.`,
Long: `
Produces a hash file for all the objects in the path using the hash


@@ -1,5 +1,5 @@
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
package mount


@@ -1,5 +1,5 @@
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
package mount


@@ -1,7 +1,7 @@
// FUSE main Fs
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
package mount


@@ -1,5 +1,5 @@
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
package mount


@@ -1,5 +1,5 @@
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
// Package mount implements a FUSE mounting system for rclone remotes.
package mount


@@ -1,14 +1,15 @@
//go:build linux || freebsd
// +build linux freebsd
//go:build linux
// +build linux
package mount
import (
"testing"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
vfstest.RunTests(t, false, mount)
vfstest.RunTests(t, false, vfscommon.CacheModeOff, true, mount)
}


@@ -1,10 +1,8 @@
//go:build !linux && !freebsd
// +build !linux,!freebsd
//go:build !linux
// +build !linux
// Package mount implements a FUSE mounting system for rclone remotes.
//
// Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files".
//
// Invert the build constraint: linux freebsd
package mount


@@ -6,9 +6,10 @@ package mount2
import (
"testing"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
vfstest.RunTests(t, false, mount)
vfstest.RunTests(t, false, vfscommon.CacheModeOff, true, mount)
}


@@ -7,15 +7,10 @@ import (
"fmt"
"path/filepath"
"strings"
"time"
"github.com/moby/sys/mountinfo"
)
const (
pollInterval = 100 * time.Millisecond
)
// CheckMountEmpty checks if folder is not already a mountpoint.
// On Linux we use the OS-specific /proc/self/mountinfo API so the check won't access the path.
// Directories marked as "mounted" by autofs are considered not mounted.
@@ -47,6 +42,15 @@ func CheckMountEmpty(mountpoint string) error {
return checkMountEmpty(mountpoint)
}
// singleEntryFilter looks for a specific entry.
//
// It may appear more than once and we return all of them if so.
func singleEntryFilter(mp string) mountinfo.FilterFunc {
return func(m *mountinfo.Info) (skip, stop bool) {
return m.Mountpoint != mp, false
}
}
// CheckMountReady checks whether mountpoint is mounted by rclone.
// Only mounts with type "rclone" or "fuse.rclone" count.
func CheckMountReady(mountpoint string) error {
@@ -57,7 +61,7 @@ func CheckMountReady(mountpoint string) error {
return fmt.Errorf("cannot get absolute path: %s: %w", mountpoint, err)
}
infos, err := mountinfo.GetMounts(mountinfo.SingleEntryFilter(mountpointAbs))
infos, err := mountinfo.GetMounts(singleEntryFilter(mountpointAbs))
if err != nil {
return fmt.Errorf("cannot get mounts: %w", err)
}
@@ -71,19 +75,5 @@ func CheckMountReady(mountpoint string) error {
return fmt.Errorf(msg, mountpointAbs)
}
// WaitMountReady waits until mountpoint is mounted by rclone.
func WaitMountReady(mountpoint string, timeout time.Duration) (err error) {
endTime := time.Now().Add(timeout)
for {
err = CheckMountReady(mountpoint)
delay := time.Until(endTime)
if err == nil || delay <= 0 {
break
}
if delay > pollInterval {
delay = pollInterval
}
time.Sleep(delay)
}
return
}
// CanCheckMountReady is set if CheckMountReady is functional
var CanCheckMountReady = true


@@ -3,10 +3,6 @@
package mountlib
import (
"time"
)
// CheckMountEmpty checks if mountpoint folder is empty.
// On non-Linux unixes we list directory to ensure that.
func CheckMountEmpty(mountpoint string) error {
@@ -19,9 +15,5 @@ func CheckMountReady(mountpoint string) error {
return nil
}
// WaitMountReady should wait until mountpoint is mounted by rclone.
// The check is implemented only for Linux so we just sleep a little.
func WaitMountReady(mountpoint string, timeout time.Duration) error {
time.Sleep(timeout)
return nil
}
// CanCheckMountReady is set if CheckMountReady is functional
var CanCheckMountReady = false


@@ -3,6 +3,7 @@ package mountlib
import (
"context"
_ "embed"
"fmt"
"log"
"os"
@@ -22,11 +23,14 @@ import (
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
"github.com/coreos/go-systemd/v22/daemon"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
//go:embed mount.md
var mountHelp string
// Options for creating the mount
type Options struct {
DebugFUSE bool
@@ -152,13 +156,45 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.DurationVarP(flagSet, &Opt.DaemonWait, "daemon-wait", "", Opt.DaemonWait, "Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows)", "Mount")
}
const (
pollInterval = 100 * time.Millisecond
)
// WaitMountReady waits until mountpoint is mounted by rclone.
//
// If the mount daemon dies prematurely it will notice too.
func WaitMountReady(mountpoint string, timeout time.Duration, daemon *os.Process) (err error) {
endTime := time.Now().Add(timeout)
for {
if CanCheckMountReady {
err = CheckMountReady(mountpoint)
if err == nil {
break
}
}
err = daemonize.Check(daemon)
if err != nil {
return err
}
delay := time.Until(endTime)
if delay <= 0 {
break
}
if delay > pollInterval {
delay = pollInterval
}
time.Sleep(delay)
}
return
}
// NewMountCommand makes a mount command with the given name and Mount function
func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Command {
var commandDefinition = &cobra.Command{
Use: commandName + " remote:path /path/to/mountpoint",
Hidden: hidden,
Short: `Mount the remote as file system on a mountpoint.`,
Long: strings.ReplaceAll(strings.ReplaceAll(mountHelp, "|", "`"), "@", commandName) + vfs.Help,
Long: strings.ReplaceAll(mountHelp, "@", commandName) + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.33",
"groups": "Filter",
@@ -186,10 +222,10 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
}
mnt := NewMountPoint(mount, args[1], cmd.NewFsDir(args), &Opt, &vfsflags.Opt)
daemon, err := mnt.Mount()
mountDaemon, err := mnt.Mount()
// Wait for foreground mount, if any...
if daemon == nil {
if mountDaemon == nil {
if err == nil {
err = mnt.Wait()
}
@@ -199,15 +235,15 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
return
}
// Wait for daemon, if any...
// Wait for mountDaemon, if any...
killOnce := sync.Once{}
killDaemon := func(reason string) {
killOnce.Do(func() {
if err := daemon.Signal(os.Interrupt); err != nil {
fs.Errorf(nil, "%s. Failed to terminate daemon pid %d: %v", reason, daemon.Pid, err)
if err := mountDaemon.Signal(os.Interrupt); err != nil {
fs.Errorf(nil, "%s. Failed to terminate daemon pid %d: %v", reason, mountDaemon.Pid, err)
return
}
fs.Debugf(nil, "%s. Terminating daemon pid %d", reason, daemon.Pid)
fs.Debugf(nil, "%s. Terminating daemon pid %d", reason, mountDaemon.Pid)
})
}
@@ -215,7 +251,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
handle := atexit.Register(func() {
killDaemon("Got interrupt")
})
err = WaitMountReady(mnt.MountPoint, Opt.DaemonWait)
err = WaitMountReady(mnt.MountPoint, Opt.DaemonWait, mountDaemon)
if err != nil {
killDaemon("Daemon timed out")
}
@@ -239,7 +275,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
}
// Mount the remote at mountpoint
func (m *MountPoint) Mount() (daemon *os.Process, err error) {
func (m *MountPoint) Mount() (mountDaemon *os.Process, err error) {
// Ensure sensible defaults
m.SetVolumeName(m.MountOpt.VolumeName)
@@ -247,9 +283,9 @@ func (m *MountPoint) Mount() (daemon *os.Process, err error) {
// Start background task if --daemon is specified
if m.MountOpt.Daemon {
daemon, err = daemonize.StartDaemon(os.Args)
if daemon != nil || err != nil {
return daemon, err
mountDaemon, err = daemonize.StartDaemon(os.Args)
if mountDaemon != nil || err != nil {
return mountDaemon, err
}
}
@@ -269,7 +305,7 @@ func (m *MountPoint) Wait() error {
var finaliseOnce sync.Once
finalise := func() {
finaliseOnce.Do(func() {
_ = sysdnotify.Stopping()
_, _ = daemon.SdNotify(false, daemon.SdNotifyStopping)
// Unmount only if directory was mounted by rclone, e.g. don't unmount autofs hooks.
if err := CheckMountReady(m.MountPoint); err != nil {
fs.Debugf(m.MountPoint, "Unmounted externally. Just exit now.")
@@ -286,7 +322,7 @@ func (m *MountPoint) Wait() error {
defer atexit.Unregister(fnHandle)
// Notify systemd
if err := sysdnotify.Ready(); err != nil {
if _, err := daemon.SdNotify(false, daemon.SdNotifyReady); err != nil {
return fmt.Errorf("failed to notify systemd: %w", err)
}


@@ -1,15 +1,11 @@
package mountlib
// "@" will be replaced by the command name, "|" will be replaced by backticks
var mountHelp = `
rclone @ allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
First set up your remote using |rclone config|. Check it works with |rclone ls| etc.
First set up your remote using `rclone config`. Check it works with `rclone ls` etc.
On Linux and macOS, you can run mount in either foreground or background (aka
daemon) mode. Mount runs in foreground mode by default. Use the |--daemon| flag
daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag
to force background mode. On Windows you can run mount in foreground only;
the flag is ignored.
@@ -18,7 +14,7 @@ program starts, spawns background rclone process to setup and maintain the
mount, waits until success or timeout and exits with appropriate code
(killing the child process if it fails).
On Linux/macOS/FreeBSD start the mount like this, where |/path/to/local/mount|
On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount`
is an **empty** **existing** directory:
rclone @ remote:path/to/files /path/to/local/mount
@@ -29,10 +25,10 @@ rclone will serve the mount and occupy the console so another window should be
used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C.
The following examples will mount to an automatically assigned drive,
to a specific drive letter |X:|, to path |C:\path\parent\mount|
to a specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share |\\cloud\remote| and map it to an
the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
rclone @ remote:path/to/files *
@@ -89,7 +85,7 @@ as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path representing a **nonexistent** subdirectory of an **existing** parent
directory or drive. Using the special value |*| will tell rclone to
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
@@ -98,45 +94,45 @@ Examples:
rclone @ remote:path/to/files C:\path\parent\mount
rclone @ remote:path/to/files X:
Option |--volname| can be used to set a custom volume name for the mounted
Option `--volname` can be used to set a custom volume name for the mounted
file system. The default is to use the remote name and path.
To mount as network drive, you can add option |--network-mode|
To mount as network drive, you can add option `--network-mode`
to your @ command. Mounting to a directory path is not supported in
this mode; it is a limitation Windows imposes on junctions, so the remote must always
be mounted to a drive letter.
rclone @ remote:path/to/files X: --network-mode
A volume name specified with |--volname| will be used to create the network share path.
A complete UNC path, such as |\\cloud\remote|, optionally with path
|\\cloud\remote\madeup\path|, will be used as is. Any other
string will be used as the share part, after a default prefix |\\server\|.
If no volume name is specified then |\\server\share| will be used.
A volume name specified with `--volname` will be used to create the network share path.
A complete UNC path, such as `\\cloud\remote`, optionally with path
`\\cloud\remote\madeup\path`, will be used as is. Any other
string will be used as the share part, after a default prefix `\\server\`.
If no volume name is specified then `\\server\share` will be used.
You must make sure the volume name is unique when you are mounting more than one drive,
or else the mount command will fail. The share name will be treated as the volume label for
the mapped drive, shown in Windows Explorer etc, while the complete
|\\server\share| will be reported as the remote UNC path by
|net use| etc, just like a normal network drive mapping.
`\\server\share` will be reported as the remote UNC path by
`net use` etc, just like a normal network drive mapping.
If you specify a full network share UNC path with |--volname|, this will implicitly
set the |--network-mode| option, so the following two examples have the same result:
If you specify a full network share UNC path with `--volname`, this will implicitly
set the `--network-mode` option, so the following two examples have the same result:
rclone @ remote:path/to/files X: --network-mode
rclone @ remote:path/to/files X: --volname \\server\share
You may also specify the network share UNC path as the mountpoint itself. Then rclone
will automatically assign a drive letter, same as with |*| and use that as
will automatically assign a drive letter, same as with `*` and use that as
mountpoint, and instead use the UNC path specified as the volume name, as if it were
specified with the |--volname| option. This will also implicitly set
the |--network-mode| option. This means the following two examples have the same result:
specified with the `--volname` option. This will also implicitly set
the `--network-mode` option. This means the following two examples have the same result:
rclone @ remote:path/to/files \\cloud\remote
rclone @ remote:path/to/files * --volname \\cloud\remote
There is yet another way to enable network mode, and to set the share path,
and that is to pass the "native" libfuse/WinFsp option directly:
|--fuse-flag --VolumePrefix=\server\share|. Note that the path
`--fuse-flag --VolumePrefix=\server\share`. Note that the path
must have just a single backslash prefix in this case.
@@ -157,15 +153,15 @@ representing permissions for the POSIX permission scopes: Owner, group and other
By default, the owner and group will be taken from the current user, and the built-in
group "Everyone" will be used to represent others. The user/group can be customized
with FUSE options "UserName" and "GroupName",
e.g. |-o UserName=user123 -o GroupName="Authenticated Users"|.
e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`.
The permissions on each entry will be set according to [options](#options)
|--dir-perms| and |--file-perms|, which take a value in traditional Unix
`--dir-perms` and `--file-perms`, which take a value in traditional Unix
[numeric notation](https://en.wikipedia.org/wiki/File-system_permissions#Numeric_notation).
The default permissions correspond to |--file-perms 0666 --dir-perms 0777|,
The default permissions correspond to `--file-perms 0666 --dir-perms 0777`,
i.e. read and write permissions to everyone. This means you will not be able
to start any programs from the mount. To be able to do that you must add
execute permissions, e.g. |--file-perms 0777 --dir-perms 0777| to add it
execute permissions, e.g. `--file-perms 0777 --dir-perms 0777` to add it
to everyone. If the program needs to write files, chances are you will
have to enable [VFS File Caching](#vfs-file-caching) as well (see also
[limitations](#limitations)). Note that the default write permissions have
@@ -193,12 +189,12 @@ will be added automatically for compatibility with Unix. Some example use
cases follow.
If you set POSIX permissions for only allowing access to the owner,
using |--file-perms 0600 --dir-perms 0700|, the user group and the built-in
using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in
"Everyone" group will still be given some special permissions, as described
above. Some programs may then (incorrectly) interpret this as the file being
accessible by everyone, for example an SSH client may warn about "unprotected
private key file". You can work around this by specifying
|-o FileSecurity="D:P(A;;FA;;;OW)"|, which sets file all access (FA) to the
`-o FileSecurity="D:P(A;;FA;;;OW)"`, which sets file all access (FA) to the
owner (OW), and nothing else.
When setting write permissions then, except for the owner, this does not
@@ -207,11 +203,11 @@ This may prevent applications from writing to files, giving permission denied
error instead. To set working write permissions for the built-in "Everyone"
group, similar to what it gets by default but with the addition of the
"write extended attributes", you can specify
|-o FileSecurity="D:P(A;;FRFW;;;WD)"|, which sets file read (FR) and file
`-o FileSecurity="D:P(A;;FRFW;;;WD)"`, which sets file read (FR) and file
write (FW) to everyone (WD). If file execute (FX) is also needed, then change
to |-o FileSecurity="D:P(A;;FRFWFX;;;WD)"|, or set file all access (FA) to
to `-o FileSecurity="D:P(A;;FRFWFX;;;WD)"`, or set file all access (FA) to
get full access permissions, including delete, with
|-o FileSecurity="D:P(A;;FA;;;WD)"|.
`-o FileSecurity="D:P(A;;FA;;;WD)"`.
#### Windows caveats
@@ -235,7 +231,7 @@ It is also possible to make a drive mount available to everyone on the system,
by running the process creating it as the built-in SYSTEM account.
There are several ways to do this: One is to use the command-line
utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
from Microsoft's Sysinternals suite, which has option |-s| to start
from Microsoft's Sysinternals suite, which has option `-s` to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
@@ -243,7 +239,7 @@ to run as the SYSTEM account. A third alternative is to use the
Read more in the [install documentation](https://rclone.org/install/).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.
with the [`--config`](https://rclone.org/docs/#config-config-file) option.
Note also that it is now the SYSTEM account that will have the owner
permissions, and other accounts will have permissions according to the
group or others scopes. As mentioned above, these will then not get the
@@ -256,11 +252,17 @@ does not suffer from the same limitations.
### Mounting on macOS
Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/)
(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
## NFS mount
This method spins up an NFS server using the [serve nfs](/commands/rclone_serve_nfs/) command and mounts
it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to
send a SIGTERM signal to the rclone process using the |kill| command to stop the mount.
#### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
@@ -294,31 +296,33 @@ of the file.
Rclone includes flags for unicode normalization with macFUSE that should be updated
for FUSE-T. See [this forum post](https://forum.rclone.org/t/some-unicode-forms-break-mount-on-macos-with-fuse-t/36403)
and [FUSE-T issue #16](https://github.com/macos-fuse-t/fuse-t/issues/16). The following
flag should be added to the |rclone mount| command.
flag should be added to the `rclone mount` command.
-o modules=iconv,from_code=UTF-8,to_code=UTF-8
##### Read Only mounts
When mounting with |--read-only|, attempts to write to files will fail *silently* as
When mounting with `--read-only`, attempts to write to files will fail *silently* as
opposed to with a clear warning as in macFUSE.
### Limitations
Without the use of |--vfs-cache-mode| this can only write files
Without the use of `--vfs-cache-mode` this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
|--vfs-cache-mode writes| or |--vfs-cache-mode full|.
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
When using NFS mount on macOS, if you don't specify |--vfs-cache-mode|
the mount point will be read-only.
The bucket-based remotes (e.g. Swift, S3, Google Cloud Storage, B2)
do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
When |rclone mount| is invoked on Unix with |--daemon| flag, the main rclone
When `rclone mount` is invoked on Unix with `--daemon` flag, the main rclone
program will wait for the background mount to become ready or until the timeout
specified by the |--daemon-wait| flag. On Linux it can check mount status using
specified by the `--daemon-wait` flag. On Linux it can check mount status using
ProcFS so the flag in fact sets **maximum** time to wait, while the real wait
can be less. On macOS / BSD the time to wait is constant and the check is
performed only at the end, so we advise you to set the wait time on macOS to a reasonable value.
@@ -336,10 +340,10 @@ for solutions to make @ more reliable.
### Attribute caching
You can use the flag |--attr-timeout| to set the time the kernel caches
You can use the flag `--attr-timeout` to set the time the kernel caches
the attributes (size, modification time, etc.) for directory entries.
The default is |1s| which caches files just long enough to avoid
The default is `1s` which caches files just long enough to avoid
too many callbacks to rclone from the kernel.
In theory 0s should be the correct value for filesystems which can
@@ -350,14 +354,14 @@ few problems such as
and [excessive time listing directories](https://github.com/rclone/rclone/issues/2095#issuecomment-371141147).
The kernel can cache the info about a file for the time given by
|--attr-timeout|. You may see corruption if the remote file changes
`--attr-timeout`. You may see corruption if the remote file changes
length during this window. It will show up as either a truncated file
or a file with garbage on the end. With |--attr-timeout 1s| this is
very unlikely but not impossible. The higher you set |--attr-timeout|
or a file with garbage on the end. With `--attr-timeout 1s` this is
very unlikely but not impossible. The higher you set `--attr-timeout`
the more likely it is. The default setting of "1s" is the lowest
setting which mitigates the problems above.
If you set it higher (|10s| or |1m| say) then the kernel will call
If you set it higher (`10s` or `1m` say) then the kernel will call
back to rclone less often making it more efficient, however there is
more chance of the corruption issue above.
@@ -380,32 +384,32 @@ Units having the rclone @ service specified as a requirement
will see all files and folders immediately in this mode.
Note that systemd runs mount units without any environment variables including
|PATH| or |HOME|. This means that tilde (|~|) expansion will not work
and you should provide |--config| and |--cache-dir| explicitly as absolute
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
Since mounting requires the |fusermount| program, rclone will use the fallback
PATH of |/bin:/usr/bin| in this scenario. Please ensure that |fusermount|
Since mounting requires the `fusermount` program, rclone will use the fallback
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
is present on this PATH.
### Rclone as Unix mount helper
The core Unix program |/bin/mount| normally takes the |-t FSTYPE| argument
then runs the |/sbin/mount.FSTYPE| helper program passing it mount options
as |-o key=val,...| or |--opt=...|. Automount (classic or systemd) behaves
The core Unix program `/bin/mount` normally takes the `-t FSTYPE` argument
then runs the `/sbin/mount.FSTYPE` helper program passing it mount options
as `-o key=val,...` or `--opt=...`. Automount (classic or systemd) behaves
in a similar way.
rclone by default expects GNU-style flags |--key val|. To run it as a mount
helper you should symlink rclone binary to |/sbin/mount.rclone| and optionally
|/usr/bin/rclonefs|, e.g. |ln -s /usr/bin/rclone /sbin/mount.rclone|.
rclone by default expects GNU-style flags `--key val`. To run it as a mount
helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally
`/usr/bin/rclonefs`, e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`.
rclone will detect it and translate command-line arguments appropriately.
Now you can run classic mounts like this:
|||
```
mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
|||
```
or create systemd mount units:
|||
```
# /etc/systemd/system/mnt-data.mount
[Unit]
Description=Mount for /mnt/data
@@ -414,10 +418,10 @@ Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
|||
```
optionally accompanied by systemd automount unit
|||
```
# /etc/systemd/system/mnt-data.automount
[Unit]
Description=AutoMount for /mnt/data
@@ -426,34 +430,33 @@ Where=/mnt/data
TimeoutIdleSec=600
[Install]
WantedBy=multi-user.target
|||
```
or add in |/etc/fstab| a line like
|||
or add in `/etc/fstab` a line like
```
sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
|||
```
or use classic Automountd.
Remember to provide explicit |config=...,cache-dir=...| as a workaround for
mount units being run without |HOME|.
Remember to provide explicit `config=...,cache-dir=...` as a workaround for
mount units being run without `HOME`.
Rclone in the mount helper mode will split |-o| argument(s) by comma, replace |_|
by |-| and prepend |--| to get the command-line flags. Options containing commas
Rclone in the mount helper mode will split `-o` argument(s) by comma, replace `_`
by `-` and prepend `--` to get the command-line flags. Options containing commas
or spaces can be wrapped in single or double quotes. Any inner quotes inside outer
quotes of the same type should be doubled.
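
For example (an illustrative sketch rather than verbatim rclone output), a helper invocation such as

mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,allow_other

is translated internally into roughly

rclone mount sftp1:subdir /mnt/data --vfs-cache-mode=writes --allow-other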
Mount option syntax includes a few extra options treated specially:
- |env.NAME=VALUE| will set an environment variable for the mount process.
- `env.NAME=VALUE` will set an environment variable for the mount process.
This helps with Automountd and Systemd.mount which don't allow setting
custom environment for mount helpers.
Typically you will use |env.HTTPS_PROXY=proxy.host:3128| or |env.HOME=/root|
- |command=cmount| can be used to run |cmount| or any other rclone command
rather than the default |mount|.
- |args2env| will pass mount options to the mount helper running in background
Typically you will use `env.HTTPS_PROXY=proxy.host:3128` or `env.HOME=/root`
- `command=cmount` can be used to run `cmount` or any other rclone command
rather than the default `mount`.
- `args2env` will pass mount options to the mount helper running in background
via environment variables instead of command line arguments. This allows you to
hide secrets from such commands as |ps| or |pgrep|.
- |vv...| will be transformed into appropriate |--verbose=N|
- standard mount options like |x-systemd.automount|, |_netdev|, |nosuid| and alike
hide secrets from such commands as `ps` or `pgrep`.
- `vv...` will be transformed into appropriate `--verbose=N`
- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and alike
are intended only for Automountd and ignored by rclone.
`


@@ -386,6 +386,12 @@ func (u *UI) Draw() {
}
showEmptyDir := u.hasEmptyDir()
dirPos := u.dirPosMap[u.path]
// Check to see if a rescan has invalidated the position
if dirPos.offset >= len(u.sortPerm) {
delete(u.dirPosMap, u.path)
dirPos.offset = 0
dirPos.entry = 0
}
for i, j := range u.sortPerm[dirPos.offset:] {
entry := u.entries[j]
n := i + dirPos.offset

cmd/nfsmount/nfsmount.go Normal file

@@ -0,0 +1,100 @@
//go:build unix
// +build unix
// Package nfsmount implements mounting functionality using the serve nfs command
//
// This can potentially work on all unix-like systems which can mount NFS.
package nfsmount
import (
"bytes"
"context"
"fmt"
"net"
"os/exec"
"runtime"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/cmd/serve/nfs"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/vfs"
)
var (
sudo = false
)
func init() {
name := "nfsmount"
cmd := mountlib.NewMountCommand(name, false, mount)
cmd.Annotations["versionIntroduced"] = "v1.65"
cmd.Annotations["status"] = "Experimental"
mountlib.AddRc(name, mount)
cmdFlags := cmd.Flags()
flags.BoolVarP(cmdFlags, &sudo, "sudo", "", sudo, "Use sudo to run the mount command as root.", "")
}
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (asyncerrors <-chan error, unmount func() error, err error) {
s, err := nfs.NewServer(context.Background(), VFS, &nfs.Options{})
if err != nil {
return
}
errChan := make(chan error, 1)
go func() {
errChan <- s.Serve()
}()
// The port is always picked at random after the NFS server has started,
// so we need to query the server for the port number before we can mount it
_, port, err := net.SplitHostPort(s.Addr().String())
if err != nil {
err = fmt.Errorf("cannot find port number in %s", s.Addr().String())
return
}
// Options
options := []string{
"-o", fmt.Sprintf("port=%s", port),
"-o", fmt.Sprintf("mountport=%s", port),
}
for _, option := range opt.ExtraOptions {
options = append(options, "-o", option)
}
options = append(options, opt.ExtraFlags...)
cmd := []string{}
if sudo {
cmd = append(cmd, "sudo")
}
cmd = append(cmd, "mount")
cmd = append(cmd, options...)
cmd = append(cmd, "localhost:", mountpoint)
fs.Debugf(nil, "Running mount command: %q", cmd)
out, err := exec.Command(cmd[0], cmd[1:]...).CombinedOutput()
if err != nil {
out = bytes.TrimSpace(out)
err = fmt.Errorf("%s: failed to mount NFS volume: %v", out, err)
return
}
asyncerrors = errChan
unmount = func() error {
var umountErr error
var out []byte
if runtime.GOOS == "darwin" {
out, umountErr = exec.Command("diskutil", "umount", "force", mountpoint).CombinedOutput()
} else {
out, umountErr = exec.Command("umount", "-f", mountpoint).CombinedOutput()
}
shutdownErr := s.Shutdown()
VFS.Shutdown()
if umountErr != nil {
out = bytes.TrimSpace(out)
return fmt.Errorf("%s: failed to umount the NFS volume: %v", out, umountErr)
} else if shutdownErr != nil {
return fmt.Errorf("failed to shutdown NFS server: %v", shutdownErr)
}
return nil
}
return
}
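
As a quick usage sketch (remote and mountpoint are placeholders), the function wired up above is reached like any other mount command, with `--sudo` added when the `mount`/`umount` binaries require root:

rclone nfsmount remote:path/to/files /path/to/local/mount --sudo -vv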


@@ -0,0 +1,15 @@
//go:build darwin && !cmount
// +build darwin,!cmount
package nfsmount
import (
"testing"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
vfstest.RunTests(t, false, vfscommon.CacheModeMinimal, false, mount)
}


@@ -0,0 +1,8 @@
// Build for nfsmount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build !unix
// +build !unix
// Package nfsmount implements the mount command using NFS.
package nfsmount


@@ -20,7 +20,7 @@ const (
// interval between progress prints
defaultProgressInterval = 500 * time.Millisecond
// time format for logging
logTimeFormat = "2006-01-02 15:04:05"
logTimeFormat = "2006/01/02 15:04:05"
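// note: Go reference time layout; the slashes match the standard library log package's default timestamp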
)
// startProgress starts the progress bar printing


@@ -10,6 +10,7 @@ import (
"bytes"
"context"
"crypto/sha256"
_ "embed"
"encoding/hex"
"errors"
"fmt"
@@ -35,6 +36,9 @@ import (
versionCmd "github.com/rclone/rclone/cmd/version"
)
//go:embed selfupdate.md
var selfUpdateHelp string
// Options contains options for the self-update command
type Options struct {
Check bool
@@ -63,7 +67,7 @@ var cmdSelfUpdate = &cobra.Command{
Use: "selfupdate",
Aliases: []string{"self-update"},
Short: `Update the rclone binary.`,
Long: strings.ReplaceAll(selfUpdateHelp, "|", "`"),
Long: selfUpdateHelp,
Annotations: map[string]string{
"versionIntroduced": "v1.55",
},


@@ -1,55 +1,47 @@
//go:build !noselfupdate
// +build !noselfupdate
package selfupdate
// Note: "|" will be replaced by backticks in the help string below
var selfUpdateHelp = `
This command downloads the latest release of rclone and replaces the
currently running binary. The download is verified with a hashsum and
a cryptographic signature; see [the release signing
docs](/release_signing/) for details.
If used without flags (or with implied |--stable| flag), this command
If used without flags (or with implied `--stable` flag), this command
will install the latest stable release. However, some issues may be fixed
(or features added) only in the latest beta release. In such cases you should
run the command with the |--beta| flag, i.e. |rclone selfupdate --beta|.
run the command with the `--beta` flag, i.e. `rclone selfupdate --beta`.
You can check in advance what version would be installed by adding the
|--check| flag, then repeat the command without it when you are satisfied.
`--check` flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable
rclone release to troubleshoot your issue or add a bleeding edge feature.
The |--version VER| flag, if given, will update to the concrete version
instead of the latest one. If you omit micro version from |VER| (for
example |1.53|), the latest matching micro version will be used.
The `--version VER` flag, if given, will update to the concrete version
instead of the latest one. If you omit micro version from `VER` (for
example `1.53`), the latest matching micro version will be used.
Upon successful update rclone will print a message that contains the previous
version number. You will need it if you later decide to revert your update
for some reason. Then you'll have to note the previous version and run the
following command: |rclone selfupdate [--beta] OLDVER|.
If the old version contains only dots and digits (for example |v1.54.0|)
then it's a stable release so you won't need the |--beta| flag. Beta releases
have an additional information similar to |v1.54.0-beta.5111.06f1c0c61|.
following command: `rclone selfupdate [--beta] OLDVER`.
If the old version contains only dots and digits (for example `v1.54.0`)
then it's a stable release so you won't need the `--beta` flag. Beta releases
have additional information appended, similar to `v1.54.0-beta.5111.06f1c0c61`.
(if you are a developer and use a locally built rclone, the version number
will end with |-DEV|, you will have to rebuild it as it obviously can't
will end with `-DEV`; you will have to rebuild it as it obviously can't
be distributed).
If you previously installed rclone via a package manager, the package may
include local documentation or configure services. You may wish to update
with the flag |--package deb| or |--package rpm| (whichever is correct for
your OS) to update these too. This command with the default |--package zip|
with the flag `--package deb` or `--package rpm` (whichever is correct for
your OS) to update these too. This command with the default `--package zip`
will update only the rclone executable, so the local manual may become
out of date afterwards.
The [rclone mount](/commands/rclone_mount/) command may
or may not support extended FUSE options depending on the build and OS.
|selfupdate| will refuse to update if the capability would be discarded.
`selfupdate` will refuse to update if the capability would be discarded.
Note: Windows forbids deletion of a currently running executable so this
command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55.
If it fails for you with the message |unknown command "selfupdate"| then
If it fails for you with the message `unknown command "selfupdate"` then
you will need to update manually following the install instructions located
at https://rclone.org/install/
`


@@ -14,6 +14,7 @@ import (
"time"
"github.com/rclone/rclone/fs"
_ "github.com/rclone/rclone/fstest" // needed to run under integration tests
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
)


@@ -0,0 +1,10 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
b20b47f579a2c790ca752fb5d8e5651fade7d5867cbac0a4f71e805fc5c468d0 archive.zip
-----BEGIN PGP SIGNATURE-----
iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZS+oVQAKCRCTk14C/ztU
+lNsAJ9XRiODlM4fIW9yqiltO3N+lLeucwCfRzD3cXk6BCB5wdz7pTgnItk9N74=
=1GTr
-----END PGP SIGNATURE-----


@@ -26,24 +26,37 @@ QbogRGodbKhqY4v+cMNkKiemBuTQiWPkpKjifwNsD1fNjNKfDP3pJ64Yz7a4fuzV
X1YwBACpKVuEen34lmcX6ziY4jq8rKibKBs4JjQCRO24kYoHDULVe+RS9krQWY5b
e0foDhru4dsKccefK099G+WEzKVCKxupstWkTT/iJwajR8mIqd4AhD0wO9W3MCfV
Ov8ykMDZ7qBWk1DHc87Ep3W1o8t8wq74ifV+HjhhWg8QAylXg7QlTmljayBDcmFp
Zy1Xb29kIDxuaWNrQGNyYWlnLXdvb2QuY29tPohxBBMRCAAxBQsHCgMEAxUDAgMW
AgECF4AWIQT79zfs6firGGBL0qyTk14C/ztU+gUCXjg2UgIZAQAKCRCTk14C/ztU
+lmmAJ4jH5FyULzStjisuTvHLTVz6G44eQCfaR5QGZFPseenE5ic2WeQcBcmtoG5
Ag0EO7LdgRAIAI6QdFBg3/xa1gFKPYy1ihV9eSdGqwWZGJvokWsfCvHy5180tj/v
UNOLAJrdqglMSvevNTXe8bT65D6423AAsLhch9wq/aNqrHolTYABzxRigjcS1//T
yln5naGUzlVQXDVfrDk3Md/NrkdOFj7r/YyMF0+iWwpFz2qAjL95i5wfVZ1kWGrT
2AmivE1wD1sWT/Ja3FDI0NRkU0Nbz/a0TKe4ml8iLVtZXpTRbxxCCPdkHXXgSyu1
eZ4NrF/wTJuvwGn12TJ1EF95aVkHxAUw0+KmLGdcyBG+IKuHamrsjWIAXGXV///K
AxPgUthccQ03HMjltFsrdmen5Q034YM3eOsAAwUH/jAKiIAA8LpZmZPnt9GZ4+Ol
Zp22VAfyfDOFl4Ol+cWjkLAgjAFsm5gnOKcRSE/9XPxnQqkhw7+ZygYuUMgTDJ99
/5IM1UQL3ooS+oFrDaE99S8bLeOe17skcdXcA/K83VqD9m93rQRnbtD+75zqKkZn
9WNFyKCXg5P6PFPdNYRtlQKOcwFR9mHRLUmapQSAM8Y2pCgALZ7GViKQca8/TT1T
gZk9fJMZYGez+IlOPxTJxjn80+vywk4/wdIWSiQj+8u5RzT9sjmm77wbMVNGRqYd
W/EemW9Zz9vi0CIvJGgbPMqcuxw8e/5lnuQ6Mi3uDR0P2RNIAhFrdZpVSME8xQaI
RgQYEQIABgUCO7LdgQAKCRCTk14C/ztU+mLBAKC2cdFy7eLaQAvyzcE2VK6HVIjn
JACguA00bxLQuJ4+RCJrLFZP8ZlN2sc=
=TtR5
-----END PGP PUBLIC KEY BLOCK-----`
Zy1Xb29kIDxuaWNrQGNyYWlnLXdvb2QuY29tPoh0BBMRCAA0BQsHCgMEAxUDAgMW
AgECF4ACGQEWIQT79zfs6firGGBL0qyTk14C/ztU+gUCZS/mXAIbIwAKCRCTk14C
/ztU+tX+AJ9CUAnPvT4w5yRAPRfDiwWIPUqBOgCgiTelkzvUxvLWnYmpowwzKmsx
qaSJAjMEEAEIAB0WIQTjs1jchY+zB/SBcLnLDb68XzLIHQUCZPRnNAAKCRDLDb68
XzLIHZSAD/oCk9Z0xJfbpriphTBxFy7bWyPKF1lM1GZZaLKkktGfunf1i0Q7rhwp
Nu+u1launlOTp6ZoY36Ce2Qa1eSxWAQdjVajw9kOHXCAewrTREOMY/mb7RVGjajo
0Egl8T9iD3JRyaxu2iVtbpZYuqehtGG28CaCzmtqE+EJcx1cGqAGSuuaDWRYlVX8
KDip44GQB5Lut30vwSIoZG1CPCR6VE82u4cl3mYZUfcJkCHsiLzoeadVzb+fOd+2
ybzBn8Y77ifGgM+dSFSHe03mFfcHPdp0QImF9HQR7XI0UMZmEJsw7c2vDrRa+kRY
2A4/amGn4Tahuazq8g2yqgGm3yAj49qGNarAau849lDr7R49j73ESnNVBGJ9ShzU
4Ls+S1A5gohZVu2s1fkE3mbAmoTfU4JCrpRydOuL9xRJk5gbL44sKeuGODNshyTP
JzG9DmRHpLsBn59v8mg5tqSfBIGqcqBxxnYHJnkK801MkaLW2m7wDmtz6P3TW86g
GukzfIN3/OufLjnpN3Nx376JwWDDIyif7sn6/q+ZMwGz9uLKZkAeM5c3Dh4ygpgl
iSLoV2bZzDz0iLxKWW7QOVVdWHmlEqbTldpQ7gUEPG7mxpzVo0xd6nHncSq0M91x
29It4B3fATx/iJB2eardMzSsbzHiwTg0eswhYYGpSKZLgp4RShnVAbkCDQQ7st2B
EAgAjpB0UGDf/FrWAUo9jLWKFX15J0arBZkYm+iRax8K8fLnXzS2P+9Q04sAmt2q
CUxK9681Nd7xtPrkPrjbcACwuFyH3Cr9o2qseiVNgAHPFGKCNxLX/9PKWfmdoZTO
VVBcNV+sOTcx382uR04WPuv9jIwXT6JbCkXPaoCMv3mLnB9VnWRYatPYCaK8TXAP
WxZP8lrcUMjQ1GRTQ1vP9rRMp7iaXyItW1lelNFvHEII92QddeBLK7V5ng2sX/BM
m6/AafXZMnUQX3lpWQfEBTDT4qYsZ1zIEb4gq4dqauyNYgBcZdX//8oDE+BS2Fxx
DTccyOW0Wyt2Z6flDTfhgzd46wADBQf+MAqIgADwulmZk+e30Znj46VmnbZUB/J8
M4WXg6X5xaOQsCCMAWybmCc4pxFIT/1c/GdCqSHDv5nKBi5QyBMMn33/kgzVRAve
ihL6gWsNoT31Lxst457XuyRx1dwD8rzdWoP2b3etBGdu0P7vnOoqRmf1Y0XIoJeD
k/o8U901hG2VAo5zAVH2YdEtSZqlBIAzxjakKAAtnsZWIpBxrz9NPVOBmT18kxlg
Z7P4iU4/FMnGOfzT6/LCTj/B0hZKJCP7y7lHNP2yOabvvBsxU0ZGph1b8R6Zb1nP
2+LQIi8kaBs8ypy7HDx7/mWe5DoyLe4NHQ/ZE0gCEWt1mlVIwTzFBohGBBgRAgAG
BQI7st2BAAoJEJOTXgL/O1T6YsEAoLZx0XLt4tpAC/LNwTZUrodUiOckAKC4DTRv
EtC4nj5EImssVk/xmU3axw==
=VUqh
-----END PGP PUBLIC KEY BLOCK-----
`
func verifyHashsum(ctx context.Context, siteURL, version, archive string, hash []byte) error {
sumsURL := fmt.Sprintf("%s/%s/SHA256SUMS", siteURL, version)
@@ -52,16 +65,26 @@ func verifyHashsum(ctx context.Context, siteURL, version, archive string, hash [
return err
}
fs.Debugf(nil, "downloaded hashsum list: %s", sumsURL)
return verifyHashsumDownloaded(ctx, sumsBuf, archive, hash)
}
func verifyHashsumDownloaded(ctx context.Context, sumsBuf []byte, archive string, hash []byte) error {
keyRing, err := openpgp.ReadArmoredKeyRing(strings.NewReader(ncwPublicKeyPGP))
if err != nil {
return errors.New("unsupported signing key")
return fmt.Errorf("unsupported signing key: %w", err)
}
block, rest := clearsign.Decode(sumsBuf)
// block.Bytes = block.Bytes[1:] // uncomment to test invalid signature
if block == nil {
return errors.New("invalid hashsum signature: couldn't find detached signature")
}
if len(rest) > 0 {
return fmt.Errorf("invalid hashsum signature: %d bytes of unsigned data", len(rest))
}
_, err = openpgp.CheckDetachedSignature(keyRing, bytes.NewReader(block.Bytes), block.ArmoredSignature.Body, nil)
if err != nil || len(rest) > 0 {
return errors.New("invalid hashsum signature")
if err != nil {
return fmt.Errorf("invalid hashsum signature: %w", err)
}
wantHash, err := findFileHash(sumsBuf, archive)

View File

@@ -0,0 +1,43 @@
//go:build !noselfupdate
// +build !noselfupdate
package selfupdate
import (
"context"
"encoding/hex"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestVerify(t *testing.T) {
ctx := context.Background()
sumsBuf, err := os.ReadFile("testdata/verify/SHA256SUMS")
require.NoError(t, err)
hash, err := hex.DecodeString("b20b47f579a2c790ca752fb5d8e5651fade7d5867cbac0a4f71e805fc5c468d0")
require.NoError(t, err)
t.Run("NoError", func(t *testing.T) {
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
require.NoError(t, err)
})
t.Run("BadSig", func(t *testing.T) {
sumsBuf[0x60] ^= 1 // change the signature by one bit
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
assert.ErrorContains(t, err, "invalid signature")
sumsBuf[0x60] ^= 1 // undo the change
})
t.Run("BadSum", func(t *testing.T) {
hash[0] ^= 1 // change the SHA256 by one bit
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
assert.ErrorContains(t, err, "archive hash mismatch")
hash[0] ^= 1 // undo the change
})
t.Run("BadName", func(t *testing.T) {
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zipX", hash)
assert.ErrorContains(t, err, "unable to find hash")
})
}


@@ -129,11 +129,10 @@ func newServer(f fs.Fs, opt *dlnaflags.Options) (*server, error) {
FriendlyName: friendlyName,
RootDeviceUUID: makeDeviceUUID(friendlyName),
Interfaces: interfaces,
httpListenAddr: opt.ListenAddr,
f: f,
vfs: vfs.New(f, &vfsflags.Opt),
waitChan: make(chan struct{}),
httpListenAddr: opt.ListenAddr,
f: f,
vfs: vfs.New(f, &vfsflags.Opt),
}
s.services = map[string]UPnPService{


@@ -3,8 +3,8 @@ package docker
import (
"context"
_ "embed"
"path/filepath"
"strings"
"syscall"
"github.com/spf13/cobra"
@@ -30,6 +30,9 @@ var (
noSpec = false
)
//go:embed docker.md
var longHelp string
func init() {
cmdFlags := Command.Flags()
// Add command specific flags
@@ -47,7 +50,7 @@ func init() {
var Command = &cobra.Command{
Use: "docker",
Short: `Serve any remote on docker's volume plugin API.`,
Long: strings.ReplaceAll(longHelp, "|", "`") + vfs.Help,
Long: longHelp + vfs.Help,
Annotations: map[string]string{
"versionIntroduced": "v1.56",
"groups": "Filter",

View File

@@ -1,7 +1,3 @@
package docker
// Note: "|" will be replaced by backticks
var longHelp = `
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides a [docker volume plugin](/docker) based on it.
@@ -12,32 +8,31 @@ docker daemon and runs the corresponding code when necessary.
Docker plugins can run as a managed plugin under control of the docker daemon
or as an independent native service. For testing, you can just run it directly
from the command line, for example:
|||
```
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
|||
```
Running |rclone serve docker| will create that socket, listening for
Running `rclone serve docker` will create that socket, listening for
commands from Docker to create the necessary Volumes. Normally you need not
give the |--socket-addr| flag. The API will listen on the unix domain socket
at |/run/docker/plugins/rclone.sock|. In the example above rclone will create
a TCP socket and a small file |/etc/docker/plugins/rclone.spec| containing
the socket address. We use |sudo| because both paths are writeable only by
give the `--socket-addr` flag. The API will listen on the unix domain socket
at `/run/docker/plugins/rclone.sock`. In the example above rclone will create
a TCP socket and a small file `/etc/docker/plugins/rclone.spec` containing
the socket address. We use `sudo` because both paths are writeable only by
the root user.
If you later decide to change the listening socket, the docker daemon must be
restarted to reconnect to |/run/docker/plugins/rclone.sock|
or parse the new |/etc/docker/plugins/rclone.spec|. Until you restart, any
restarted to reconnect to `/run/docker/plugins/rclone.sock`
or parse the new `/etc/docker/plugins/rclone.spec`. Until you restart, any
volume-related docker commands will time out trying to access the old socket.
Running directly is supported on **Linux only**, not on Windows or macOS.
This is not a problem with managed plugin mode described in details
in the [full documentation](https://rclone.org/docker).
The command will create volume mounts under the path given by |--base-dir|
(by default |/var/lib/docker-volumes/rclone| available only to root)
and maintain the JSON formatted file |docker-plugin.state| in the rclone cache
The command will create volume mounts under the path given by `--base-dir`
(by default `/var/lib/docker-volumes/rclone` available only to root)
and maintain the JSON formatted file `docker-plugin.state` in the rclone cache
directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line, as well as set the path to the
config file and cache directory or adjust logging verbosity.
`


@@ -12,8 +12,7 @@ import (
"sync"
"time"
sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
"github.com/coreos/go-systemd/v22/daemon"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
@@ -87,7 +86,7 @@ func NewDriver(ctx context.Context, root string, mntOpt *mountlib.Options, vfsOp
})
// notify systemd
if err := sysdnotify.Ready(); err != nil {
if _, err := daemon.SdNotify(false, daemon.SdNotifyReady); err != nil {
return nil, fmt.Errorf("failed to notify systemd: %w", err)
}
@@ -100,7 +99,10 @@ func (drv *Driver) Exit() {
drv.mu.Lock()
defer drv.mu.Unlock()
reportErr(sysdnotify.Stopping())
reportErr(func() error {
_, err := daemon.SdNotify(false, daemon.SdNotifyStopping)
return err
}())
drv.monChan <- true // ask monitor to exit
for _, vol := range drv.volumes {
reportErr(vol.unmountAll())


@@ -6,8 +6,8 @@ package docker
import (
"os"
"github.com/coreos/go-systemd/activation"
"github.com/coreos/go-systemd/util"
"github.com/coreos/go-systemd/v22/activation"
"github.com/coreos/go-systemd/v22/util"
)
func systemdActivationFiles() []*os.File {

cmd/serve/nfs/filesystem.go Normal file

@@ -0,0 +1,159 @@
//go:build unix
// +build unix
package nfs
import (
"os"
"path"
"strings"
"time"
billy "github.com/go-git/go-billy/v5"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
)
// FS is our wrapper around the VFS to properly support billy.Filesystem interface
type FS struct {
vfs *vfs.VFS
}
// ReadDir implements read dir
func (f *FS) ReadDir(path string) (dir []os.FileInfo, err error) {
return f.vfs.ReadDir(path)
}
// Create implements creating new files
func (f *FS) Create(filename string) (billy.File, error) {
return f.vfs.Create(filename)
}
// Open opens a file
func (f *FS) Open(filename string) (billy.File, error) {
return f.vfs.Open(filename)
}
// OpenFile opens a file
func (f *FS) OpenFile(filename string, flag int, perm os.FileMode) (billy.File, error) {
return f.vfs.OpenFile(filename, flag, perm)
}
// Stat gets the file stat
func (f *FS) Stat(filename string) (os.FileInfo, error) {
return f.vfs.Stat(filename)
}
// Rename renames a file
func (f *FS) Rename(oldpath, newpath string) error {
return f.vfs.Rename(oldpath, newpath)
}
// Remove deletes a file
func (f *FS) Remove(filename string) error {
return f.vfs.Remove(filename)
}
// Join joins path elements
func (f *FS) Join(elem ...string) string {
return path.Join(elem...)
}
// TempFile is not implemented
func (f *FS) TempFile(dir, prefix string) (billy.File, error) {
return nil, os.ErrInvalid
}
// MkdirAll creates a directory and all the ones above it.
// It does not redirect to VFS.MkDirAll because that one doesn't
// honor the permissions.
func (f *FS) MkdirAll(filename string, perm os.FileMode) error {
parts := strings.Split(filename, "/")
for i := range parts {
current := strings.Join(parts[:i+1], "/")
_, err := f.Stat(current)
if err == vfs.ENOENT {
err = f.vfs.Mkdir(current, perm)
if err != nil {
return err
}
}
}
return nil
}
// Lstat gets the stats for symlink
func (f *FS) Lstat(filename string) (os.FileInfo, error) {
return f.vfs.Stat(filename)
}
// Symlink is not supported over NFS
func (f *FS) Symlink(target, link string) error {
return os.ErrInvalid
}
// Readlink is not supported
func (f *FS) Readlink(link string) (string, error) {
return "", os.ErrInvalid
}
// Chmod changes the file modes
func (f *FS) Chmod(name string, mode os.FileMode) error {
file, err := f.vfs.Open(name)
if err != nil {
return err
}
defer func() {
if err := file.Close(); err != nil {
fs.Logf(f, "Error while closing file: %v", err)
}
}()
return file.Chmod(mode)
}
// Lchown changes the owner of symlink
func (f *FS) Lchown(name string, uid, gid int) error {
return f.Chown(name, uid, gid)
}
// Chown changes owner of the file
func (f *FS) Chown(name string, uid, gid int) error {
file, err := f.vfs.Open(name)
if err != nil {
return err
}
defer func() {
if err := file.Close(); err != nil {
fs.Logf(f, "Error while closing file: %v", err)
}
}()
return file.Chown(uid, gid)
}
// Chtimes changes the access time and modified time
func (f *FS) Chtimes(name string, atime time.Time, mtime time.Time) error {
return f.vfs.Chtimes(name, atime, mtime)
}
// Chroot is not supported in VFS
func (f *FS) Chroot(path string) (billy.Filesystem, error) {
return nil, os.ErrInvalid
}
// Root returns the root of a VFS
func (f *FS) Root() string {
return f.vfs.Fs().Root()
}
// Capabilities exports the filesystem capabilities
func (f *FS) Capabilities() billy.Capability {
if f.vfs.Opt.CacheMode == vfscommon.CacheModeOff {
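// without an on-disk cache the VFS cannot buffer writes, so only read/seek are advertised, which is why NFS mounts without --vfs-cache-mode behave as read-only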
return billy.ReadCapability | billy.SeekCapability
}
return billy.WriteCapability | billy.ReadCapability |
billy.ReadAndWriteCapability | billy.SeekCapability | billy.TruncateCapability
}
// Interface check
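// (a compile-time assertion that *FS implements billy.Filesystem)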
var _ billy.Filesystem = (*FS)(nil)

cmd/serve/nfs/handler.go Normal file

@@ -0,0 +1,70 @@
//go:build unix
// +build unix
package nfs
import (
"context"
"net"
"github.com/go-git/go-billy/v5"
"github.com/rclone/rclone/vfs"
"github.com/willscott/go-nfs"
nfshelper "github.com/willscott/go-nfs/helpers"
)
// NewBackendAuthHandler creates a handler for the provided filesystem
func NewBackendAuthHandler(vfs *vfs.VFS) nfs.Handler {
return &BackendAuthHandler{vfs}
}
// BackendAuthHandler is an NFS backing that exposes a given file system in response to all mount requests.
type BackendAuthHandler struct {
vfs *vfs.VFS
}
// Mount backs Mount RPC Requests, allowing for access control policies.
func (h *BackendAuthHandler) Mount(ctx context.Context, conn net.Conn, req nfs.MountRequest) (status nfs.MountStatus, hndl billy.Filesystem, auths []nfs.AuthFlavor) {
status = nfs.MountStatusOk
hndl = &FS{vfs: h.vfs}
auths = []nfs.AuthFlavor{nfs.AuthFlavorNull}
return
}
// Change provides an interface for updating file attributes.
func (h *BackendAuthHandler) Change(fs billy.Filesystem) billy.Change {
if c, ok := fs.(billy.Change); ok {
return c
}
return nil
}
// FSStat provides information about a filesystem.
func (h *BackendAuthHandler) FSStat(ctx context.Context, f billy.Filesystem, s *nfs.FSStat) error {
total, _, free := h.vfs.Statfs()
s.TotalSize = uint64(total)
s.FreeSize = uint64(free)
s.AvailableSize = uint64(free)
return nil
}
// ToHandle is handled by the CachingHandler
func (h *BackendAuthHandler) ToHandle(f billy.Filesystem, s []string) []byte {
return []byte{}
}
// FromHandle is handled by the CachingHandler
func (h *BackendAuthHandler) FromHandle([]byte) (billy.Filesystem, []string, error) {
return nil, []string{}, nil
}
// HandleLimit is handled by the CachingHandler
func (h *BackendAuthHandler) HandleLimit() int {
return -1
}
func newHandler(vfs *vfs.VFS) nfs.Handler {
handler := NewBackendAuthHandler(vfs)
cacheHelper := nfshelper.NewCachingHandler(handler, 1024)
return cacheHelper
}
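
For orientation, a minimal sketch of how a handler like this is typically served with the willscott/go-nfs package imported above (the TCP address is a placeholder, `nfs.Serve` is assumed from that package's API, and the file's existing imports plus `fmt` are reused):

```go
// serveNFS exports the given VFS over NFS on a random local port.
func serveNFS(v *vfs.VFS) error {
	listener, err := net.Listen("tcp", "localhost:0") // port 0 = pick a free port
	if err != nil {
		return err
	}
	fmt.Println("NFS server listening on", listener.Addr())
	// newHandler wraps BackendAuthHandler in a caching handler (see above)
	return nfs.Serve(listener, newHandler(v)) // blocks while serving requests
}
```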

Some files were not shown because too many files have changed in this diff.