mirror of https://github.com/rclone/rclone.git synced 2026-02-24 08:32:53 +00:00

Compare commits


112 Commits

Author SHA1 Message Date
Nick Craig-Wood
3fa5a424a9 serve nbd: serve an rclone remote as a Network Block Device - WIP FIXME
TODO

- Need to finalise rclone/gonbdserver and upload and change go.mod/go.sum
- Remove unneeded dependencies from rclone/gonbdserver

Maybe make companion `mount nbd` command?

Fixes #7337
2024-07-30 14:04:07 +01:00
Nick Craig-Wood
9fb0afad88 vfs: chunked files which can be read and written at will
This introduces the vfs/chunked library which can open a file like
object which is stored in parts on the remote. This can be read and
written to anywhere and at any time.
2024-07-30 14:04:07 +01:00
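
As an aside, a minimal self-contained illustration of the chunked-file idea (not rclone's implementation - just the concept of a file-like object stored as fixed-size parts, readable and writable at any offset):

    // Toy model of a file stored in numbered parts. Missing parts
    // read back as zeros, so the file is sparse.
    package main

    import "fmt"

    const partSize = 4 // tiny for demonstration; real parts would be much larger

    type chunkedFile struct {
        parts map[int64][]byte // part number -> part contents
    }

    func newChunkedFile() *chunkedFile {
        return &chunkedFile{parts: map[int64][]byte{}}
    }

    // WriteAt writes p at offset off, creating parts as needed.
    func (f *chunkedFile) WriteAt(p []byte, off int64) {
        for i, b := range p {
            pos := off + int64(i)
            part, ok := f.parts[pos/partSize]
            if !ok {
                part = make([]byte, partSize)
                f.parts[pos/partSize] = part
            }
            part[pos%partSize] = b
        }
    }

    // ReadAt fills p from offset off, reading zeros for missing parts.
    func (f *chunkedFile) ReadAt(p []byte, off int64) {
        for i := range p {
            pos := off + int64(i)
            if part, ok := f.parts[pos/partSize]; ok {
                p[i] = part[pos%partSize]
            } else {
                p[i] = 0
            }
        }
    }

    func main() {
        f := newChunkedFile()
        f.WriteAt([]byte("hello"), 6) // write spanning two parts
        buf := make([]byte, 5)
        f.ReadAt(buf, 6)
        fmt.Printf("%s\n", buf) // hello
    }
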
Nick Craig-Wood
2f9c2cf75e vfs: add vfs.WriteFile as an analogue to os.WriteFile 2024-07-30 13:32:45 +01:00
Nick Craig-Wood
1ac18e5765 docs: s3: add section on using too much memory #7974 2024-07-30 09:51:30 +01:00
Nick Craig-Wood
3e8cee148a docs: link the workaround for big directory syncs in the FAQ #7974 2024-07-30 09:41:54 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short lifetimes
for added security, and they get renewed automatically. This means
that a client certificate can expire in the middle of a long-running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
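
A sketch of the general approach in Go standard library terms (illustrative only, not rclone's actual code): crypto/tls lets the client certificate be supplied by a callback, so it can be re-read from disk shortly before expiry instead of being loaded once at startup.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "sync"
        "time"
    )

    type reloadingCert struct {
        mu       sync.Mutex
        certFile string
        keyFile  string
        cert     *tls.Certificate
        notAfter time.Time
    }

    // get returns the cached certificate, reloading it from disk when
    // it is missing or within 30s of expiry.
    func (r *reloadingCert) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.cert == nil || time.Until(r.notAfter) < 30*time.Second {
            cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
            if err != nil {
                return nil, err
            }
            leaf, err := x509.ParseCertificate(cert.Certificate[0])
            if err != nil {
                return nil, err
            }
            r.cert, r.notAfter = &cert, leaf.NotAfter
        }
        return r.cert, nil
    }

    func main() {
        r := &reloadingCert{certFile: "client.crt", keyFile: "client.key"}
        cfg := &tls.Config{GetClientCertificate: r.get}
        _ = cfg // use with an http.Transport
    }
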
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, leading to an immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated; however, the replacement gives a go
vet warning, so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - this adapts the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
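
For context, a minimal sketch of the sd-notify mechanism involved (not rclone's code): a service writes plain-text datagrams such as READY=1 or STOPPING=1 to the unix socket named in $NOTIFY_SOCKET. Sending STOPPING=1 moves the unit into the deactivating state, so it must only be sent when the whole process is really shutting down, not when a single mount is unmounted over the rc.

    package main

    import (
        "net"
        "os"
    )

    // sdNotify sends one sd-notify state datagram to systemd, if the
    // process is running under it.
    func sdNotify(state string) error {
        addr := os.Getenv("NOTIFY_SOCKET")
        if addr == "" {
            return nil // not running under systemd
        }
        conn, err := net.Dial("unixgram", addr)
        if err != nil {
            return err
        }
        defer conn.Close()
        _, err = conn.Write([]byte(state))
        return err
    }

    func main() {
        _ = sdNotify("READY=1") // fine at startup
        // _ = sdNotify("STOPPING=1") // only on real process shutdown
    }
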
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, a gcid required for uploads was calculated locally. This process was 
resource-intensive and time-consuming. By first checking for a cached gcid 
on the server, we can potentially avoid the local calculation entirely. 
This significantly improves upload speed, especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
fix #7824
Commands like `rclone copy <somewhere> .` would spontaneously miss files
if `.` expanded to a path containing a fullwidth replacement character.
This was due to the incorrect order in which relative paths and decoding
were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The VFS used the hardcoded OS encoding when creating a temp file,
but decoded it with the encoding for the local filesystem
(--local-encoding) when copying it to the remote.
This caused failures when filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
2024-07-15 11:09:54 +01:00
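
With a remote control server running, the new call should be reachable as `rclone rc options/info`, and - combined with the blocks parameter added in afd2663057 above - presumably filterable like `rclone rc options/info blocks=main` (the block name here is an assumption for illustration).
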
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- moves the in-use options (Opt) from vfsflags to vfscommon
- changes os.FileMode to vfscommon.FileMode in parameters
- reworks vfscommon.FileMode and adds tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team, who regard sscanf as a mistake.

This change only uses sscan for integers and uses the correct routine
for everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
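
A minimal sketch of the parsing approach described (illustrative, not the actual configstruct code): use strconv.ParseBool for booleans and time.ParseDuration for durations, keeping fmt.Sscan only for integers.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // setValue parses s into dst with a type-appropriate routine.
    func setValue(dst interface{}, s string) error {
        switch d := dst.(type) {
        case *bool:
            v, err := strconv.ParseBool(s)
            if err != nil {
                return err
            }
            *d = v
        case *time.Duration:
            v, err := time.ParseDuration(s)
            if err != nil {
                return err
            }
            *d = v
        case *int:
            _, err := fmt.Sscan(s, d) // sscan is fine for integers
            return err
        default:
            return fmt.Errorf("unsupported type %T", dst)
        }
        return nil
    }

    func main() {
        var debug bool
        var wait time.Duration
        fmt.Println(setValue(&debug, "true"), debug) // <nil> true
        fmt.Println(setValue(&wait, "1h30m"), wait)  // <nil> 1h30m0s
    }
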
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime, so it is a waste of memory to store it as a
24-byte time.Time data structure in long-lived data structures. Even though
the golang sftp package uses time.Time as its external interface, we can
re-encode the value back to the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
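
The memory trade-off looks roughly like this (a sketch, not the backend's actual struct): keep the 4-byte wire format in long-lived structs and expand it to a 24-byte time.Time only on demand.

    package main

    import (
        "fmt"
        "time"
    )

    type remoteFile struct {
        name  string
        size  int64
        mtime uint32 // seconds since the unix epoch, as SFTP sends it
    }

    // ModTime re-encodes the stored uint32 into a time.Time on demand.
    func (f *remoteFile) ModTime() time.Time {
        return time.Unix(int64(f.mtime), 0)
    }

    func main() {
        f := remoteFile{name: "a.txt", size: 42, mtime: 1_720_000_000}
        fmt.Println(f.ModTime().UTC())
    }
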
wiserain
cdcf0e5cb8 pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014 pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 11:14:26 +01:00
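
A sketch of the listing decision the two new options control (illustrative only; the option names follow the commit message, the real backend code differs):

    package main

    import "fmt"

    type opts struct {
        fetchUntilEmptyPage       bool
        partialPageFetchThreshold int // percent of the limit
    }

    // keepFetching decides whether to request another page after
    // receiving got items against a requested limit.
    func keepFetching(got, limit int, o opts) bool {
        if got == 0 {
            return false // an empty page always means we are done
        }
        if got >= limit {
            return true // full page: more may follow
        }
        if o.fetchUntilEmptyPage {
            return true // Ceph workaround: only stop on an empty page
        }
        // Fetch again only if the short page is within the threshold
        // percentage of the limit.
        return o.partialPageFetchThreshold > 0 &&
            got*100 >= limit*o.partialPageFetchThreshold
    }

    func main() {
        o := opts{partialPageFetchThreshold: 90}
        fmt.Println(keepFetching(950, 1000, o)) // true: within 90% of the limit
        fmt.Println(keepFetching(500, 1000, o)) // false: well short of it
    }
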
Russ Bubley
c834eb7dcb sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix, when uploading a single-part file, if the
o.fetchAndUpdateMetadata() call failed, rclone would call
o.setMetaData() with a nil info, which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error, but it masked the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
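
The gcid scheme itself can be seen in the pikpak diff further down (calcGcid): the input is read in blocks, each block is SHA-1 hashed, and the final gcid is the SHA-1 of the concatenated block digests, with the block size scaled to the file size (from 256 KiB up to 2 MiB). A standalone sketch of that shape, with a fixed block size for brevity:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
        "io"
        "strings"
    )

    // gcidStyleHash hashes r in fixed-size blocks: each block's SHA-1
    // digest is fed into an outer SHA-1, whose digest is the result.
    func gcidStyleHash(r io.Reader, blockSize int64) (string, error) {
        total := sha1.New()
        block := sha1.New()
        for {
            block.Reset()
            n, err := io.CopyN(block, r, blockSize)
            if err != nil && n == 0 {
                if err != io.EOF {
                    return "", err
                }
                break // clean end of input
            }
            total.Write(block.Sum(nil))
        }
        return hex.EncodeToString(total.Sum(nil)), nil
    }

    func main() {
        sum, err := gcidStyleHash(strings.NewReader("hello world"), 4)
        fmt.Println(sum, err)
    }
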
wiserain
cbccad9491 pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1 s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo is doing something with this keyword, though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by Hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been emitting this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
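
The logic of the fix reduces to treating 405 like 404 when probing object existence; a sketch (not the actual backend code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // objectNotFound reports whether a HEAD status means "no such object".
    // 405 is returned instead of 404 when the addressed version is a
    // delete marker.
    func objectNotFound(status int) bool {
        return status == http.StatusNotFound || status == http.StatusMethodNotAllowed
    }

    func main() {
        fmt.Println(objectNotFound(404)) // true
        fmt.Println(objectNotFound(405)) // true: delete marker with versionId
        fmt.Println(objectNotFound(200)) // false
    }
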
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating thousands of lines of logs, making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch went wrong as it thought it was passed an object.

This should not happen normally, but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed, and every now and again we got this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

This happened because a file with the same name was created as a file
in the src and as a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed,
which was rate limited by Zoho.

However, Zoho can't have duplicate files anyway, so this fix just
removes that check and the PutUnchecked method, which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
342 changed files with 53420 additions and 39637 deletions


@@ -27,12 +27,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.20', 'go1.21']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.21', 'go1.22']
include:
- job_name: linux
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -59,14 +59,14 @@ jobs:
- job_name: mac_arm64
os: macos-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@@ -76,23 +76,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '>=1.22.0-rc.1'
go: '>=1.23.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.20
os: ubuntu-latest
go: '1.20'
quicktest: true
racequicktest: true
- job_name: go1.21
os: ubuntu-latest
go: '1.21'
quicktest: true
racequicktest: true
- job_name: go1.22
os: ubuntu-latest
go: '1.22'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -311,7 +311,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: '>=1.22.0-rc.1'
go-version: '>=1.23.0-rc.1'
- name: Set global environment variables
shell: bash


@@ -56,7 +56,7 @@ jobs:
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .

.gitignore (5 lines changed)

@@ -3,7 +3,9 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
@@ -16,6 +18,5 @@ fuzz-build.zip
Thumbs.db
__pycache__
.DS_Store
/docs/static/img/logos/
resource_windows_*.syso
.devcontainer

MANUAL.html (generated, 24678 lines changed): diff suppressed because it is too large

MANUAL.md (generated, 1458 lines changed): diff suppressed because it is too large

MANUAL.txt (generated, 23817 lines changed): diff suppressed because it is too large


@@ -239,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new


@@ -1 +1 @@
v1.67.0
v1.68.0


@@ -711,10 +711,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ClientOptions: policyClientOptions,
}
// Here we auth by setting one of cred, sharedKeyCred or f.svc
// Here we auth by setting one of cred, sharedKeyCred, f.svc or anonymous
var (
cred azcore.TokenCredential
sharedKeyCred *service.SharedKeyCredential
anonymous = false
)
switch {
case opt.EnvAuth:
@@ -874,6 +875,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.Account != "":
// Anonymous access
anonymous = true
default:
return nil, errors.New("no authentication method configured")
}
@@ -903,6 +907,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("create client failed: %w", err)
}
} else if anonymous {
// Anonymous public access
f.svc, err = service.NewClientWithNoCredential(opt.Endpoint, &clientOpt)
if err != nil {
return nil, fmt.Errorf("create public client failed: %w", err)
}
}
}
if f.svc == nil {


@@ -299,13 +299,14 @@ type Fs struct {
// Object describes a b2 object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // The object metadata if known - may be nil - with lower case keys
}
// ------------------------------------------------------------
@@ -1593,7 +1594,14 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
o.size = Size
// Use the UploadTimestamp if can't get file info
o.modTime = time.Time(UploadTimestamp)
return o.parseTimeString(Info[timeKey])
err = o.parseTimeString(Info[timeKey])
if err != nil {
return err
}
// For now, just set "mtime" in metadata
o.meta = make(map[string]string, 1)
o.meta["mtime"] = o.modTime.Format(time.RFC3339Nano)
return nil
}
// decodeMetaData sets the metadata in the object from an api.File
@@ -1695,6 +1703,16 @@ func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeStringHelper converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time
func parseTimeStringHelper(timeString string) (time.Time, error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC(), nil
}
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@@ -1702,12 +1720,12 @@ func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
modTime, err := parseTimeStringHelper(timeString)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
o.modTime = modTime
return nil
}
@@ -1861,6 +1879,14 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// Embryonic metadata support - just mtime
o.meta = make(map[string]string, 1)
modTime, err := parseTimeStringHelper(info.Info[timeKey])
if err == nil {
o.meta["mtime"] = modTime.Format(time.RFC3339Nano)
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@@ -1958,7 +1984,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err == nil {
fs.Debugf(o, "File is big enough for chunked streaming")
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil)
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil, options...)
if err != nil {
o.fs.putRW(rw)
return err
@@ -1990,7 +2016,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return err
}
calculatedSha1, _ := src.Hash(ctx, hash.SHA1)
if calculatedSha1 == "" {
@@ -2095,6 +2124,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(&response)
}
// Get modTime from the source; if --metadata is set, fetch the src metadata and get it from there.
// When metadata support is added to b2, this method will need a more generic name
func (o *Object) getModTime(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (time.Time, error) {
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return time.Time{}, fmt.Errorf("failed to read metadata from source object: %w", err)
}
// merge metadata into request and user metadata
for k, v := range meta {
k = strings.ToLower(k)
// For now, the only metadata we're concerned with is "mtime"
switch k {
case "mtime":
// mtime in meta overrides source ModTime
metaModTime, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", k, v, err)
} else {
modTime = metaModTime
}
default:
// Do nothing for now
}
}
return modTime, nil
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
@@ -2126,7 +2185,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil)
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
}


@@ -184,57 +184,126 @@ func TestParseTimeString(t *testing.T) {
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestMetadata(t *testing.T) {
ctx := context.Background()
original := random.String(1000)
contents := fstest.Gz(t, original)
mimeType := "text/html"
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, mimeType, nil)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// We currently have a limited amount of metadata to test with B2
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
return headers
}
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limit to milliseconds since x-bz-info-src_last_modified_millis can't support any
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
// mtime
for k, v := range metadata {
got := o.meta[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 kB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)


@@ -91,7 +91,7 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := 0
chunkSize := defaultChunkSize
@@ -104,11 +104,6 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
parts++
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
@@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)


@@ -33,7 +33,7 @@ import (
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@@ -123,10 +123,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@@ -338,7 +338,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -368,7 +368,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@@ -708,7 +708,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@@ -743,7 +743,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})


@@ -3776,7 +3776,7 @@ file named "foo ' \.txt":
The result is a JSON array of matches, for example:
[
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
@@ -3792,7 +3792,7 @@ The result is a JSON array of matches, for example:
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
]`,
}}
// Command the backend to run a named command


@@ -566,7 +566,7 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)


@@ -1487,16 +1487,38 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.createTime, srcObj.modTime, srcObj.size, srcObj.md5)
if err == nil {
var createTime time.Time
var createTimeMeta bool
var modTime time.Time
var modTimeMeta bool
if meta != nil {
createTime, createTimeMeta = srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta = srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
}
if bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
// Workaround necessary when destination was a trashed file, to avoid the copied file also being in trash (bug in api?)
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
} else if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
@@ -1523,12 +1545,30 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantMove
}
err := f.mkParentDir(ctx, remote)
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if err != nil {
return nil, err
}
if err := f.mkParentDir(ctx, remote); err != nil {
return nil, err
}
info, err := f.copyOrMove(ctx, "mv", srcObj.filePath(), remote)
if err != nil && meta != nil {
createTime, createTimeMeta := srcObj.parseFsMetadataTime(meta, "btime")
if !createTimeMeta {
createTime = srcObj.createTime
}
modTime, modTimeMeta := srcObj.parseFsMetadataTime(meta, "mtime")
if !modTimeMeta {
modTime = srcObj.modTime
}
if createTimeMeta || modTimeMeta {
info, err = f.createOrUpdate(ctx, remote, createTime, modTime, info.Size, info.MD5)
}
}
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
@@ -1786,6 +1826,20 @@ func (o *Object) readMetaData(ctx context.Context, force bool) (err error) {
return o.setMetaData(info)
}
// parseFsMetadataTime parses a time string from fs.Metadata with key
func (o *Object) parseFsMetadataTime(m fs.Metadata, key string) (t time.Time, ok bool) {
value, ok := m[key]
if ok {
var err error
t, err = time.Parse(time.RFC3339Nano, value) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", key, value, err)
ok = false
}
}
return t, ok
}
// ModTime returns the modification time of the object
//
// It attempts to read the objects mtime and if that isn't present the
@@ -1957,21 +2011,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var createdTime string
var modTime string
if meta != nil {
if v, ok := meta["btime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v) // metadata stores RFC3339Nano timestamps
if err != nil {
fs.Debugf(o, "failed to parse metadata btime: %q: %v", v, err)
} else {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if t, ok := o.parseFsMetadataTime(meta, "btime"); ok {
createdTime = api.Rfc3339Time(t).String() // jottacloud api wants RFC3339 timestamps
}
if v, ok := meta["mtime"]; ok {
t, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata mtime: %q: %v", v, err)
} else {
modTime = api.Rfc3339Time(t).String()
}
if t, ok := o.parseFsMetadataTime(meta, "mtime"); ok {
modTime = api.Rfc3339Time(t).String()
}
}
if modTime == "" { // prefer mtime in meta as Modified time, fallback to source ModTime


@@ -59,7 +59,7 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
//"utime" - read-only
//"content-type" - read-only
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, "text/html", metadata)
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, false, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()


@@ -1568,32 +1568,47 @@ func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
}
func cleanRootPath(s string, noUNC bool, enc encoder.MultiEncoder) string {
if runtime.GOOS != "windows" || !strings.HasPrefix(s, "\\") {
if !filepath.IsAbs(s) {
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
} else {
s = filepath.Clean(s)
}
}
var vol string
if runtime.GOOS == "windows" {
s = filepath.ToSlash(s)
vol := filepath.VolumeName(s)
vol = filepath.VolumeName(s)
if vol == `\\?` && len(s) >= 6 {
// `\\?\C:`
vol = s[:6]
}
s = vol + enc.FromStandardPath(s[len(vol):])
s = filepath.FromSlash(s)
if !noUNC {
// Convert to UNC
s = file.UNCPath(s)
}
return s
s = s[len(vol):]
}
// Don't use FromStandardPath. Make sure Dot (`.`, `..`) as name will not be reencoded
// Take care of the case Standard: .// (the first dot means current directory)
if enc != encoder.Standard {
s = filepath.ToSlash(s)
parts := strings.Split(s, "/")
encoded := make([]string, len(parts))
changed := false
for i, p := range parts {
if (p == ".") || (p == "..") {
encoded[i] = p
continue
}
part := enc.FromStandardName(p)
changed = changed || part != p
encoded[i] = part
}
if changed {
s = strings.Join(encoded, "/")
}
s = filepath.FromSlash(s)
}
if runtime.GOOS == "windows" {
s = vol + s
}
s2, err := filepath.Abs(s)
if err == nil {
s = s2
}
if !noUNC {
// Convert to UNC. It does nothing on non windows platforms.
s = file.UNCPath(s)
}
s = enc.FromStandardPath(s)
return s
}


@@ -2538,6 +2538,9 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, src fs.Obje
}
// Set the mod time now and read metadata
info, err = o.fs.fetchAndUpdateMetadata(ctx, src, options, o)
if err != nil {
return nil, fmt.Errorf("failed to fetch and update metadata: %w", err)
}
return info, o.setMetaData(info)
}


@@ -379,7 +379,7 @@ func (f *Fs) putWithMeta(ctx context.Context, t *testing.T, file *fstest.Item, p
}
expectedMeta.Set("permissions", marshalPerms(t, perms))
obj := fstests.PutTestContentsMetadata(ctx, t, f, file, content, true, "plain/text", expectedMeta)
obj := fstests.PutTestContentsMetadata(ctx, t, f, file, false, content, true, "plain/text", expectedMeta)
do, ok := obj.(fs.Metadataer)
require.True(t, ok)
actualMeta, err := do.Metadata(ctx)


@@ -26,7 +26,10 @@ package quickxorhash
// OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
// PERFORMANCE OF THIS SOFTWARE.
import "hash"
import (
"crypto/subtle"
"hash"
)
const (
// BlockSize is the preferred size for hashing
@@ -48,6 +51,11 @@ func New() hash.Hash {
return &quickXorHash{}
}
// xor dst with src
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}
// Write (via the embedded io.Writer interface) adds more data to the running hash.
// It never returns an error.
//

View File

@@ -1,20 +0,0 @@
//go:build !go1.20
package quickxorhash
func xorBytes(dst, src []byte) int {
n := len(dst)
if len(src) < n {
n = len(src)
}
if n == 0 {
return 0
}
dst = dst[:n]
//src = src[:n]
src = src[:len(dst)] // remove bounds check in loop
for i := range dst {
dst[i] ^= src[i]
}
return n
}


@@ -1,9 +0,0 @@
//go:build go1.20
package quickxorhash
import "crypto/subtle"
func xorBytes(dst, src []byte) int {
return subtle.XORBytes(dst, src, dst)
}


@@ -176,7 +176,7 @@ type File struct {
FileCategory string `json:"file_category,omitempty"` // "AUDIO", "VIDEO"
FileExtension string `json:"file_extension,omitempty"`
FolderType string `json:"folder_type,omitempty"`
Hash string `json:"hash,omitempty"` // sha1 but NOT a valid file hash. looks like a torrent hash
Hash string `json:"hash,omitempty"` // custom hash with a form of sha1sum
IconLink string `json:"icon_link,omitempty"`
ID string `json:"id,omitempty"`
Kind string `json:"kind,omitempty"` // "drive#file"
@@ -486,7 +486,7 @@ type RequestNewFile struct {
ParentID string `json:"parent_id"`
FolderType string `json:"folder_type"`
// only when uploading a new file
Hash string `json:"hash,omitempty"` // sha1sum
Hash string `json:"hash,omitempty"` // gcid
Resumable map[string]string `json:"resumable,omitempty"` // {"provider": "PROVIDER_ALIYUN"}
Size int64 `json:"size,omitempty"`
UploadType string `json:"upload_type,omitempty"` // "UPLOAD_TYPE_FORM" or "UPLOAD_TYPE_RESUMABLE"


@@ -8,18 +8,22 @@ import (
"errors"
"fmt"
"io"
"math/rand"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/rest"
)
// Globals
const (
cachePrefix = "rclone-pikpak-sha1sum-"
cachePrefix = "rclone-pikpak-gcid-"
)
// requestDecompress requests decompress of compressed files
@@ -82,19 +86,21 @@ func (f *Fs) getVIPInfo(ctx context.Context) (info *api.VIP, err error) {
// action can be one of batch{Copy,Delete,Trash,Untrash}
func (f *Fs) requestBatchAction(ctx context.Context, action string, req *api.RequestBatch) (err error) {
opts := rest.Opts{
Method: "POST",
Path: "/drive/v1/files:" + action,
NoResponse: true, // Only returns `{"task_id":""}
Method: "POST",
Path: "/drive/v1/files:" + action,
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, &req, nil)
resp, err = f.rst.CallJSON(ctx, &opts, &req, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("batch action %q failed: %w", action, err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// requestNewTask requests a new api.NewTask and returns api.Task
@@ -148,6 +154,9 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
}
return f.shouldRetry(ctx, resp, err)
})
if err == nil {
info.Name = f.opt.Enc.ToStandardName(info.Name)
}
return
}
@@ -179,8 +188,8 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if checkPhase {
if err == nil && info.Phase != api.PhaseTypeComplete {
// could be pending right after file is created/uploaded.
return true, errors.New(info.Phase)
// could be pending right after the task is created
return true, fmt.Errorf("%s (%s) is still in %s", info.Name, info.Type, info.Phase)
}
}
return f.shouldRetry(ctx, resp, err)
@@ -188,6 +197,18 @@ func (f *Fs) getTask(ctx context.Context, ID string, checkPhase bool) (info *api
return
}
// waitTask waits for async tasks to be completed
func (f *Fs) waitTask(ctx context.Context, ID string) (err error) {
time.Sleep(taskWaitTime)
if info, err := f.getTask(ctx, ID, true); err != nil {
if info == nil {
return fmt.Errorf("can't verify the task is completed: %q", ID)
}
return fmt.Errorf("can't verify the task is completed: %#v", info)
}
return
}
// deleteTask remove a task having the specified ID
func (f *Fs) deleteTask(ctx context.Context, ID string, deleteFiles bool) (err error) {
params := url.Values{}
@@ -235,16 +256,42 @@ func (f *Fs) requestShare(ctx context.Context, req *api.RequestShare) (info *api
return
}
// Read the sha1 of in returning a reader which will read the same contents
// getGcid retrieves Gcid cached in API server
func (f *Fs) getGcid(ctx context.Context, src fs.ObjectInfo) (gcid string, err error) {
cid, err := calcCid(ctx, src)
if err != nil {
return
}
params := url.Values{}
params.Set("cid", cid)
params.Set("file_size", strconv.FormatInt(src.Size(), 10))
opts := rest.Opts{
Method: "GET",
Path: "/drive/v1/resource/cid",
Parameters: params,
ExtraHeaders: map[string]string{"x-device-id": f.deviceID},
}
info := struct {
Gcid string `json:"gcid,omitempty"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
}
return info.Gcid, nil
}
// Read the gcid of in returning a reader which will read the same contents
//
// The cleanup function should be called when out is finished with
// regardless of whether this function returned an error or not.
func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reader, cleanup func(), err error) {
// we need an SHA1
hash := sha1.New()
// use the teeReader to write to the local file AND calculate the SHA1 while doing so
teeReader := io.TeeReader(in, hash)
func readGcid(in io.Reader, size, threshold int64) (gcid string, out io.Reader, cleanup func(), err error) {
// nothing to clean up by default
cleanup = func() {}
@@ -267,8 +314,11 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
_ = os.Remove(tempFile.Name()) // delete the cache file after we are done - may be deleted already
}
// copy the ENTIRE file to disc and calculate the SHA1 in the process
if _, err = io.Copy(tempFile, teeReader); err != nil {
// use the teeReader to write to the local file AND calculate the gcid while doing so
teeReader := io.TeeReader(in, tempFile)
// copy the ENTIRE file to disk and calculate the gcid in the process
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// jump to the start of the local file so we can pass it along
@@ -279,15 +329,102 @@ func readSHA1(in io.Reader, size, threshold int64) (sha1sum string, out io.Reade
// replace the already read source with a reader of our cached file
out = tempFile
} else {
// that's a small file, just read it into memory
var inData []byte
inData, err = io.ReadAll(teeReader)
if err != nil {
buf := &bytes.Buffer{}
teeReader := io.TeeReader(in, buf)
if gcid, err = calcGcid(teeReader, size); err != nil {
return
}
// set the reader to our read memory block
out = bytes.NewReader(inData)
out = buf
}
return hex.EncodeToString(hash.Sum(nil)), out, cleanup, nil
return
}
// calcGcid calculates Gcid from reader
//
// Gcid is a custom hash used to index a file's contents
func calcGcid(r io.Reader, size int64) (string, error) {
calcBlockSize := func(j int64) int64 {
var psize int64 = 0x40000
for float64(j)/float64(psize) > 0x200 && psize < 0x200000 {
psize = psize << 1
}
return psize
}
totalHash := sha1.New()
blockHash := sha1.New()
readSize := calcBlockSize(size)
for {
blockHash.Reset()
if n, err := io.CopyN(blockHash, r, readSize); err != nil && n == 0 {
if err != io.EOF {
return "", err
}
break
}
totalHash.Write(blockHash.Sum(nil))
}
return hex.EncodeToString(totalHash.Sum(nil)), nil
}
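As a worked illustration (a standalone sketch, not part of the diff): with the constants above, gcid uses 256 KiB blocks for files up to 128 MiB, then doubles the block size until it hits the 2 MiB cap. The hypothetical program below prints the chosen block size for a few sizes:
package main

import "fmt"

func blockSize(size int64) int64 {
	psize := int64(0x40000) // start at 256 KiB, as in calcBlockSize above
	for float64(size)/float64(psize) > 0x200 && psize < 0x200000 {
		psize <<= 1 // double until <= 512 blocks or the 2 MiB cap
	}
	return psize
}

func main() {
	for _, size := range []int64{1 << 20, 200 << 20, 10 << 30} {
		fmt.Printf("size=%d -> block=%d\n", size, blockSize(size))
	}
}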
// calcCid calculates Cid from source
//
// Cid is a simplified version of Gcid
func calcCid(ctx context.Context, src fs.ObjectInfo) (cid string, err error) {
srcObj := fs.UnWrapObjectInfo(src)
if srcObj == nil {
return "", fmt.Errorf("failed to unwrap object from src: %s", src)
}
size := src.Size()
hash := sha1.New()
var rc io.ReadCloser
readHash := func(start, length int64) (err error) {
end := start + length - 1
if rc, err = srcObj.Open(ctx, &fs.RangeOption{Start: start, End: end}); err != nil {
return fmt.Errorf("failed to open src with range (%d, %d): %w", start, end, err)
}
defer fs.CheckClose(rc, &err)
_, err = io.Copy(hash, rc)
return err
}
if size <= 0xF000 { // 61440 = 60KB
err = readHash(0, size)
} else { // 20KB from three different parts
for _, start := range []int64{0, size / 3, size - 0x5000} {
err = readHash(start, 0x5000)
if err != nil {
break
}
}
}
if err != nil {
return "", fmt.Errorf("failed to hash: %w", err)
}
cid = strings.ToUpper(hex.EncodeToString(hash.Sum(nil)))
return
}
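To make the sampling concrete — a minimal standalone sketch (assuming the same 0x5000/20 KiB sample length and 0xF000/60 KiB cutoff as above) that prints the three ranges cid hashes for a large file:
package main

import "fmt"

func main() {
	size := int64(1 << 30) // e.g. a 1 GiB source, well above the 60 KiB cutoff
	for _, start := range []int64{0, size / 3, size - 0x5000} {
		fmt.Printf("hash bytes [%d, %d)\n", start, start+0x5000)
	}
}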
// randomly generates device id used for request header 'x-device-id'
//
// original javascript implementation
//
// return "xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx".replace(/[xy]/g, (e) => {
// const t = (16 * Math.random()) | 0;
// return ("x" == e ? t : (3 & t) | 8).toString(16);
// });
func genDeviceID() string {
base := []byte("xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx")
for i, char := range base {
switch char {
case 'x':
base[i] = fmt.Sprintf("%x", rand.Intn(16))[0]
case 'y':
base[i] = fmt.Sprintf("%x", rand.Intn(16)&3|8)[0]
}
}
return string(base)
}
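As a usage sketch the generator runs standalone (same algorithm as above, with a main added purely for illustration):
package main

import (
	"fmt"
	"math/rand"
)

func genDeviceID() string {
	base := []byte("xxxxxxxxxxxx4xxxyxxxxxxxxxxxxxxx")
	for i, char := range base {
		switch char {
		case 'x':
			base[i] = fmt.Sprintf("%x", rand.Intn(16))[0]
		case 'y':
			base[i] = fmt.Sprintf("%x", rand.Intn(16)&3|8)[0]
		}
	}
	return string(base)
}

func main() {
	// Prints a UUIDv4-shaped 32-char hex id: '4' fixed at index 12,
	// index 16 drawn from [89ab].
	fmt.Println(genDeviceID())
}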

View File

@@ -7,8 +7,6 @@ package pikpak
// md5sum is not always available, sometimes given empty.
// sha1sum used for upload differs from the one with official apps.
// Trashed files are not restored to the original location when using `batchUntrash`
// Can't stream without `--vfs-cache-mode=full`
@@ -69,7 +67,7 @@ const (
rcloneEncryptedClientSecret = "aqrmB6M1YJ1DWCBxVxFSjFo7wzWEky494YMmkqgAl1do1WKOe2E"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
waitTime = 500 * time.Millisecond
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
@@ -276,6 +274,7 @@ type Fs struct {
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
rootFolderID string // the id of the root folder
deviceID string // device id used for api requests
client *http.Client // authorized client
m configmap.Mapper
tokenMu *sync.Mutex // when renewing tokens
@@ -291,6 +290,7 @@ type Object struct {
modTime time.Time // modification time of the object
mimeType string // The object MIME type
parent string // ID of the parent directories
gcid string // custom hash of the object
md5sum string // md5sum of the object
link *api.Link // link to download the object
linkMu *sync.Mutex
@@ -490,6 +490,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
CanHaveEmptyDirectories: true, // can have empty directories
NoMultiThreading: true, // can't have multiple threads downloading
}).Fill(ctx, f)
f.deviceID = genDeviceID()
if err := f.newClientWithPacer(ctx); err != nil {
return nil, err
@@ -917,19 +918,21 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) (err error) {
opts := rest.Opts{
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
NoResponse: true, // Only returns `{"task_id":""}
Method: "PATCH",
Path: "/drive/v1/files/trash:empty",
}
info := struct {
TaskID string `json:"task_id"`
}{}
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.Call(ctx, &opts)
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("couldn't empty trash: %w", err)
}
return nil
return f.waitTask(ctx, info.TaskID)
}
// Move the object
@@ -1015,6 +1018,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
o = &Object{
fs: f,
remote: remote,
parent: dirID,
size: size,
modTime: modTime,
linkMu: new(sync.Mutex),
@@ -1047,7 +1051,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Create temporary object
// Create temporary object - still missing id, mimeType, gcid, md5sum
dstObj, dstLeaf, dstParentID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
@@ -1059,7 +1063,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
}
// Manually update info of moved object to save API calls
dstObj.id = srcObj.id
dstObj.mimeType = srcObj.mimeType
dstObj.gcid = srcObj.gcid
dstObj.md5sum = srcObj.md5sum
dstObj.hasMetaData = true
if srcLeaf != dstLeaf {
// Rename
@@ -1067,16 +1076,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("move: couldn't rename moved file: %w", err)
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
} else {
// Update info
err = dstObj.readMetaData(ctx)
if err != nil {
return nil, fmt.Errorf("move: couldn't locate moved file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
}
return dstObj, nil
}
@@ -1116,7 +1116,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Create temporary object
// Create temporary object - still missing id, mimeType, gcid, md5sum
dstObj, dstLeaf, dstParentID, err := f.createObject(ctx, remote, srcObj.modTime, srcObj.size)
if err != nil {
return nil, err
@@ -1130,6 +1130,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
// Update info of the copied object with new parent but source name
if info, err := dstObj.fs.readMetaDataForPath(ctx, srcObj.remote); err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
} else if err = dstObj.setMetaData(info); err != nil {
return nil, err
}
// Can't copy and change name in one step so we have to check if we have
// the correct name after copy
@@ -1144,16 +1150,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("copy: couldn't rename copied file: %w", err)
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
} else {
// Update info
err = dstObj.readMetaData(ctx)
if err != nil {
return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
}
return dstObj, dstObj.setMetaData(info)
}
return dstObj, nil
}
@@ -1222,7 +1219,7 @@ func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, s
return
}
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string, size int64, options ...fs.OpenOption) (info *api.File, err error) {
// determine upload type
uploadType := api.UploadTypeResumable
// if size >= 0 && size < int64(5*fs.Mebi) {
@@ -1237,7 +1234,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
ParentID: parentIDForRequest(dirID),
FolderType: "NORMAL",
Size: size,
Hash: strings.ToUpper(sha1Str),
Hash: strings.ToUpper(gcid),
UploadType: uploadType,
}
if uploadType == api.UploadTypeResumable {
@@ -1251,6 +1248,12 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
return nil, fmt.Errorf("invalid response: %+v", new)
} else if new.File.Phase == api.PhaseTypeComplete {
// early return; in case of zero-byte objects
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still of type `*accounting.Account` (meaning it is unused)
// it is considered a server side copy as no incoming/outgoing traffic occurs at all
acc.ServerSideTransferStart()
acc.ServerSideCopyEnd(size)
}
return new.File, nil
}
@@ -1262,8 +1265,8 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
}
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", waitTime)
time.Sleep(waitTime)
fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
time.Sleep(taskWaitTime)
})()
if uploadType == api.UploadTypeForm && new.Form != nil {
@@ -1277,12 +1280,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
fs.Debugf(leaf, "sleeping for %v before checking upload status", waitTime)
time.Sleep(waitTime)
if _, err = f.getTask(ctx, new.Task.ID, true); err != nil {
return nil, fmt.Errorf("unable to complete the upload: %w", err)
}
return new.File, nil
return new.File, f.waitTask(ctx, new.Task.ID)
}
// Put the object
@@ -1506,6 +1504,7 @@ func (o *Object) setMetaData(info *api.File) (err error) {
} else {
o.parent = info.ParentID
}
o.gcid = info.Hash
o.md5sum = info.Md5Checksum
if info.Links.ApplicationOctetStream != nil {
o.link = info.Links.ApplicationOctetStream
@@ -1579,9 +1578,6 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
}
if o.md5sum == "" {
return "", nil
}
return strings.ToLower(o.md5sum), nil
}
@@ -1705,25 +1701,34 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
return err
}
// Calculate sha1sum; grabbed from package jottacloud
hashStr, err := src.Hash(ctx, hash.SHA1)
if err != nil || hashStr == "" {
// unwrap the accounting from the input, we use wrap to put it
// back on after the buffering
var wrap accounting.WrapFn
in, wrap = accounting.UnWrap(in)
var cleanup func()
hashStr, in, cleanup, err = readSHA1(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate SHA1: %w", err)
// Calculate gcid; grabbed from package jottacloud
gcid, err := o.fs.getGcid(ctx, src)
if err != nil || gcid == "" {
fs.Debugf(o, "calculating gcid: %v", err)
if srcObj := fs.UnWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
// No buffering; directly calculate gcid from source
rc, err := srcObj.Open(ctx)
if err != nil {
return fmt.Errorf("failed to open src: %w", err)
}
defer fs.CheckClose(rc, &err)
if gcid, err = calcGcid(rc, srcObj.Size()); err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
} else {
var cleanup func()
gcid, in, cleanup, err = readGcid(in, size, int64(o.fs.opt.HashMemoryThreshold))
defer cleanup()
if err != nil {
return fmt.Errorf("failed to calculate gcid: %w", err)
}
}
// Wrap the accounting back onto the stream
in = wrap(in)
}
fs.Debugf(o, "gcid = %s", gcid)
if !withTemp {
info, err := o.fs.upload(ctx, in, leaf, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, leaf, dirID, gcid, size, options...)
if err != nil {
return err
}
@@ -1732,7 +1737,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
// We have to fall back to upload + rename
tempName := "rcloneTemp" + random.String(8)
info, err := o.fs.upload(ctx, in, tempName, dirID, hashStr, size, options...)
info, err := o.fs.upload(ctx, in, tempName, dirID, gcid, size, options...)
if err != nil {
return err
}

View File

@@ -1415,8 +1415,8 @@ func init() {
Help: "Magalu BR Southeast 1 endpoint",
Provider: "Magalu",
}, {
Value: "br-se1.magaluobjects.com",
Help: "Magalu BR Northest 1 endpoint",
Value: "br-ne1.magaluobjects.com",
Help: "Magalu BR Northeast 1 endpoint",
Provider: "Magalu",
}},
}, {
@@ -2246,7 +2246,11 @@ for more info.
Some providers (e.g. AWS, Aliyun OSS, Netease COS, or Tencent COS) require this set to
false - rclone will do this automatically based on the provider
setting.`,
setting.
Note that if your bucket isn't a valid DNS name, i.e. has '.' or '_' in it,
you'll need to set this to true.
`,
Default: true,
Advanced: true,
}, {
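For illustration only — a hypothetical rclone.conf stanza pairing the note above with an explicit setting for a dotted bucket name (remote name, provider and endpoint are made up):

[mys3]
type = s3
provider = Other
endpoint = https://s3.example.com
force_path_style = true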
@@ -5422,7 +5426,7 @@ func (f *Fs) headObject(ctx context.Context, req *s3.HeadObjectInput) (resp *s3.
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if awsErr.StatusCode() == http.StatusNotFound {
if awsErr.StatusCode() == http.StatusNotFound || awsErr.StatusCode() == http.StatusMethodNotAllowed {
return nil, fs.ErrorObjectNotFound
}
}

View File

@@ -58,7 +58,7 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
// "tier" - read only
// "btime" - read only
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, contents, true, "text/html", metadata)
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", metadata)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()

View File

@@ -75,8 +75,18 @@ func init() {
Help: "SSH password, leave blank to use ssh-agent.",
IsPassword: true,
}, {
Name: "key_pem",
Help: "Raw PEM-encoded private key.\n\nIf specified, will override key_file parameter.",
Name: "key_pem",
Help: `Raw PEM-encoded private key.
Note that this should be on a single line with line endings replaced with '\n', eg
key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
This will generate the single line correctly:
awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
If specified, it will override the key_file parameter.`,
Sensitive: true,
}, {
Name: "key_file",
@@ -339,13 +349,13 @@ cost of using more memory.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
If you are doing a sync or copy then make sure concurrency is one more
If you are doing a sync or copy then make sure connections is one more
than the sum of |--transfers| and |--checkers|.
If you use |--check-first| then it just needs to be one more than the
maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
So for |connections 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
@@ -561,7 +571,7 @@ type Object struct {
fs *Fs
remote string
size int64 // size of the object
modTime time.Time // modification time of the object
modTime uint32 // modification time of the object as unix time
mode os.FileMode // mode bits from the file
md5sum *string // Cached MD5 checksum
sha1sum *string // Cached SHA1 checksum
@@ -815,13 +825,13 @@ func (f *Fs) drainPool(ctx context.Context) (err error) {
if cErr := c.closed(); cErr == nil {
cErr = c.close()
if cErr != nil {
err = cErr
fs.Debugf(f, "Ignoring error closing connection: %v", cErr)
}
}
f.pool[i] = nil
}
f.pool = nil
return err
return nil
}
// NewFs creates a new Fs object from the name and root. It connects to
@@ -1957,7 +1967,7 @@ func (o *Object) Size() int64 {
// ModTime returns the modification time of the remote sftp file
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
return time.Unix(int64(o.modTime), 0)
}
// path returns the native SFTP path of the object
@@ -1972,7 +1982,7 @@ func (o *Object) shellPath() string {
// setMetadata updates the info in the object from the stat result passed in
func (o *Object) setMetadata(info os.FileInfo) {
o.modTime = info.ModTime()
o.modTime = info.Sys().(*sftp.FileStat).Mtime
o.size = info.Size()
o.mode = info.Mode()
}
@@ -2195,7 +2205,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// In the specific case of o.fs.opt.SetModTime == false
// if the object wasn't found then don't return an error
fs.Debugf(o, "Not found after upload with set_modtime=false so returning best guess")
o.modTime = src.ModTime(ctx)
o.modTime = uint32(src.ModTime(ctx).Unix())
o.size = src.Size()
o.mode = os.FileMode(0666) // regular file
} else if err != nil {

View File

@@ -278,6 +278,36 @@ provider.`,
Value: "pca",
Help: "OVH Public Cloud Archive",
}},
}, {
Name: "fetch_until_empty_page",
Help: `When paginating, always fetch unless we receive an empty page.
Consider using this option if rclone listings show fewer objects
than expected, or if repeated syncs copy unchanged objects.
It is safe to enable this, but rclone may make more API calls than
necessary.
This is one of a pair of workarounds to handle implementations
of the Swift API that do not implement pagination as expected. See
also "partial_page_fetch_threshold".`,
Default: false,
Advanced: true,
}, {
Name: "partial_page_fetch_threshold",
Help: `When paginating, fetch if the current page is within this percentage of the limit.
Consider using this option if rclone listings show fewer objects
than expected, or if repeated syncs copy unchanged objects.
It is safe to enable this, but rclone may make more API calls than
necessary.
This is one of a pair of workarounds to handle implementations
of the Swift API that do not implement pagination as expected. See
also "fetch_until_empty_page".`,
Default: 0,
Advanced: true,
}}, SharedOptions...),
})
}
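For illustration only — a hypothetical rclone.conf stanza using the new workarounds (you would normally pick whichever one suits your provider; the remote name is made up, and 90 means a page holding at least 90% of the limit triggers another fetch):

[myswift]
type = swift
fetch_until_empty_page = true
partial_page_fetch_threshold = 90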
@@ -308,6 +338,8 @@ type Options struct {
NoLargeObjects bool `config:"no_large_objects"`
UseSegmentsContainer fs.Tristate `config:"use_segments_container"`
Enc encoder.MultiEncoder `config:"encoding"`
FetchUntilEmptyPage bool `config:"fetch_until_empty_page"`
PartialPageFetchThreshold int `config:"partial_page_fetch_threshold"`
}
// Fs represents a remote swift server
@@ -462,6 +494,8 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
ConnectTimeout: 10 * ci.ConnectTimeout, // Use the timeouts in the transport
Timeout: 10 * ci.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(ctx),
FetchUntilEmptyPage: opt.FetchUntilEmptyPage,
PartialPageFetchThreshold: opt.PartialPageFetchThreshold,
}
if opt.EnvAuth {
err := c.ApplyEnvironment()

View File

@@ -163,7 +163,7 @@ type BatchUpdateFilePropertiesRequest struct {
// SendFilePayloadResponse represents the JSON API object that's received
// in response to uploading a file's body to the CDN URL.
type SendFilePayloadResponse struct {
Size int `json:"size"`
Size int64 `json:"size"`
ContentType string `json:"contentType"`
Md5 string `json:"md5"`
Message string `json:"message"`

View File

@@ -1,5 +1,3 @@
//go:build go1.20
package union
import (

View File

@@ -70,8 +70,17 @@ type ItemInfo struct {
Item Item `json:"data"`
}
// Links contains Cursor information
type Links struct {
Cursor struct {
HasNext bool `json:"has_next"`
Next string `json:"next"`
} `json:"cursor"`
}
// ItemList contains multiple Zoho Items
type ItemList struct {
Links Links `json:"links"`
Items []Item `json:"data"`
}

View File

@@ -289,6 +289,10 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
if resp != nil && resp.StatusCode == 429 {
fs.Errorf(nil, "zoho: rate limit error received, sleeping for 60s: %v", err)
time.Sleep(60 * time.Second)
}
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -332,7 +336,7 @@ func parsePath(path string) (root string) {
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
// defer log.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@@ -454,18 +458,18 @@ type listAllFn func(*api.Item) bool
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
const listItemsLimit = 1000
opts := rest.Opts{
Method: "GET",
Path: "/files/" + dirID + "/files",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
Parameters: url.Values{},
Parameters: url.Values{
"page[limit]": {strconv.Itoa(listItemsLimit)},
"page[next]": {"0"},
},
}
opts.Parameters.Set("page[limit]", strconv.Itoa(10))
offset := 0
OUTER:
for {
opts.Parameters.Set("page[offset]", strconv.Itoa(offset))
var result api.ItemList
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
@@ -495,7 +499,15 @@ OUTER:
break OUTER
}
}
offset += 10
if !result.Links.Cursor.HasNext {
break
}
// Fetch the next from the URL in the response
nextURL, err := url.Parse(result.Links.Cursor.Next)
if err != nil {
return found, fmt.Errorf("failed to parse next link as URL: %w", err)
}
opts.Parameters.Set("page[next]", nextURL.Query().Get("page[next]"))
}
return
}
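The loop above is a standard cursor pattern: request a page, consume its items, stop when has_next is false, otherwise feed the returned cursor into the next request. A standalone sketch of that shape against a hypothetical paged API (listPage is made up, not Zoho's):
package main

import "fmt"

type page struct {
	items   []string
	hasNext bool
	next    string
}

// listPage stands in for one API call, paging through a fixed data set.
func listPage(cursor string) page {
	data := map[string]page{
		"0": {items: []string{"a", "b"}, hasNext: true, next: "1"},
		"1": {items: []string{"c"}},
	}
	return data[cursor]
}

func main() {
	cursor := "0"
	for {
		p := listPage(cursor)
		for _, item := range p.items {
			fmt.Println(item)
		}
		if !p.hasNext {
			break
		}
		cursor = p.next
	}
}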
@@ -631,33 +643,6 @@ func (f *Fs) createObject(ctx context.Context, remote string, size int64, modTim
return
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src)
default:
return nil, err
}
}
func isSimpleName(s string) bool {
for _, r := range s {
if (r < 'a' || r > 'z') && (r < 'A' || r > 'Z') && (r != '.') {
return false
}
}
return true
}
func (f *Fs) upload(ctx context.Context, name string, parent string, size int64, in io.Reader, options ...fs.OpenOption) (*api.Item, error) {
params := url.Values{}
params.Set("filename", name)
@@ -693,22 +678,32 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
return nil, errors.New("upload: invalid response")
}
// Received meta data is missing size so we have to read it again.
info, err := f.readMetaDataForID(ctx, uploadResponse.Uploads[0].Attributes.RessourceID)
if err != nil {
return nil, err
// It doesn't always appear on first read so try again if necessary
var info *api.Item
const maxTries = 10
sleepTime := 100 * time.Millisecond
for i := 0; i < maxTries; i++ {
info, err = f.readMetaDataForID(ctx, uploadResponse.Uploads[0].Attributes.RessourceID)
if err != nil {
return nil, err
}
if info.Attributes.StorageInfo.Size != 0 || size == 0 {
break
}
fs.Debugf(f, "Size not available yet for %q - try again in %v (try %d/%d)", name, sleepTime, i+1, maxTries)
time.Sleep(sleepTime)
sleepTime *= 2
}
return info, nil
}
// PutUnchecked the object into the container
//
// This will produce an error if the object already exists.
// Put the object into the container
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
size := src.Size()
remote := src.Remote()
@@ -718,25 +713,12 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err
}
if isSimpleName(leaf) {
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
tempName := "rcloneTemp" + random.String(8)
info, err := f.upload(ctx, tempName, directoryID, size, in, options...)
// Upload the file
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
return nil, err
}
return o, o.(*Object).rename(ctx, leaf)
return f.newObjectWithInfo(ctx, remote, info)
}
// Mkdir creates the container if it doesn't exist
@@ -1200,32 +1182,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
if isSimpleName(leaf) {
// Simple name we can just overwrite the old file
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
return o.setMetaData(info)
}
// We have to fall back to upload + rename
tempName := "rcloneTemp" + random.String(8)
info, err := o.fs.upload(ctx, tempName, directoryID, size, in, options...)
// Overwrite the old file
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
// upload was successful, need to delete old object before rename
if err = o.Remove(ctx); err != nil {
return fmt.Errorf("failed to remove old object: %w", err)
}
if err = o.setMetaData(info); err != nil {
return err
}
// rename also updates metadata
return o.rename(ctx, leaf)
return o.setMetaData(info)
}
// Remove an object

View File

@@ -14,7 +14,6 @@ import (
"os"
"os/exec"
"path"
"regexp"
"runtime"
"runtime/pprof"
"strconv"
@@ -29,11 +28,10 @@ import (
"github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/filter/filterflags"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fspath"
fslog "github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/rc/rcserver"
fssync "github.com/rclone/rclone/fs/sync"
"github.com/rclone/rclone/lib/atexit"
@@ -50,7 +48,6 @@ var (
cpuProfile = flags.StringP("cpuprofile", "", "", "Write cpu profile to file", "Debugging")
memProfile = flags.StringP("memprofile", "", "", "Write memory profile to file", "Debugging")
statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable)", "Logging")
dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes' per second", "Logging")
version bool
// Errors
errorCommandNotFound = errors.New("command not found")
@@ -383,6 +380,12 @@ func StartStats() func() {
// initConfig is run by cobra after initialising the flags
func initConfig() {
// Set the global options from the flags
err := fs.GlobalOptionsInit()
if err != nil {
log.Fatalf("Failed to initialise global options: %v", err)
}
ctx := context.Background()
ci := fs.GetConfig(ctx)
@@ -409,12 +412,6 @@ func initConfig() {
terminal.EnableColorsStdout()
}
// Load filters
err := filterflags.Reload(ctx)
if err != nil {
log.Fatalf("Failed to load filters: %v", err)
}
// Write the args for debug purposes
fs.Debugf("rclone", "Version %q starting with parameters %q", fs.Version, os.Args)
@@ -424,7 +421,7 @@ func initConfig() {
}
// Start the remote control server if configured
_, err = rcserver.Start(context.Background(), &rcflags.Opt)
_, err = rcserver.Start(context.Background(), &rc.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
}
@@ -473,13 +470,6 @@ func initConfig() {
}
})
}
if m, _ := regexp.MatchString("^(bits|bytes)$", *dataRateUnit); !m {
fs.Errorf(nil, "Invalid unit passed to --stats-unit. Defaulting to bytes.")
ci.DataRateUnit = "bytes"
} else {
ci.DataRateUnit = *dataRateUnit
}
}
func resolveExitCode(err error) {
@@ -522,41 +512,12 @@ var backendFlags map[string]struct{}
func AddBackendFlags() {
backendFlags = map[string]struct{}{}
for _, fsInfo := range fs.Registry {
done := map[string]struct{}{}
flags.AddFlagsFromOptions(pflag.CommandLine, fsInfo.Prefix, fsInfo.Options)
// Store the backend flag names for the help generator
for i := range fsInfo.Options {
opt := &fsInfo.Options[i]
// Skip if done already (e.g. with Provider options)
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}
done[opt.Name] = struct{}{}
// Make a flag from each option
name := opt.FlagName(fsInfo.Prefix)
found := pflag.CommandLine.Lookup(name) != nil
if !found {
// Take first line of help only
help := strings.TrimSpace(opt.Help)
if nl := strings.IndexRune(help, '\n'); nl >= 0 {
help = help[:nl]
}
help = strings.TrimRight(strings.TrimSpace(help), ".!?")
if opt.IsPassword {
help += " (obscured)"
}
flag := pflag.CommandLine.VarPF(opt, name, opt.ShortOpt, help)
flags.SetDefaultFromEnv(pflag.CommandLine, name)
if _, isBool := opt.Default.(bool); isBool {
flag.NoOptDefVal = "true"
}
// Hide on the command line if requested
if opt.Hide&fs.OptionHideCommandLine != 0 {
flag.Hidden = true
}
backendFlags[name] = struct{}{}
} else {
fs.Errorf(nil, "Not adding duplicate flag --%s", name)
}
// flag.Hidden = true
backendFlags[name] = struct{}{}
}
}
}

View File

@@ -52,7 +52,7 @@ func findOption(name string, options []string) (found bool) {
func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.Options) (options []string) {
// Options
options = []string{
"-o", fmt.Sprintf("attr_timeout=%g", opt.AttrTimeout.Seconds()),
"-o", fmt.Sprintf("attr_timeout=%g", time.Duration(opt.AttrTimeout).Seconds()),
}
if opt.DebugFUSE {
options = append(options, "-o", "debug")
@@ -79,7 +79,7 @@ func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.
// WinFSP so cmount must work with or without it.
options = append(options, "-o", "atomic_o_trunc")
if opt.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(opt.DaemonTimeout.Seconds())))
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(time.Duration(opt.DaemonTimeout).Seconds())))
}
if opt.AllowOther {
options = append(options, "-o", "allow_other")

View File

@@ -29,8 +29,6 @@ type frontmatter struct {
Date string
Title string
Description string
Slug string
URL string
Source string
Annotations map[string]string
}
@@ -38,8 +36,6 @@ type frontmatter struct {
var frontmatterTemplate = template.Must(template.New("frontmatter").Parse(`---
title: "{{ .Title }}"
description: "{{ .Description }}"
slug: {{ .Slug }}
url: {{ .URL }}
{{- range $key, $value := .Annotations }}
{{ $key }}: {{ $value }}
{{- end }}
@@ -112,10 +108,14 @@ rclone.org website.`,
Date: now,
Title: strings.ReplaceAll(base, "_", " "),
Description: commands[name].Short,
Slug: base,
URL: "/commands/" + strings.ToLower(base) + "/",
Source: strings.ReplaceAll(strings.ReplaceAll(base, "rclone", "cmd"), "_", "/") + "/",
Annotations: commands[name].Annotations,
Annotations: map[string]string{},
}
// Filter out annotations that confuse hugo from the frontmatter
for k, v := range commands[name].Annotations {
if k != "groups" {
data.Annotations[k] = v
}
}
var buf bytes.Buffer
err := frontmatterTemplate.Execute(&buf, data)

View File

@@ -93,6 +93,7 @@ func findFileWithContents(t *testing.T, dir string, wantContents []byte) bool {
}
type e2eTestingContext struct {
t *testing.T
tempDir string
binDir string
homeDir string
@@ -126,7 +127,7 @@ func makeE2eTestingContext(t *testing.T) e2eTestingContext {
require.NoError(t, os.Mkdir(dir, 0700))
}
return e2eTestingContext{tempDir, binDir, homeDir, configDir, rcloneConfigDir, ephemeralRepoDir}
return e2eTestingContext{t, tempDir, binDir, homeDir, configDir, rcloneConfigDir, ephemeralRepoDir}
}
// Install the symlink that enables git-annex to invoke "rclone gitannex"
@@ -154,16 +155,17 @@ func (e *e2eTestingContext) installRcloneConfig(t *testing.T) {
// variable to a subdirectory of the temp directory. It also ensures that the
// git-annex-remote-rclone-builtin symlink will be found by extending the PATH.
func (e *e2eTestingContext) runInRepo(t *testing.T, command string, args ...string) {
fmt.Printf("+ %s %v\n", command, args)
if testing.Verbose() {
t.Logf("Running %s %v\n", command, args)
}
cmd := exec.Command(command, args...)
cmd.Dir = e.ephemeralRepoDir
cmd.Env = []string{
"HOME=" + e.homeDir,
"PATH=" + os.Getenv("PATH") + ":" + e.binDir,
}
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
require.NoError(t, cmd.Run())
buf, err := cmd.CombinedOutput()
require.NoError(t, err, fmt.Sprintf("+ %s %v failed:\n%s\n", command, args, buf))
}
// createGitRepo creates an empty git repository in the ephemeral repo

View File

@@ -30,10 +30,10 @@ var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) (err error) {
defer log.Trace(d, "")("attr=%+v, err=%v", a, &err)
a.Valid = d.fsys.opt.AttrTimeout
a.Valid = time.Duration(d.fsys.opt.AttrTimeout)
a.Gid = d.VFS().Opt.GID
a.Uid = d.VFS().Opt.UID
a.Mode = os.ModeDir | d.VFS().Opt.DirPerms
a.Mode = os.ModeDir | os.FileMode(d.VFS().Opt.DirPerms)
modTime := d.ModTime()
a.Atime = modTime
a.Mtime = modTime
@@ -77,7 +77,7 @@ func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.Lo
if err != nil {
return nil, translateError(err)
}
resp.EntryValid = d.fsys.opt.AttrTimeout
resp.EntryValid = time.Duration(d.fsys.opt.AttrTimeout)
// Check the mnode to see if it has a fuse Node cached
// We must return the same fuse nodes for vfs Nodes
node, ok := mnode.Sys().(fusefs.Node)

View File

@@ -4,6 +4,7 @@ package mount
import (
"context"
"os"
"syscall"
"time"
@@ -25,13 +26,13 @@ var _ fusefs.Node = (*File)(nil)
// Attr fills out the attributes for the file
func (f *File) Attr(ctx context.Context, a *fuse.Attr) (err error) {
defer log.Trace(f, "")("a=%+v, err=%v", a, &err)
a.Valid = f.fsys.opt.AttrTimeout
a.Valid = time.Duration(f.fsys.opt.AttrTimeout)
modTime := f.File.ModTime()
Size := uint64(f.File.Size())
Blocks := (Size + 511) / 512
a.Gid = f.VFS().Opt.GID
a.Uid = f.VFS().Opt.UID
a.Mode = f.VFS().Opt.FilePerms
a.Mode = os.FileMode(f.VFS().Opt.FilePerms)
a.Size = Size
a.Atime = modTime
a.Mtime = modTime

View File

@@ -6,6 +6,7 @@ package mount
import (
"fmt"
"runtime"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
@@ -50,7 +51,7 @@ func mountOptions(VFS *vfs.VFS, device string, opt *mountlib.Options) (options [
options = append(options, fuse.WritebackCache())
}
if opt.DaemonTimeout != 0 {
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(opt.DaemonTimeout.Seconds()))))
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(time.Duration(opt.DaemonTimeout).Seconds()))))
}
if len(opt.ExtraOptions) > 0 {
fs.Errorf(nil, "-o/--option not supported with this FUSE backend")

View File

@@ -7,6 +7,7 @@ package mount2
import (
"os"
"syscall"
"time"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/rclone/rclone/cmd/mountlib"
@@ -88,14 +89,14 @@ func setAttr(node vfs.Node, attr *fuse.Attr) {
// fill in AttrOut from node
func (f *FS) setAttrOut(node vfs.Node, out *fuse.AttrOut) {
setAttr(node, &out.Attr)
out.SetTimeout(f.opt.AttrTimeout)
out.SetTimeout(time.Duration(f.opt.AttrTimeout))
}
// fill in EntryOut from node
func (f *FS) setEntryOut(node vfs.Node, out *fuse.EntryOut) {
setAttr(node, &out.Attr)
out.SetEntryTimeout(f.opt.AttrTimeout)
out.SetAttrTimeout(f.opt.AttrTimeout)
out.SetEntryTimeout(time.Duration(f.opt.AttrTimeout))
out.SetAttrTimeout(time.Duration(f.opt.AttrTimeout))
}
// Translate errors from mountlib into Syscall error numbers

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"log"
"runtime"
"time"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
@@ -215,8 +216,8 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
// FIXME fill out
opts := fusefs.Options{
MountOptions: *mountOpts,
EntryTimeout: &opt.AttrTimeout,
AttrTimeout: &opt.AttrTimeout,
EntryTimeout: (*time.Duration)(&opt.AttrTimeout),
AttrTimeout: (*time.Duration)(&opt.AttrTimeout),
GID: VFS.Opt.GID,
UID: VFS.Opt.UID,
}
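These mount hunks all apply one conversion pattern: the config-friendly fs.Duration is turned back into time.Duration at the point of use, by value or, as here, by pointer cast. A minimal sketch, assuming fs.Duration is declared with time.Duration as its underlying type (as in rclone's fs package):
package main

import (
	"fmt"
	"time"
)

// Duration stands in for rclone's fs.Duration: a time.Duration
// underlying type with custom config (de)serialization.
type Duration time.Duration

func main() {
	attrTimeout := Duration(1 * time.Second)
	// By value, for APIs that want a time.Duration:
	fmt.Println(time.Duration(attrTimeout).Seconds()) // 1
	// By pointer, as in the EntryTimeout/AttrTimeout fields above:
	p := (*time.Duration)(&attrTimeout)
	fmt.Println(*p) // 1s
}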

View File

@@ -16,14 +16,13 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/daemonize"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/coreos/go-systemd/v22/daemon"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
@@ -36,38 +35,161 @@ func help(commandName string) string {
return strings.TrimSpace(strings.ReplaceAll(mountHelp, "@", commandName)) + "\n\n"
}
// Options for creating the mount
type Options struct {
DebugFUSE bool
AllowNonEmpty bool
AllowRoot bool
AllowOther bool
DefaultPermissions bool
WritebackCache bool
Daemon bool
DaemonWait time.Duration // time to wait for ready mount from daemon, maximum on Linux or constant on macOS/BSD
MaxReadAhead fs.SizeSuffix
ExtraOptions []string
ExtraFlags []string
AttrTimeout time.Duration // how long the kernel caches attribute for
DeviceName string
VolumeName string
NoAppleDouble bool
NoAppleXattr bool
DaemonTimeout time.Duration // OSXFUSE only
AsyncRead bool
NetworkMode bool // Windows only
DirectIO bool // use Direct IO for file access
CaseInsensitive fs.Tristate
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "debug_fuse",
Default: false,
Help: "Debug the FUSE internals - needs -v",
Groups: "Mount",
}, {
Name: "attr_timeout",
Default: fs.Duration(1 * time.Second),
Help: "Time for which file/directory attributes are cached",
Groups: "Mount",
}, {
Name: "option",
Default: []string{},
Help: "Option for libfuse/WinFsp (repeat if required)",
Groups: "Mount",
ShortOpt: "o",
}, {
Name: "fuse_flag",
Default: []string{},
Help: "Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)",
Groups: "Mount",
}, {
Name: "daemon",
Default: false,
Help: "Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)",
Groups: "Mount",
}, {
Name: "daemon_timeout",
Default: func() fs.Duration {
if runtime.GOOS == "darwin" {
// DaemonTimeout defaults to non-zero for macOS
// (this is a macOS specific kernel option unrelated to DaemonWait)
return fs.Duration(10 * time.Minute)
}
return 0
}(),
Help: "Time limit for rclone to respond to kernel (not supported on Windows)",
Groups: "Mount",
}, {
Name: "default_permissions",
Default: false,
Help: "Makes kernel enforce access control based on the file mode (not supported on Windows)",
Groups: "Mount",
}, {
Name: "allow_non_empty",
Default: false,
Help: "Allow mounting over a non-empty directory (not supported on Windows)",
Groups: "Mount",
}, {
Name: "allow_root",
Default: false,
Help: "Allow access to root user (not supported on Windows)",
Groups: "Mount",
}, {
Name: "allow_other",
Default: false,
Help: "Allow access to other users (not supported on Windows)",
Groups: "Mount",
}, {
Name: "async_read",
Default: true,
Help: "Use asynchronous reads (not supported on Windows)",
Groups: "Mount",
}, {
Name: "max_read_ahead",
Default: fs.SizeSuffix(128 * 1024),
Help: "The number of bytes that can be prefetched for sequential reads (not supported on Windows)",
Groups: "Mount",
}, {
Name: "write_back_cache",
Default: false,
Help: "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)",
Groups: "Mount",
}, {
Name: "devname",
Default: "",
Help: "Set the device name - default is remote:path",
Groups: "Mount",
}, {
Name: "mount_case_insensitive",
Default: fs.Tristate{},
Help: "Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto)",
Groups: "Mount",
}, {
Name: "direct_io",
Default: false,
Help: "Use Direct IO, disables caching of data",
Groups: "Mount",
}, {
Name: "volname",
Default: "",
Help: "Set the volume name (supported on Windows and OSX only)",
Groups: "Mount",
}, {
Name: "noappledouble",
Default: true,
Help: "Ignore Apple Double (._) and .DS_Store files (supported on OSX only)",
Groups: "Mount",
}, {
Name: "noapplexattr",
Default: false,
Help: "Ignore all \"com.apple.*\" extended attributes (supported on OSX only)",
Groups: "Mount",
}, {
Name: "network_mode",
Default: false,
Help: "Mount as remote network drive, instead of fixed disk drive (supported on Windows only)",
Groups: "Mount",
}, {
Name: "daemon_wait",
Default: func() fs.Duration {
switch runtime.GOOS {
case "linux":
// Linux provides /proc/mounts to check mount status
// so --daemon-wait means *maximum* time to wait
return fs.Duration(60 * time.Second)
case "darwin", "openbsd", "freebsd", "netbsd":
// On BSD we can't check mount status yet
// so --daemon-wait is just a *constant* delay
return fs.Duration(5 * time.Second)
}
return 0
}(),
Help: "Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows)",
Groups: "Mount",
}}
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "mount", Opt: &Opt, Options: OptionsInfo})
}
// DefaultOpt is the default values for creating the mount
var DefaultOpt = Options{
MaxReadAhead: 128 * 1024,
AttrTimeout: 1 * time.Second, // how long the kernel caches attribute for
NoAppleDouble: true, // use noappledouble by default
NoAppleXattr: false, // do not use noapplexattr by default
AsyncRead: true, // do async reads by default
// Options for creating the mount
type Options struct {
DebugFUSE bool `config:"debug_fuse"`
AllowNonEmpty bool `config:"allow_non_empty"`
AllowRoot bool `config:"allow_root"`
AllowOther bool `config:"allow_other"`
DefaultPermissions bool `config:"default_permissions"`
WritebackCache bool `config:"write_back_cache"`
Daemon bool `config:"daemon"`
DaemonWait fs.Duration `config:"daemon_wait"` // time to wait for ready mount from daemon, maximum on Linux or constant on macOS/BSD
MaxReadAhead fs.SizeSuffix `config:"max_read_ahead"`
ExtraOptions []string `config:"option"`
ExtraFlags []string `config:"fuse_flag"`
AttrTimeout fs.Duration `config:"attr_timeout"` // how long the kernel caches attribute for
DeviceName string `config:"devname"`
VolumeName string `config:"volname"`
NoAppleDouble bool `config:"noappledouble"`
NoAppleXattr bool `config:"noapplexattr"`
DaemonTimeout fs.Duration `config:"daemon_timeout"` // OSXFUSE only
AsyncRead bool `config:"async_read"`
NetworkMode bool `config:"network_mode"` // Windows only
DirectIO bool `config:"direct_io"` // use Direct IO for file access
CaseInsensitive fs.Tristate `config:"mount_case_insensitive"`
}
type (
@@ -106,61 +228,12 @@ const (
MaxLeafSize = 1024 // don't pass file names longer than this
)
func init() {
switch runtime.GOOS {
case "darwin":
// DaemonTimeout defaults to non-zero for macOS
// (this is a macOS specific kernel option unrelated to DaemonWait)
DefaultOpt.DaemonTimeout = 10 * time.Minute
}
switch runtime.GOOS {
case "linux":
// Linux provides /proc/mounts to check mount status
// so --daemon-wait means *maximum* time to wait
DefaultOpt.DaemonWait = 60 * time.Second
case "darwin", "openbsd", "freebsd", "netbsd":
// On BSD we can't check mount status yet
// so --daemon-wait is just a *constant* delay
DefaultOpt.DaemonWait = 5 * time.Second
}
// Opt must be assigned in the init block to ensure changes really get in
Opt = DefaultOpt
}
// Opt contains options set by command line flags
var Opt Options
// AddFlags adds the non filing system specific flags to the command
func AddFlags(flagSet *pflag.FlagSet) {
rc.AddOption("mount", &Opt)
flags.BoolVarP(flagSet, &Opt.DebugFUSE, "debug-fuse", "", Opt.DebugFUSE, "Debug the FUSE internals - needs -v", "Mount")
flags.DurationVarP(flagSet, &Opt.AttrTimeout, "attr-timeout", "", Opt.AttrTimeout, "Time for which file/directory attributes are cached", "Mount")
flags.StringArrayVarP(flagSet, &Opt.ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp (repeat if required)", "Mount")
flags.StringArrayVarP(flagSet, &Opt.ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)", "Mount")
// Non-Windows only
flags.BoolVarP(flagSet, &Opt.Daemon, "daemon", "", Opt.Daemon, "Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)", "Mount")
flags.DurationVarP(flagSet, &Opt.DaemonTimeout, "daemon-timeout", "", Opt.DaemonTimeout, "Time limit for rclone to respond to kernel (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.DefaultPermissions, "default-permissions", "", Opt.DefaultPermissions, "Makes kernel enforce access control based on the file mode (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowNonEmpty, "allow-non-empty", "", Opt.AllowNonEmpty, "Allow mounting over a non-empty directory (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowRoot, "allow-root", "", Opt.AllowRoot, "Allow access to root user (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AllowOther, "allow-other", "", Opt.AllowOther, "Allow access to other users (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads (not supported on Windows)", "Mount")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads (not supported on Windows)", "Mount")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)", "Mount")
flags.StringVarP(flagSet, &Opt.DeviceName, "devname", "", Opt.DeviceName, "Set the device name - default is remote:path", "Mount")
flags.FVarP(flagSet, &Opt.CaseInsensitive, "mount-case-insensitive", "", "Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto)", "Mount")
flags.BoolVarP(flagSet, &Opt.DirectIO, "direct-io", "", Opt.DirectIO, "Use Direct IO, disables caching of data", "Mount")
// Windows and OSX
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (supported on Windows and OSX only)", "Mount")
// OSX only
flags.BoolVarP(flagSet, &Opt.NoAppleDouble, "noappledouble", "", Opt.NoAppleDouble, "Ignore Apple Double (._) and .DS_Store files (supported on OSX only)", "Mount")
flags.BoolVarP(flagSet, &Opt.NoAppleXattr, "noapplexattr", "", Opt.NoAppleXattr, "Ignore all \"com.apple.*\" extended attributes (supported on OSX only)", "Mount")
// Windows only
flags.BoolVarP(flagSet, &Opt.NetworkMode, "network-mode", "", Opt.NetworkMode, "Mount as remote network drive, instead of fixed disk drive (supported on Windows only)", "Mount")
// Unix only
flags.DurationVarP(flagSet, &Opt.DaemonWait, "daemon-wait", "", Opt.DaemonWait, "Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows)", "Mount")
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
const (
@@ -228,12 +301,13 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
defer cmd.StartStats()()
}
mnt := NewMountPoint(mount, args[1], cmd.NewFsDir(args), &Opt, &vfsflags.Opt)
mnt := NewMountPoint(mount, args[1], cmd.NewFsDir(args), &Opt, &vfscommon.Opt)
mountDaemon, err := mnt.Mount()
// Wait for foreground mount, if any...
if mountDaemon == nil {
if err == nil {
defer systemd.Notify()()
err = mnt.Wait()
}
if err != nil {
@@ -258,7 +332,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
handle := atexit.Register(func() {
killDaemon("Got interrupt")
})
err = WaitMountReady(mnt.MountPoint, Opt.DaemonWait, mountDaemon)
err = WaitMountReady(mnt.MountPoint, time.Duration(Opt.DaemonWait), mountDaemon)
if err != nil {
killDaemon("Daemon timed out")
}
@@ -312,7 +386,6 @@ func (m *MountPoint) Wait() error {
var finaliseOnce sync.Once
finalise := func() {
finaliseOnce.Do(func() {
_, _ = daemon.SdNotify(false, daemon.SdNotifyStopping)
// Unmount only if directory was mounted by rclone, e.g. don't unmount autofs hooks.
if err := CheckMountReady(m.MountPoint); err != nil {
fs.Debugf(m.MountPoint, "Unmounted externally. Just exit now.")
@@ -328,11 +401,6 @@ func (m *MountPoint) Wait() error {
fnHandle := atexit.Register(finalise)
defer atexit.Unregister(fnHandle)
// Notify systemd
if _, err := daemon.SdNotify(false, daemon.SdNotifyReady); err != nil {
return fmt.Errorf("failed to notify systemd: %w", err)
}
// Reload VFS cache on SIGHUP
sigHup := make(chan os.Signal, 1)
NotifyOnSigHup(sigHup)

View File

@@ -10,7 +10,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
)
var (
@@ -85,7 +85,7 @@ func mountRc(ctx context.Context, in rc.Params) (out rc.Params, err error) {
return nil, err
}
vfsOpt := vfsflags.Opt
vfsOpt := vfscommon.Opt
err = in.GetStructMissingOK("vfsOpt", &vfsOpt)
if err != nil {
return nil, err

View File

@@ -65,7 +65,7 @@ These flags have the following meaning:
This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment
but is useful as it stands.
but is useful as it stands. Unlike ncdu it does not show excluded files.
Note that it might take some time to delete big files/directories. The
UI won't respond in the meantime since the deletion is done synchronously.

View File

@@ -6,6 +6,7 @@ import (
"log"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
libhttp "github.com/rclone/rclone/lib/http"
@@ -37,17 +38,17 @@ See the [rc documentation](/rc/) for more info on the rc flags.
},
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 1, command, args)
if rcflags.Opt.Enabled {
if rc.Opt.Enabled {
log.Fatalf("Don't supply --rc flag when using rcd")
}
// Start the rc
rcflags.Opt.Enabled = true
rc.Opt.Enabled = true
if len(args) > 0 {
rcflags.Opt.Files = args[0]
rc.Opt.Files = args[0]
}
s, err := rcserver.Start(context.Background(), &rcflags.Opt)
s, err := rcserver.Start(context.Background(), &rc.Opt)
if err != nil {
log.Fatalf("Failed to start remote control: %v", err)
}

View File

@@ -1,5 +1,3 @@
//go:build go1.21
package dlna
import (

View File

@@ -1,5 +1,3 @@
//go:build go1.21
package dlna
import (

View File

@@ -1,5 +1,3 @@
//go:build go1.21
// Package dlna provides DLNA server.
package dlna
@@ -26,6 +24,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
)
@@ -127,14 +126,14 @@ func newServer(f fs.Fs, opt *dlnaflags.Options) (*server, error) {
}
s := &server{
AnnounceInterval: opt.AnnounceInterval,
AnnounceInterval: time.Duration(opt.AnnounceInterval),
FriendlyName: friendlyName,
RootDeviceUUID: makeDeviceUUID(friendlyName),
Interfaces: interfaces,
waitChan: make(chan struct{}),
httpListenAddr: opt.ListenAddr,
f: f,
vfs: vfs.New(f, &vfsflags.Opt),
vfs: vfs.New(f, &vfscommon.Opt),
}
s.services = map[string]UPnPService{

View File

@@ -1,5 +1,3 @@
//go:build go1.21
package dlna
import (
@@ -35,7 +33,7 @@ const (
)
func startServer(t *testing.T, f fs.Fs) {
opt := dlnaflags.DefaultOpt
opt := dlnaflags.Opt
opt.ListenAddr = testBindAddress
var err error
dlnaServer, err = newServer(f, &opt)

View File

@@ -1,9 +0,0 @@
//go:build !go1.21
// Package dlna is unsupported on this platform
package dlna
import "github.com/spf13/cobra"
// Command definition is nil to show not implemented
var Command *cobra.Command

View File

@@ -1,5 +1,3 @@
//go:build go1.21
package dlna
import (

View File

@@ -4,8 +4,8 @@ package dlnaflags
import (
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/rc"
"github.com/spf13/pflag"
)
@@ -24,39 +24,46 @@ logging of all UPNP traffic.
`
// Options is the type for DLNA serving options.
type Options struct {
ListenAddr string
FriendlyName string
LogTrace bool
InterfaceNames []string
AnnounceInterval time.Duration
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "addr",
Default: ":7879",
Help: "The ip:port or :port to bind the DLNA http server to",
}, {
Name: "name",
Default: "",
Help: "Name of DLNA server",
}, {
Name: "log_trace",
Default: false,
Help: "Enable trace logging of SOAP traffic",
}, {
Name: "interface",
Default: []string{},
Help: "The interface to use for SSDP (repeat as necessary)",
}, {
Name: "announce_interval",
Default: fs.Duration(12 * time.Minute),
Help: "The interval between SSDP announcements",
}}
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "dlna", Opt: &Opt, Options: OptionsInfo})
}
// DefaultOpt contains the defaults options for DLNA serving.
var DefaultOpt = Options{
ListenAddr: ":7879",
FriendlyName: "",
LogTrace: false,
InterfaceNames: []string{},
AnnounceInterval: 12 * time.Minute,
// Options is the type for DLNA serving options.
type Options struct {
ListenAddr string `config:"addr"`
FriendlyName string `config:"name"`
LogTrace bool `config:"log_trace"`
InterfaceNames []string `config:"interface"`
AnnounceInterval fs.Duration `config:"announce_interval"`
}
// Opt contains the options for DLNA serving.
var (
Opt = DefaultOpt
)
func addFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *Options) {
rc.AddOption("dlna", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, prefix+"addr", "", Opt.ListenAddr, "The ip:port or :port to bind the DLNA http server to", prefix)
flags.StringVarP(flagSet, &Opt.FriendlyName, prefix+"name", "", Opt.FriendlyName, "Name of DLNA server", prefix)
flags.BoolVarP(flagSet, &Opt.LogTrace, prefix+"log-trace", "", Opt.LogTrace, "Enable trace logging of SOAP traffic", prefix)
flags.StringArrayVarP(flagSet, &Opt.InterfaceNames, prefix+"interface", "", Opt.InterfaceNames, "The interface to use for SSDP (repeat as necessary)", prefix)
flags.DurationVarP(flagSet, &Opt.AnnounceInterval, prefix+"announce-interval", "", Opt.AnnounceInterval, "The interval between SSDP announcements", prefix)
}
var Opt Options
// AddFlags add the command line flags for DLNA serving.
func AddFlags(flagSet *pflag.FlagSet) {
addFlagsPrefix(flagSet, "", &Opt)
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
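For reference, the refactoring pattern used in this and the following hunks, reduced to a minimal sketch (the option name and default below are illustrative, not taken verbatim from the tree):

// Options holds the settings; the config tags name the keys.
type Options struct {
	ListenAddr string `config:"addr"`
}

// OptionsInfo describes the options for flags, config files and the rc.
var OptionsInfo = fs.Options{{
	Name:    "addr",
	Default: ":7879",
	Help:    "The ip:port or :port to bind the server to",
}}

// Opt is filled in from config, environment and command line flags.
var Opt Options

func init() {
	// Register Opt under its own section of the global config.
	fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "dlna", Opt: &Opt, Options: OptionsInfo})
}

// AddFlags derives the command line flags from OptionsInfo
// instead of declaring each flag by hand.
func AddFlags(flagSet *pflag.FlagSet) {
	flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}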

View File

@@ -1,5 +1,3 @@
//go:build go1.21
package dlna
import (

View File

@@ -19,7 +19,6 @@ import (
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
)
// Driver implements docker driver api
@@ -55,7 +54,7 @@ func NewDriver(ctx context.Context, root string, mntOpt *mountlib.Options, vfsOp
mntOpt = &mountlib.Opt
}
if vfsOpt == nil {
vfsOpt = &vfsflags.Opt
vfsOpt = &vfscommon.Opt
}
drv := &Driver{
root: root,

View File

@@ -2,7 +2,6 @@ package docker
import (
"fmt"
"strconv"
"strings"
"github.com/rclone/rclone/cmd/mountlib"
@@ -11,7 +10,6 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/pflag"
)
@@ -88,7 +86,7 @@ func (vol *Volume) applyOptions(volOpt VolOpts) error {
fsType = "local"
if fsName != "" {
var ok bool
fsType, ok = fs.ConfigMap(nil, fsName, nil).Get("type")
fsType, ok = fs.ConfigMap("", nil, fsName, nil).Get("type")
if !ok {
return fs.ErrorNotFoundInConfigFile
}
@@ -185,7 +183,7 @@ func getMountOption(mntOpt *mountlib.Options, opt rc.Params, key string) (ok boo
case "debug-fuse":
mntOpt.DebugFUSE, err = opt.GetBool(key)
case "attr-timeout":
mntOpt.AttrTimeout, err = opt.GetDuration(key)
mntOpt.AttrTimeout, err = opt.GetFsDuration(key)
case "option":
mntOpt.ExtraOptions, err = getStringArray(opt, key)
case "fuse-flag":
@@ -193,7 +191,7 @@ func getMountOption(mntOpt *mountlib.Options, opt rc.Params, key string) (ok boo
case "daemon":
mntOpt.Daemon, err = opt.GetBool(key)
case "daemon-timeout":
mntOpt.DaemonTimeout, err = opt.GetDuration(key)
mntOpt.DaemonTimeout, err = opt.GetFsDuration(key)
case "default-permissions":
mntOpt.DefaultPermissions, err = opt.GetBool(key)
case "allow-non-empty":
@@ -231,9 +229,9 @@ func getVFSOption(vfsOpt *vfscommon.Options, opt rc.Params, key string) (ok bool
case "vfs-cache-mode":
err = getFVarP(&vfsOpt.CacheMode, opt, key)
case "vfs-cache-poll-interval":
vfsOpt.CachePollInterval, err = opt.GetDuration(key)
vfsOpt.CachePollInterval, err = opt.GetFsDuration(key)
case "vfs-cache-max-age":
vfsOpt.CacheMaxAge, err = opt.GetDuration(key)
vfsOpt.CacheMaxAge, err = opt.GetFsDuration(key)
case "vfs-cache-max-size":
err = getFVarP(&vfsOpt.CacheMaxSize, opt, key)
case "vfs-read-chunk-size":
@@ -243,11 +241,11 @@ func getVFSOption(vfsOpt *vfscommon.Options, opt rc.Params, key string) (ok bool
case "vfs-case-insensitive":
vfsOpt.CaseInsensitive, err = opt.GetBool(key)
case "vfs-write-wait":
vfsOpt.WriteWait, err = opt.GetDuration(key)
vfsOpt.WriteWait, err = opt.GetFsDuration(key)
case "vfs-read-wait":
vfsOpt.ReadWait, err = opt.GetDuration(key)
vfsOpt.ReadWait, err = opt.GetFsDuration(key)
case "vfs-write-back":
vfsOpt.WriteBack, err = opt.GetDuration(key)
vfsOpt.WriteBack, err = opt.GetFsDuration(key)
case "vfs-read-ahead":
err = getFVarP(&vfsOpt.ReadAhead, opt, key)
case "vfs-used-is-size":
@@ -259,28 +257,19 @@ func getVFSOption(vfsOpt *vfscommon.Options, opt rc.Params, key string) (ok bool
case "no-checksum":
vfsOpt.NoChecksum, err = opt.GetBool(key)
case "dir-cache-time":
vfsOpt.DirCacheTime, err = opt.GetDuration(key)
vfsOpt.DirCacheTime, err = opt.GetFsDuration(key)
case "poll-interval":
vfsOpt.PollInterval, err = opt.GetDuration(key)
vfsOpt.PollInterval, err = opt.GetFsDuration(key)
case "read-only":
vfsOpt.ReadOnly, err = opt.GetBool(key)
case "dir-perms":
perms := &vfsflags.FileMode{Mode: &vfsOpt.DirPerms}
err = getFVarP(perms, opt, key)
err = getFVarP(&vfsOpt.DirPerms, opt, key)
case "file-perms":
perms := &vfsflags.FileMode{Mode: &vfsOpt.FilePerms}
err = getFVarP(perms, opt, key)
err = getFVarP(&vfsOpt.FilePerms, opt, key)
// unprefixed unix-only vfs options
case "umask":
// GetInt64 doesn't support the `0octal` umask syntax - parse locally
var strVal string
if strVal, err = opt.GetString(key); err == nil {
var longVal int64
if longVal, err = strconv.ParseInt(strVal, 0, 0); err == nil {
vfsOpt.Umask = int(longVal)
}
}
err = getFVarP(&vfsOpt.Umask, opt, key)
case "uid":
intVal, err = opt.GetInt64(key)
vfsOpt.UID = uint32(intVal)

View File

@@ -25,48 +25,63 @@ import (
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
ftp "goftp.io/server/v2"
)
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "addr",
Default: "localhost:2121",
Help: "IPaddress:Port or :Port to bind server to",
}, {
Name: "public_ip",
Default: "",
Help: "Public IP address to advertise for passive connections",
}, {
Name: "passive_port",
Default: "30000-32000",
Help: "Passive port range to use",
}, {
Name: "user",
Default: "anonymous",
Help: "User name for authentication",
}, {
Name: "pass",
Default: "",
Help: "Password for authentication (empty value allow every password)",
}, {
Name: "cert",
Default: "",
Help: "TLS PEM key (concatenation of certificate and CA certificate)",
}, {
Name: "key",
Default: "",
Help: "TLS PEM Private key",
}}
// Options contains options for the http Server
type Options struct {
//TODO add more options
ListenAddr string // Port to listen on
PublicIP string // Passive ports range
PassivePorts string // Passive ports range
BasicUser string // single username for basic auth if not using Htpasswd
BasicPass string // password for BasicUser
TLSCert string // TLS PEM key (concatenation of certificate and CA certificate)
TLSKey string // TLS PEM Private key
}
// DefaultOpt is the default values used for Options
var DefaultOpt = Options{
ListenAddr: "localhost:2121",
PublicIP: "",
PassivePorts: "30000-32000",
BasicUser: "anonymous",
BasicPass: "",
ListenAddr string `config:"addr"` // Port to listen on
PublicIP string `config:"public_ip"` // Passive ports range
PassivePorts string `config:"passive_port"` // Passive ports range
BasicUser string `config:"user"` // single username for basic auth if not using Htpasswd
BasicPass string `config:"pass"` // password for BasicUser
TLSCert string `config:"cert"` // TLS PEM key (concatenation of certificate and CA certificate)
TLSKey string `config:"key"` // TLS PEM Private key
}
// Opt is options set by command line flags
var Opt = DefaultOpt
var Opt Options
// AddFlags adds flags for ftp
func AddFlags(flagSet *pflag.FlagSet) {
rc.AddOption("ftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to", "")
flags.StringVarP(flagSet, &Opt.PublicIP, "public-ip", "", Opt.PublicIP, "Public IP address to advertise for passive connections", "")
flags.StringVarP(flagSet, &Opt.PassivePorts, "passive-port", "", Opt.PassivePorts, "Passive port range to use", "")
flags.StringVarP(flagSet, &Opt.BasicUser, "user", "", Opt.BasicUser, "User name for authentication", "")
flags.StringVarP(flagSet, &Opt.BasicPass, "pass", "", Opt.BasicPass, "Password for authentication (empty value allow every password)", "")
flags.StringVarP(flagSet, &Opt.TLSCert, "cert", "", Opt.TLSCert, "TLS PEM key (concatenation of certificate and CA certificate)", "")
flags.StringVarP(flagSet, &Opt.TLSKey, "key", "", Opt.TLSKey, "TLS PEM Private key", "")
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
func init() {
@@ -135,17 +150,21 @@ type driver struct {
userPass map[string]string // cache of username => password when using vfs proxy
}
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "ftp", Opt: &Opt, Options: OptionsInfo})
}
var passivePortsRe = regexp.MustCompile(`^\s*\d+\s*-\s*\d+\s*$`)
// Make a new FTP to serve the remote
func newServer(ctx context.Context, f fs.Fs, opt *Options) (*driver, error) {
host, port, err := net.SplitHostPort(opt.ListenAddr)
if err != nil {
return nil, errors.New("failed to parse host:port")
return nil, fmt.Errorf("failed to parse host:port from %q", opt.ListenAddr)
}
portNum, err := strconv.Atoi(port)
if err != nil {
return nil, errors.New("failed to parse host:port")
return nil, fmt.Errorf("failed to parse port number from %q", port)
}
d := &driver{
@@ -157,7 +176,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options) (*driver, error) {
d.proxy = proxy.New(ctx, &proxyflags.Opt)
d.userPass = make(map[string]string, 16)
} else {
d.globalVFS = vfs.New(f, &vfsflags.Opt)
d.globalVFS = vfs.New(f, &vfscommon.Opt)
}
d.useTLS = d.opt.TLSKey != ""

View File

@@ -33,7 +33,7 @@ const (
func TestFTP(t *testing.T) {
// Configure and start the server
start := func(f fs.Fs) (configmap.Simple, func()) {
opt := DefaultOpt
opt := Opt
opt.ListenAddr = testHOST + ":" + testPORT
opt.PassivePorts = testPASSIVEPORTRANGE
opt.BasicUser = testUSER

View File

@@ -24,6 +24,7 @@ import (
"github.com/rclone/rclone/lib/http/serve"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
)
@@ -148,7 +149,7 @@ func run(ctx context.Context, f fs.Fs, opt Options) (s *HTTP, err error) {
// override auth
s.opt.Auth.CustomAuthFn = s.auth
} else {
s._vfs = vfs.New(f, &vfsflags.Opt)
s._vfs = vfs.New(f, &vfscommon.Opt)
}
s.server, err = libhttp.NewServer(ctx,
@@ -215,7 +216,7 @@ func (s *HTTP) serveDir(w http.ResponseWriter, r *http.Request, dirRemote string
// Make the entries for display
directory := serve.NewDirectory(dirRemote, s.server.HTMLTemplate())
for _, node := range dirEntries {
if vfsflags.Opt.NoModTime {
if vfscommon.Opt.NoModTime {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{})
} else {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), node.ModTime().UTC())

View File

@@ -0,0 +1,140 @@
// Implements an nbd.Backend for serving from a chunked file in the VFS.
package nbd
import (
"errors"
"fmt"
"github.com/rclone/gonbdserver/nbd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs/chunked"
"golang.org/x/net/context"
)
// Backend for a single chunked file
type chunkedBackend struct {
file *chunked.File
ec *nbd.ExportConfig
}
// chunkedBackendFactory creates Backends for a single chunked file
type chunkedBackendFactory struct {
s *NBD
file *chunked.File
}
// WriteAt implements Backend.WriteAt
func (cb *chunkedBackend) WriteAt(ctx context.Context, b []byte, offset int64, fua bool) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", len(b), offset)("n=%d, err=%v", &n, &err)
n, err = cb.file.WriteAt(b, offset)
if err != nil || !fua {
return n, err
}
err = cb.file.Sync()
if err != nil {
return 0, err
}
return n, err
}
// ReadAt implements Backend.ReadAt
func (cb *chunkedBackend) ReadAt(ctx context.Context, b []byte, offset int64) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", len(b), offset)("n=%d, err=%v", &n, &err)
return cb.file.ReadAt(b, offset)
}
// TrimAt implements Backend.TrimAt
func (cb *chunkedBackend) TrimAt(ctx context.Context, length int, offset int64) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", length, offset)("n=%d, err=%v", &n, &err)
return length, nil
}
// Flush implements Backend.Flush
func (cb *chunkedBackend) Flush(ctx context.Context) (err error) {
defer log.Trace(logPrefix, "")("err=%v", &err)
return nil
}
// Close implements Backend.Close
func (cb *chunkedBackend) Close(ctx context.Context) (err error) {
defer log.Trace(logPrefix, "")("err=%v", &err)
err = cb.file.Close()
return err
}
// Geometry implements Backend.Geometry
func (cb *chunkedBackend) Geometry(ctx context.Context) (size uint64, minBS uint64, prefBS uint64, maxBS uint64, err error) {
defer log.Trace(logPrefix, "")("size=%d, minBS=%d, prefBS=%d, maxBS=%d, err=%v", &size, &minBS, &prefBS, &maxBS, &err)
size = uint64(cb.file.Size())
minBS = cb.ec.MinimumBlockSize
prefBS = cb.ec.PreferredBlockSize
maxBS = cb.ec.MaximumBlockSize
err = nil
return
}
// HasFua implements Backend.HasFua
func (cb *chunkedBackend) HasFua(ctx context.Context) (fua bool) {
defer log.Trace(logPrefix, "")("fua=%v", &fua)
return true
}
// HasFlush implements Backend.HasFlush
func (cb *chunkedBackend) HasFlush(ctx context.Context) (flush bool) {
defer log.Trace(logPrefix, "")("flush=%v", &flush)
return true
}
// newBackend generates a new chunked backend
func (cbf *chunkedBackendFactory) newBackend(ctx context.Context, ec *nbd.ExportConfig) (nbd.Backend, error) {
err := cbf.file.Open(false, 0)
if err != nil {
return nil, fmt.Errorf("failed to open chunked file: %w", err)
}
cb := &chunkedBackend{
file: cbf.file,
ec: ec,
}
return cb, nil
}
// Generate a chunked backend factory
func (s *NBD) newChunkedBackendFactory(ctx context.Context) (bf backendFactory, err error) {
create := s.opt.Create > 0
if s.vfs.Opt.ReadOnly && create {
return nil, errors.New("can't create files with --read-only")
}
file := chunked.New(s.vfs, s.leaf)
err = file.Open(create, s.log2ChunkSize)
if err != nil {
return nil, fmt.Errorf("failed to open chunked file: %w", err)
}
defer fs.CheckClose(file, &err)
var truncateSize fs.SizeSuffix
if create {
if file.Size() == 0 {
truncateSize = s.opt.Create
}
} else {
truncateSize = s.opt.Resize
}
if truncateSize > 0 {
err = file.Truncate(int64(truncateSize))
if err != nil {
return nil, fmt.Errorf("failed to create chunked file: %w", err)
}
fs.Logf(logPrefix, "Size of network block device is now %v", truncateSize)
}
return &chunkedBackendFactory{
s: s,
file: file,
}, nil
}
// Check interfaces
var (
_ nbd.Backend = (*chunkedBackend)(nil)
_ backendFactory = (*chunkedBackendFactory)(nil)
)

View File

@@ -0,0 +1,140 @@
// Implements an nbd.Backend for serving from the VFS.
package nbd
import (
"fmt"
"os"
"github.com/rclone/gonbdserver/nbd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"golang.org/x/net/context"
)
// Backend for a single file
type fileBackend struct {
file vfs.Handle
ec *nbd.ExportConfig
}
// fileBackendFactory creates Backends for a single file
type fileBackendFactory struct {
s *NBD
vfs *vfs.VFS
filePath string
perms int
}
// WriteAt implements Backend.WriteAt
func (fb *fileBackend) WriteAt(ctx context.Context, b []byte, offset int64, fua bool) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", len(b), offset)("n=%d, err=%v", &n, &err)
n, err = fb.file.WriteAt(b, offset)
if err != nil || !fua {
return n, err
}
err = fb.file.Sync()
if err != nil {
return 0, err
}
return n, err
}
// ReadAt implements Backend.ReadAt
func (fb *fileBackend) ReadAt(ctx context.Context, b []byte, offset int64) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", len(b), offset)("n=%d, err=%v", &n, &err)
return fb.file.ReadAt(b, offset)
}
// TrimAt implements Backend.TrimAt
func (fb *fileBackend) TrimAt(ctx context.Context, length int, offset int64) (n int, err error) {
defer log.Trace(logPrefix, "size=%d, off=%d", length, offset)("n=%d, err=%v", &n, &err)
return length, nil
}
// Flush implements Backend.Flush
func (fb *fileBackend) Flush(ctx context.Context) (err error) {
defer log.Trace(logPrefix, "")("err=%v", &err)
return nil
}
// Close implements Backend.Close
func (fb *fileBackend) Close(ctx context.Context) (err error) {
defer log.Trace(logPrefix, "")("err=%v", &err)
err = fb.file.Close()
return err
}
// Geometry implements Backend.Geometry
func (fb *fileBackend) Geometry(ctx context.Context) (size uint64, minBS uint64, prefBS uint64, maxBS uint64, err error) {
defer log.Trace(logPrefix, "")("size=%d, minBS=%d, prefBS=%d, maxBS=%d, err=%v", &size, &minBS, &prefBS, &maxBS, &err)
fi, err := fb.file.Stat()
if err != nil {
err = fmt.Errorf("failed read info about open backing file: %w", err)
return
}
size = uint64(fi.Size())
minBS = fb.ec.MinimumBlockSize
prefBS = fb.ec.PreferredBlockSize
maxBS = fb.ec.MaximumBlockSize
err = nil
return
}
// HasFua implements Backend.HasFua
func (fb *fileBackend) HasFua(ctx context.Context) (fua bool) {
defer log.Trace(logPrefix, "")("fua=%v", &fua)
return true
}
// HasFlush implements Backend.HasFlush
func (fb *fileBackend) HasFlush(ctx context.Context) (flush bool) {
defer log.Trace(logPrefix, "")("flush=%v", &flush)
return true
}
// open the backing file
func (fbf *fileBackendFactory) open() (vfs.Handle, error) {
return fbf.vfs.OpenFile(fbf.filePath, fbf.perms, 0700)
}
// newBackend generates a new file backend
func (fbf *fileBackendFactory) newBackend(ctx context.Context, ec *nbd.ExportConfig) (nbd.Backend, error) {
fd, err := fbf.open()
if err != nil {
return nil, fmt.Errorf("failed to open backing file: %w", err)
}
fb := &fileBackend{
file: fd,
ec: ec,
}
return fb, nil
}
// Generate a file backend factory
func (s *NBD) newFileBackendFactory(ctx context.Context) (bf backendFactory, err error) {
perms := os.O_RDWR
if s.vfs.Opt.ReadOnly {
perms = os.O_RDONLY
}
fbf := &fileBackendFactory{
s: s,
vfs: s.vfs,
perms: perms,
filePath: s.leaf,
}
// Try opening the file so we get errors now rather than later when they are more difficult to report.
fd, err := fbf.open()
if err != nil {
return nil, fmt.Errorf("failed to open backing file: %w", err)
}
defer fs.CheckClose(fd, &err)
return fbf, nil
}
// Check interfaces
var (
_ nbd.Backend = (*fileBackend)(nil)
_ backendFactory = (*fileBackendFactory)(nil)
)

260
cmd/serve/nbd/nbd.go Normal file
View File

@@ -0,0 +1,260 @@
// Package nbd provides a network block device server
package nbd
import (
"bufio"
"context"
_ "embed"
"errors"
"fmt"
"io"
"log"
"math/bits"
"path/filepath"
"strings"
"sync"
"github.com/rclone/gonbdserver/nbd"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve/proxy/proxyflags"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
const logPrefix = "nbd"
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "addr",
Default: "localhost:10809",
Help: "IPaddress:Port or :Port to bind server to",
}, {
Name: "min_block_size",
Default: fs.SizeSuffix(512), // FIXME
Help: "Minimum block size to advertise",
}, {
Name: "preferred_block_size",
Default: fs.SizeSuffix(4096), // FIXME this is the max according to nbd-client
Help: "Preferred block size to advertise",
}, {
Name: "max_block_size",
Default: fs.SizeSuffix(1024 * 1024), // FIXME,
Help: "Maximum block size to advertise",
}, {
Name: "create",
Default: fs.SizeSuffix(-1),
Help: "If the destination does not exist, create it with this size",
}, {
Name: "chunk_size",
Default: fs.SizeSuffix(0),
Help: "If creating the destination use this chunk size. Must be a power of 2.",
}, {
Name: "resize",
Default: fs.SizeSuffix(-1),
Help: "If the destination exists, resize it to this size",
}}
// name := flag.String("name", "default", "Export name")
// description := flag.String("description", "The default export", "Export description")
// Options required for nbd server
type Options struct {
ListenAddr string `config:"addr"` // Port to listen on
MinBlockSize fs.SizeSuffix `config:"min_block_size"`
PreferredBlockSize fs.SizeSuffix `config:"preferred_block_size"`
MaxBlockSize fs.SizeSuffix `config:"max_block_size"`
Create fs.SizeSuffix `config:"create"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Resize fs.SizeSuffix `config:"resize"`
}
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "nbd", Opt: &Opt, Options: OptionsInfo})
}
// Opt is options set by command line flags
var Opt Options
// AddFlags adds flags for the nbd
func AddFlags(flagSet *pflag.FlagSet, Opt *Options) {
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
func init() {
flagSet := Command.Flags()
vfsflags.AddFlags(flagSet)
proxyflags.AddFlags(flagSet)
AddFlags(flagSet, &Opt)
}
//go:embed nbd.md
var helpText string
// Command definition for cobra
var Command = &cobra.Command{
Use: "nbd remote:path",
Short: `Serve the remote over NBD.`,
Long: helpText + vfs.Help(),
Annotations: map[string]string{
"versionIntroduced": "v1.65",
"status": "experimental",
},
Run: func(command *cobra.Command, args []string) {
// FIXME could serve more than one nbd?
cmd.CheckArgs(1, 1, command, args)
f, leaf := cmd.NewFsFile(args[0])
cmd.Run(false, true, command, func() error {
s, err := run(context.Background(), f, leaf, Opt)
if err != nil {
log.Fatal(err)
}
defer systemd.Notify()()
// FIXME
_ = s
s.Wait()
return nil
})
},
}
// NBD contains everything to run the server
type NBD struct {
f fs.Fs
leaf string
vfs *vfs.VFS // don't use directly, use getVFS
opt Options
wg sync.WaitGroup
sessionWaitGroup sync.WaitGroup
logRd *io.PipeReader
logWr *io.PipeWriter
log2ChunkSize uint
readOnly bool // Set for read only by vfs config
backendFactory backendFactory
}
// interface for creating backend factories
type backendFactory interface {
newBackend(ctx context.Context, ec *nbd.ExportConfig) (nbd.Backend, error)
}
// Create and start the server for nbd either on directory f or using file leaf in f
func run(ctx context.Context, f fs.Fs, leaf string, opt Options) (s *NBD, err error) {
s = &NBD{
f: f,
leaf: leaf,
opt: opt,
vfs: vfs.New(f, &vfscommon.Opt),
readOnly: vfscommon.Opt.ReadOnly,
}
if opt.ChunkSize != 0 {
if set := bits.OnesCount64(uint64(opt.ChunkSize)); set != 1 {
return nil, fmt.Errorf("--chunk-size must be a power of 2 (counted %d bits set)", set)
}
s.log2ChunkSize = uint(bits.TrailingZeros64(uint64(opt.ChunkSize)))
fs.Debugf(logPrefix, "Using ChunkSize %v (%v), Log2ChunkSize %d", opt.ChunkSize, fs.SizeSuffix(1<<s.log2ChunkSize), s.log2ChunkSize)
}
if !vfscommon.Opt.ReadOnly && vfscommon.Opt.CacheMode < vfscommon.CacheModeWrites {
return nil, errors.New("need --vfs-cache-mode writes or full when serving read/write")
}
// Create the backend factory
if leaf != "" {
s.backendFactory, err = s.newFileBackendFactory(ctx)
} else {
s.backendFactory, err = s.newChunkedBackendFactory(ctx)
}
if err != nil {
return nil, err
}
nbd.RegisterBackend("rclone", s.backendFactory.newBackend)
fs.Debugf(logPrefix, "Registered backends: %v", nbd.GetBackendNames())
var (
protocol = "tcp"
addr = Opt.ListenAddr
)
if strings.HasPrefix(addr, "unix://") || filepath.IsAbs(addr) {
protocol = "unix"
addr = strings.TrimPrefix(addr, "unix://")
}
ec := nbd.ExportConfig{
Name: "default",
Description: fs.ConfigString(f),
Driver: "rclone",
ReadOnly: vfscommon.Opt.ReadOnly,
Workers: 8, // should this be --checkers or a new config flag FIXME
TLSOnly: false, // FIXME
MinimumBlockSize: uint64(Opt.MinBlockSize),
PreferredBlockSize: uint64(Opt.PreferredBlockSize),
MaximumBlockSize: uint64(Opt.MaxBlockSize),
DriverParameters: nbd.DriverParametersConfig{
"sync": "false",
"path": "/tmp/diskimage",
},
}
// Make a logger to feed gonbdserver's logs into rclone's logging system
s.logRd, s.logWr = io.Pipe()
go func() {
scanner := bufio.NewScanner(s.logRd)
for scanner.Scan() {
line := scanner.Text()
if s, ok := strings.CutPrefix(line, "[DEBUG] "); ok {
fs.Debugf(logPrefix, "%s", s)
} else if s, ok := strings.CutPrefix(line, "[INFO] "); ok {
fs.Infof(logPrefix, "%s", s)
} else if s, ok := strings.CutPrefix(line, "[WARN] "); ok {
fs.Logf(logPrefix, "%s", s)
} else if s, ok := strings.CutPrefix(line, "[ERROR] "); ok {
fs.Errorf(logPrefix, "%s", s)
} else if s, ok := strings.CutPrefix(line, "[CRIT] "); ok {
fs.Errorf(logPrefix, "%s", s)
} else {
fs.Infof(logPrefix, "%s", line)
}
}
if err := scanner.Err(); err != nil {
fs.Errorf(logPrefix, "Log writer failed: %v", err)
}
}()
logger := log.New(s.logWr, "", 0)
ci := fs.GetConfig(ctx)
dump := ci.Dump & (fs.DumpHeaders | fs.DumpBodies | fs.DumpAuth | fs.DumpRequests | fs.DumpResponses)
var serverConfig = nbd.ServerConfig{
Protocol: protocol, // protocol it should listen on (in net.Conn form)
Address: addr, // address to listen on
DefaultExport: "default", // name of default export
Exports: []nbd.ExportConfig{ec}, // array of configurations of exported items
//TLS: nbd.TLSConfig{}, // TLS configuration
DisableNoZeroes: false, // Disable NoZeroes extension FIXME
Debug: dump != 0, // Verbose debug
}
s.wg.Add(1)
go func() {
defer s.wg.Done()
// FIXME contexts
nbd.StartServer(ctx, ctx, &s.sessionWaitGroup, logger, serverConfig)
}()
return s, nil
}
// Wait for the server to finish
func (s *NBD) Wait() {
s.wg.Wait()
_ = s.logWr.Close()
_ = s.logRd.Close()
}

139
cmd/serve/nbd/nbd.md Normal file
View File

@@ -0,0 +1,139 @@
Run a Network Block Device server using remote:path to store the data.
You can use a unix socket by setting the addr to `unix:///path/to/socket`
or just by using an absolute path name.
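For example, to listen on a unix socket instead of TCP (the socket path here is just an illustration):

rclone serve nbd --addr unix:///run/rclone-nbd.sock remote:path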
`rclone serve nbd` will run on any OS, but the examples for using it
are Linux specific. There do exist Windows and macOS NBD clients but
these haven't been tested yet.
To see the packets on the wire use `--dump headers` or `--dump bodies`.
**NB** this has no authentication. It may allow SSL certificates in
the future. If you need access control then you will have to provide it
at the network layer, or use unix sockets.
### remote:path pointing to a file
If the `remote:path` points to a file then rclone will serve the file
directly as a network block device.
Using this with `--read-only` is recommended. You can use any
`--vfs-cache-mode` and only parts of the file that are read will be
cached locally if using `--vfs-cache-mode full`.
If you don't use `--read-only` then `--vfs-cache-mode full` is
required and the entire file will be cached locally and won't be
uploaded until the client has disconnected (`nbd-client -d`).
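For example, to serve a disk image stored on the remote read-only (the image name is just an illustration):

rclone -v --read-only --vfs-cache-mode full serve nbd remote:disk.img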
### remote:path pointing to a directory
If the `remote:path` points to a directory then rclone will treat the
directory as a place to store chunks of the exported network block device.
It will store an `info.json` file in the top level and store the
individual chunks in a hierarchical directory scheme with no more than
256 chunks or directories in any directory.
The first time you use this, you should use the `--create` flag
indicating how big you want the network block device to appear. Rclone
only allocates space as it is used, so you can make this large, for
example `--create 1T`. You can also pass the `--chunk-size` flag at
this point. If you don't, you will get the default of 64k chunks.
Rclone will then chunk the network block device into `--chunk-size`
chunks. Rclone has to download an entire chunk in order to change only
part of it, and it will cache the chunk on disk, so bear that in mind
when choosing `--chunk-size`.
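For example, a first run creating a sparse 1T device stored in 256k chunks might look like this (the sizes are just an illustration):

rclone -v --vfs-cache-mode full serve nbd --create 1T --chunk-size 256k remote:path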
If you wish to change the size of the network block device you can use
the `--resize` flag. This won't remove any data, it just changes the
size advertised. So if you have made a file system on the block device
you will need to resize it too.
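For example, to advertise a larger size on a later run (you would then grow any filesystem on the device separately, e.g. with resize2fs for ext4):

rclone -v --vfs-cache-mode full serve nbd --resize 2T remote:path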
If you are using `--read-only` then you can use any
`--vfs-cache-mode`.
If you are not using `--read-only` then you will need
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
Note that rclone will be acting as a writeback cache with
`--vfs-cache-mode writes` or `--vfs-cache-mode full`. Rclone will only
upload `--transfers` files at once, so the cache can build up a backlog
of uploads. You can reduce the writeback caching slightly by setting
`--vfs-write-back 0`, however due to the way the kernel works this will
only reduce it slightly.
If using `--vfs-cache-mode writes` or `--vfs-cache-mode full` it is
recommended to set limits on the cache size using some or all of these
flags as the VFS can use a lot of disk space very quickly.
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
You might also need to set this smaller, as the cache is only
examined at this interval.
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
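Putting these together, a read/write invocation with cache limits might look like this (the values are just an illustration):

rclone -v --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-poll-interval 30s serve nbd remote:path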
### Linux Examples
Install
sudo apt install nbd-client
Start the server. By default it listens on localhost:10809.
rclone -v --vfs-cache-mode full serve nbd remote:path
List devices
sudo modprobe nbd
sudo nbd-client --list localhost
Format the device and mount it read-write
sudo nbd-client -g localhost 10809 /dev/nbd0
sudo mkfs.ext4 /dev/nbd0
sudo mkdir -p /mnt/tmp
sudo mount -t ext4 /dev/nbd0 /mnt/tmp
Mount read-only
rclone -v --vfs-cache-mode full --read-only serve nbd remote:path
sudo nbd-client --readonly -g localhost 10809 /dev/nbd0
sudo mount -t ext4 -o ro /dev/nbd0 /mnt/tmp
Disconnect
sudo umount /mnt/tmp
sudo nbd-client -d /dev/nbd0
### TODO
Experiment with the `-connections` option. This is supported by the code.
Does it improve performance?
-connections num
-C Use num connections to the server, to allow speeding up request
handling, at the cost of higher resource usage on the server.
Use of this option requires kernel support available first with
Linux 4.9.
Experiment with the `-persist` option - is that a good idea?
-persist
-p When this option is specified, nbd-client will immediately try
to reconnect an nbd device if the connection ever drops unexpectedly
due to a lost server or something similar.
Need to implement Trim and see if Trim is being called.
Need to delete zero files before upload (do in VFS layer?)
FIXME need better back pressure from VFS cache to writers.
FIXME need Sync to actually work!

View File

@@ -15,26 +15,39 @@ import (
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "addr",
Default: "",
Help: "IPaddress:Port or :Port to bind server to",
}, {
Name: "nfs_cache_handle_limit",
Default: 1000000,
Help: "max file handles cached simultaneously (min 5)",
}}
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "nfs", Opt: &opt, Options: OptionsInfo})
}
// Options contains options for the NFS Server
type Options struct {
ListenAddr string // Port to listen on
HandleLimit int // max file handles cached by go-nfs CachingHandler
ListenAddr string `config:"addr"` // Port to listen on
HandleLimit int `config:"nfs_cache_handle_limit"` // max file handles cached by go-nfs CachingHandler
}
var opt Options
// AddFlags adds flags for serve nfs (and nfsmount)
func AddFlags(flagSet *pflag.FlagSet, Opt *Options) {
rc.AddOption("nfs", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to", "")
flags.IntVarP(flagSet, &Opt.HandleLimit, "nfs-cache-handle-limit", "", 1000000, "max file handles cached simultaneously (min 5)", "")
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
func init() {
@@ -48,7 +61,7 @@ func Run(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f = cmd.NewFsSrc(args)
cmd.Run(false, true, command, func() error {
s, err := NewServer(context.Background(), vfs.New(f, &vfsflags.Opt), &opt)
s, err := NewServer(context.Background(), vfs.New(f, &vfscommon.Opt), &opt)
if err != nil {
return err
}

View File

@@ -19,7 +19,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
libcache "github.com/rclone/rclone/lib/cache"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
)
// Help contains text describing how to use the proxy
@@ -242,7 +242,7 @@ func (p *Proxy) call(user, auth string, isPublicKey bool) (value interface{}, er
// need to in memory. An attacker would find it easier to go
// after the unencrypted password in memory most likely.
entry := cacheEntry{
vfs: vfs.New(f, &vfsflags.Opt),
vfs: vfs.New(f, &vfscommon.Opt),
pwHash: sha256.Sum256([]byte(auth)),
}
return entry, true, nil

View File

@@ -25,22 +25,26 @@ var (
// backend for gofakes3
type s3Backend struct {
opt *Options
vfs *vfs.VFS
s *Server
meta *sync.Map
}
// newBackend creates a new SimpleBucketBackend.
func newBackend(vfs *vfs.VFS, opt *Options) gofakes3.Backend {
func newBackend(s *Server, opt *Options) gofakes3.Backend {
return &s3Backend{
vfs: vfs,
opt: opt,
s: s,
meta: new(sync.Map),
}
}
// ListBuckets always returns the default bucket.
func (b *s3Backend) ListBuckets(ctx context.Context) ([]gofakes3.BucketInfo, error) {
dirEntries, err := getDirEntries("/", b.vfs)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return nil, err
}
dirEntries, err := getDirEntries("/", _vfs)
if err != nil {
return nil, err
}
@@ -60,7 +64,11 @@ func (b *s3Backend) ListBuckets(ctx context.Context) ([]gofakes3.BucketInfo, err
// ListBucket lists the objects in the given bucket.
func (b *s3Backend) ListBucket(ctx context.Context, bucket string, prefix *gofakes3.Prefix, page gofakes3.ListBucketPage) (*gofakes3.ObjectList, error) {
_, err := b.vfs.Stat(bucket)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return nil, err
}
_, err = _vfs.Stat(bucket)
if err != nil {
return nil, gofakes3.BucketNotFound(bucket)
}
@@ -79,7 +87,7 @@ func (b *s3Backend) ListBucket(ctx context.Context, bucket string, prefix *gofak
response := gofakes3.NewObjectList()
path, remaining := prefixParser(prefix)
err = b.entryListR(bucket, path, remaining, prefix.HasDelimiter, response)
err = b.entryListR(_vfs, bucket, path, remaining, prefix.HasDelimiter, response)
if err == gofakes3.ErrNoSuchKey {
// AWS just returns an empty list
response = gofakes3.NewObjectList()
@@ -94,13 +102,17 @@ func (b *s3Backend) ListBucket(ctx context.Context, bucket string, prefix *gofak
//
// Note that the metadata is not supported yet.
func (b *s3Backend) HeadObject(ctx context.Context, bucketName, objectName string) (*gofakes3.Object, error) {
_, err := b.vfs.Stat(bucketName)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return nil, err
}
_, err = _vfs.Stat(bucketName)
if err != nil {
return nil, gofakes3.BucketNotFound(bucketName)
}
fp := path.Join(bucketName, objectName)
node, err := b.vfs.Stat(fp)
node, err := _vfs.Stat(fp)
if err != nil {
return nil, gofakes3.KeyNotFound(objectName)
}
@@ -141,13 +153,17 @@ func (b *s3Backend) HeadObject(ctx context.Context, bucketName, objectName strin
// GetObject fetchs the object from the filesystem.
func (b *s3Backend) GetObject(ctx context.Context, bucketName, objectName string, rangeRequest *gofakes3.ObjectRangeRequest) (obj *gofakes3.Object, err error) {
_, err = b.vfs.Stat(bucketName)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return nil, err
}
_, err = _vfs.Stat(bucketName)
if err != nil {
return nil, gofakes3.BucketNotFound(bucketName)
}
fp := path.Join(bucketName, objectName)
node, err := b.vfs.Stat(fp)
node, err := _vfs.Stat(fp)
if err != nil {
return nil, gofakes3.KeyNotFound(objectName)
}
@@ -223,9 +239,13 @@ func (b *s3Backend) storeModtime(fp string, meta map[string]string, val string)
// TouchObject creates or updates meta on specified object.
func (b *s3Backend) TouchObject(ctx context.Context, fp string, meta map[string]string) (result gofakes3.PutObjectResult, err error) {
_, err = b.vfs.Stat(fp)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return result, err
}
_, err = _vfs.Stat(fp)
if err == vfs.ENOENT {
f, err := b.vfs.Create(fp)
f, err := _vfs.Create(fp)
if err != nil {
return result, err
}
@@ -235,7 +255,7 @@ func (b *s3Backend) TouchObject(ctx context.Context, fp string, meta map[string]
return result, err
}
_, err = b.vfs.Stat(fp)
_, err = _vfs.Stat(fp)
if err != nil {
return result, err
}
@@ -246,7 +266,7 @@ func (b *s3Backend) TouchObject(ctx context.Context, fp string, meta map[string]
ti, err := swift.FloatStringToTime(val)
if err == nil {
b.storeModtime(fp, meta, val)
return result, b.vfs.Chtimes(fp, ti, ti)
return result, _vfs.Chtimes(fp, ti, ti)
}
// ignore error since the file is successfully created
}
@@ -255,7 +275,7 @@ func (b *s3Backend) TouchObject(ctx context.Context, fp string, meta map[string]
ti, err := swift.FloatStringToTime(val)
if err == nil {
b.storeModtime(fp, meta, val)
return result, b.vfs.Chtimes(fp, ti, ti)
return result, _vfs.Chtimes(fp, ti, ti)
}
// ignore error since the file is successfully created
}
@@ -270,7 +290,11 @@ func (b *s3Backend) PutObject(
meta map[string]string,
input io.Reader, size int64,
) (result gofakes3.PutObjectResult, err error) {
_, err = b.vfs.Stat(bucketName)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return result, err
}
_, err = _vfs.Stat(bucketName)
if err != nil {
return result, gofakes3.BucketNotFound(bucketName)
}
@@ -284,12 +308,12 @@ func (b *s3Backend) PutObject(
// }
if objectDir != "." {
if err := mkdirRecursive(objectDir, b.vfs); err != nil {
if err := mkdirRecursive(objectDir, _vfs); err != nil {
return result, err
}
}
f, err := b.vfs.Create(fp)
f, err := _vfs.Create(fp)
if err != nil {
return result, err
}
@@ -297,17 +321,17 @@ func (b *s3Backend) PutObject(
if _, err := io.Copy(f, input); err != nil {
// remove file when i/o error occurred (FsPutErr)
_ = f.Close()
_ = b.vfs.Remove(fp)
_ = _vfs.Remove(fp)
return result, err
}
if err := f.Close(); err != nil {
// remove file when close error occurred (FsPutErr)
_ = b.vfs.Remove(fp)
_ = _vfs.Remove(fp)
return result, err
}
_, err = b.vfs.Stat(fp)
_, err = _vfs.Stat(fp)
if err != nil {
return result, err
}
@@ -318,16 +342,13 @@ func (b *s3Backend) PutObject(
ti, err := swift.FloatStringToTime(val)
if err == nil {
b.storeModtime(fp, meta, val)
return result, b.vfs.Chtimes(fp, ti, ti)
return result, _vfs.Chtimes(fp, ti, ti)
}
// ignore error since the file is successfully created
}
if val, ok := meta["mtime"]; ok {
ti, err := swift.FloatStringToTime(val)
if err == nil {
if val, ok := meta["mtime"]; ok {
b.storeModtime(fp, meta, val)
return result, b.vfs.Chtimes(fp, ti, ti)
return result, _vfs.Chtimes(fp, ti, ti)
}
// ignore error since the file is successfully created
}
@@ -338,7 +359,7 @@ func (b *s3Backend) PutObject(
// DeleteMulti deletes multiple objects in a single request.
func (b *s3Backend) DeleteMulti(ctx context.Context, bucketName string, objects ...string) (result gofakes3.MultiDeleteResult, rerr error) {
for _, object := range objects {
if err := b.deleteObject(bucketName, object); err != nil {
if err := b.deleteObject(ctx, bucketName, object); err != nil {
fs.Errorf("serve s3", "delete object failed: %v", err)
result.Error = append(result.Error, gofakes3.ErrorResult{
Code: gofakes3.ErrInternal,
@@ -357,12 +378,16 @@ func (b *s3Backend) DeleteMulti(ctx context.Context, bucketName string, objects
// DeleteObject deletes the object with the given name.
func (b *s3Backend) DeleteObject(ctx context.Context, bucketName, objectName string) (result gofakes3.ObjectDeleteResult, rerr error) {
return result, b.deleteObject(bucketName, objectName)
return result, b.deleteObject(ctx, bucketName, objectName)
}
// deleteObject deletes the object from the filesystem.
func (b *s3Backend) deleteObject(bucketName, objectName string) error {
_, err := b.vfs.Stat(bucketName)
func (b *s3Backend) deleteObject(ctx context.Context, bucketName, objectName string) error {
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return err
}
_, err = _vfs.Stat(bucketName)
if err != nil {
return gofakes3.BucketNotFound(bucketName)
}
@@ -370,18 +395,22 @@ func (b *s3Backend) deleteObject(bucketName, objectName string) error {
fp := path.Join(bucketName, objectName)
// S3 does not report an error when attempting to delete a key that does not exist, so
// we need to skip IsNotExist errors.
if err := b.vfs.Remove(fp); err != nil && !os.IsNotExist(err) {
if err := _vfs.Remove(fp); err != nil && !os.IsNotExist(err) {
return err
}
// FIXME: unsafe operation
rmdirRecursive(fp, b.vfs)
rmdirRecursive(fp, _vfs)
return nil
}
// CreateBucket creates a new bucket.
func (b *s3Backend) CreateBucket(ctx context.Context, name string) error {
_, err := b.vfs.Stat(name)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return err
}
_, err = _vfs.Stat(name)
if err != nil && err != vfs.ENOENT {
return gofakes3.ErrInternal
}
@@ -390,7 +419,7 @@ func (b *s3Backend) CreateBucket(ctx context.Context, name string) error {
return gofakes3.ErrBucketAlreadyExists
}
if err := b.vfs.Mkdir(name, 0755); err != nil {
if err := _vfs.Mkdir(name, 0755); err != nil {
return gofakes3.ErrInternal
}
return nil
@@ -398,12 +427,16 @@ func (b *s3Backend) CreateBucket(ctx context.Context, name string) error {
// DeleteBucket deletes the bucket with the given name.
func (b *s3Backend) DeleteBucket(ctx context.Context, name string) error {
_, err := b.vfs.Stat(name)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return err
}
_, err = _vfs.Stat(name)
if err != nil {
return gofakes3.BucketNotFound(name)
}
if err := b.vfs.Remove(name); err != nil {
if err := _vfs.Remove(name); err != nil {
return gofakes3.ErrBucketNotEmpty
}
@@ -412,7 +445,11 @@ func (b *s3Backend) DeleteBucket(ctx context.Context, name string) error {
// BucketExists checks if the bucket exists.
func (b *s3Backend) BucketExists(ctx context.Context, name string) (exists bool, err error) {
_, err = b.vfs.Stat(name)
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return false, err
}
_, err = _vfs.Stat(name)
if err != nil {
return false, nil
}
@@ -422,6 +459,10 @@ func (b *s3Backend) BucketExists(ctx context.Context, name string) (exists bool,
// CopyObject copy specified object from srcKey to dstKey.
func (b *s3Backend) CopyObject(ctx context.Context, srcBucket, srcKey, dstBucket, dstKey string, meta map[string]string) (result gofakes3.CopyObjectResult, err error) {
_vfs, err := b.s.getVFS(ctx)
if err != nil {
return result, err
}
fp := path.Join(srcBucket, srcKey)
if srcBucket == dstBucket && srcKey == dstKey {
b.meta.Store(fp, meta)
@@ -439,10 +480,10 @@ func (b *s3Backend) CopyObject(ctx context.Context, srcBucket, srcKey, dstBucket
}
b.storeModtime(fp, meta, val)
return result, b.vfs.Chtimes(fp, ti, ti)
return result, _vfs.Chtimes(fp, ti, ti)
}
cStat, err := b.vfs.Stat(fp)
cStat, err := _vfs.Stat(fp)
if err != nil {
return
}

View File

@@ -5,12 +5,13 @@ import (
"strings"
"github.com/rclone/gofakes3"
"github.com/rclone/rclone/vfs"
)
func (b *s3Backend) entryListR(bucket, fdPath, name string, addPrefix bool, response *gofakes3.ObjectList) error {
func (b *s3Backend) entryListR(_vfs *vfs.VFS, bucket, fdPath, name string, addPrefix bool, response *gofakes3.ObjectList) error {
fp := path.Join(bucket, fdPath)
dirEntries, err := getDirEntries(fp, b.vfs)
dirEntries, err := getDirEntries(fp, _vfs)
if err != nil {
return err
}
@@ -30,7 +31,7 @@ func (b *s3Backend) entryListR(bucket, fdPath, name string, addPrefix bool, resp
response.AddPrefix(gofakes3.URLEncode(objectPath))
continue
}
err := b.entryListR(bucket, path.Join(fdPath, object), "", false, response)
err := b.entryListR(_vfs, bucket, path.Join(fdPath, object), "", false, response)
if err != nil {
return err
}

View File

@@ -6,6 +6,8 @@ import (
"strings"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve/proxy/proxyflags"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/hash"
httplib "github.com/rclone/rclone/lib/http"
@@ -20,6 +22,7 @@ var DefaultOpt = Options{
hashName: "MD5",
hashType: hash.MD5,
noCleanup: false,
Auth: httplib.DefaultAuthCfg(),
HTTP: httplib.DefaultCfg(),
}
@@ -30,8 +33,10 @@ const flagPrefix = ""
func init() {
flagSet := Command.Flags()
httplib.AddAuthFlagsPrefix(flagSet, flagPrefix, &Opt.Auth)
httplib.AddHTTPFlagsPrefix(flagSet, flagPrefix, &Opt.HTTP)
vfsflags.AddFlags(flagSet)
proxyflags.AddFlags(flagSet)
flags.BoolVarP(flagSet, &Opt.pathBucketMode, "force-path-style", "", Opt.pathBucketMode, "If true use path style access if false use virtual hosted style (default true)", "")
flags.StringVarP(flagSet, &Opt.hashName, "etag-hash", "", Opt.hashName, "Which hash to use for the ETag, or auto or blank for off", "")
flags.StringArrayVarP(flagSet, &Opt.authPair, "auth-key", "", Opt.authPair, "Set key pair for v4 authorization: access_key_id,secret_access_key", "")
@@ -55,10 +60,15 @@ var Command = &cobra.Command{
},
Use: "s3 remote:path",
Short: `Serve remote:path over s3.`,
Long: help() + httplib.Help(flagPrefix) + vfs.Help(),
Long: help() + httplib.AuthHelp(flagPrefix) + httplib.Help(flagPrefix) + vfs.Help(),
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
var f fs.Fs
if proxyflags.Opt.AuthProxy == "" {
cmd.CheckArgs(1, 1, command, args)
f = cmd.NewFsSrc(args)
} else {
cmd.CheckArgs(0, 0, command, args)
}
if Opt.hashName == "auto" {
Opt.hashType = f.Hashes().GetOne()
@@ -73,13 +83,13 @@ var Command = &cobra.Command{
if err != nil {
return err
}
router := s.Router()
router := s.server.Router()
s.Bind(router)
err = s.serve()
err = s.Serve()
if err != nil {
return err
}
s.Wait()
s.server.Wait()
return nil
})
return nil

View File

@@ -9,10 +9,8 @@ import (
"fmt"
"io"
"net/url"
"os"
"os/exec"
"path"
"strings"
"path/filepath"
"testing"
"time"
@@ -21,6 +19,7 @@ import (
"github.com/rclone/rclone/fs/object"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/cmd/serve/proxy/proxyflags"
"github.com/rclone/rclone/cmd/serve/servetest"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
@@ -37,7 +36,7 @@ const (
)
// Configure and serve the server
func serveS3(f fs.Fs) (testURL string, keyid string, keysec string) {
func serveS3(f fs.Fs) (testURL string, keyid string, keysec string, w *Server) {
keyid = random.String(16)
keysec = random.String(16)
serveropt := &Options{
@@ -49,12 +48,12 @@ func serveS3(f fs.Fs) (testURL string, keyid string, keysec string) {
}
serveropt.HTTP.ListenAddr = []string{endpoint}
w, _ := newServer(context.Background(), f, serveropt)
router := w.Router()
w, _ = newServer(context.Background(), f, serveropt)
router := w.server.Router()
w.Bind(router)
w.Serve()
testURL = w.Server.URLs()[0]
_ = w.Serve()
testURL = w.server.URLs()[0]
return
}
@@ -63,7 +62,7 @@ func serveS3(f fs.Fs) (testURL string, keyid string, keysec string) {
// s3 remote against it.
func TestS3(t *testing.T) {
start := func(f fs.Fs) (configmap.Simple, func()) {
testURL, keyid, keysec := serveS3(f)
testURL, keyid, keysec, _ := serveS3(f)
// Config for the backend we'll use to connect to the server
config := configmap.Simple{
"type": "s3",
@@ -76,62 +75,7 @@ func TestS3(t *testing.T) {
return config, func() {}
}
RunS3UnitTests(t, "s3", start)
}
func RunS3UnitTests(t *testing.T, name string, start servetest.StartFn) {
fstest.Initialise()
ci := fs.GetConfig(context.Background())
ci.DisableFeatures = append(ci.DisableFeatures, "Metadata")
fremote, _, clean, err := fstest.RandomRemote()
assert.NoError(t, err)
defer clean()
err = fremote.Mkdir(context.Background(), "")
assert.NoError(t, err)
f := fremote
config, cleanup := start(f)
defer cleanup()
// Change directory to run the tests
cwd, err := os.Getwd()
require.NoError(t, err)
err = os.Chdir("../../../backend/" + name)
require.NoError(t, err, "failed to cd to "+name+" backend")
defer func() {
// Change back to the old directory
require.NoError(t, os.Chdir(cwd))
}()
// RunS3UnitTests the backend tests with an on the fly remote
args := []string{"test"}
if testing.Verbose() {
args = append(args, "-v")
}
if *fstest.Verbose {
args = append(args, "-verbose")
}
remoteName := "serve" + name + ":"
args = append(args, "-remote", remoteName)
args = append(args, "-run", "^TestIntegration$")
args = append(args, "-list-retries", fmt.Sprint(*fstest.ListRetries))
cmd := exec.Command("go", args...)
// Configure the backend with environment variables
cmd.Env = os.Environ()
prefix := "RCLONE_CONFIG_" + strings.ToUpper(remoteName[:len(remoteName)-1]) + "_"
for k, v := range config {
cmd.Env = append(cmd.Env, prefix+strings.ToUpper(k)+"="+v)
}
// RunS3UnitTests the test
out, err := cmd.CombinedOutput()
if len(out) != 0 {
t.Logf("\n----------\n%s----------\n", string(out))
}
assert.NoError(t, err, "Running "+name+" integration tests")
servetest.Run(t, "s3", start)
}
// tests using the minio client
@@ -181,7 +125,7 @@ func TestEncodingWithMinioClient(t *testing.T) {
_, err = f.Put(context.Background(), in, obji)
assert.NoError(t, err)
endpoint, keyid, keysec := serveS3(f)
endpoint, keyid, keysec, _ := serveS3(f)
testURL, _ := url.Parse(endpoint)
minioClient, err := minio.New(testURL.Host, &minio.Options{
Creds: credentials.NewStaticV4(keyid, keysec, ""),
@@ -200,5 +144,161 @@ func TestEncodingWithMinioClient(t *testing.T) {
}
})
}
}
type FileStruct struct {
path string
filename string
}
type TestCase struct {
description string
bucket string
files []FileStruct
keyID string
keySec string
shouldFail bool
}
func testListBuckets(t *testing.T, cases []TestCase, useProxy bool) {
fstest.Initialise()
var f fs.Fs
if useProxy {
// the backend config will be made by the proxy
prog, err := filepath.Abs("../servetest/proxy_code.go")
require.NoError(t, err)
files, err := filepath.Abs("testdata")
require.NoError(t, err)
cmd := "go run " + prog + " " + files
// FIXME: this is untidy setting a global variable!
proxyflags.Opt.AuthProxy = cmd
defer func() {
proxyflags.Opt.AuthProxy = ""
}()
f = nil
} else {
// create a test Fs
var err error
f, err = fs.NewFs(context.Background(), "testdata")
require.NoError(t, err)
}
for _, tt := range cases {
t.Run(tt.description, func(t *testing.T) {
endpoint, keyid, keysec, s := serveS3(f)
defer func() {
assert.NoError(t, s.server.Shutdown())
}()
if tt.keyID != "" {
keyid = tt.keyID
}
if tt.keySec != "" {
keysec = tt.keySec
}
testURL, _ := url.Parse(endpoint)
minioClient, err := minio.New(testURL.Host, &minio.Options{
Creds: credentials.NewStaticV4(keyid, keysec, ""),
Secure: false,
})
assert.NoError(t, err)
buckets, err := minioClient.ListBuckets(context.Background())
if tt.shouldFail {
require.Error(t, err)
} else {
require.NoError(t, err)
require.NotEmpty(t, buckets)
assert.Equal(t, buckets[0].Name, tt.bucket)
o := minioClient.ListObjects(context.Background(), tt.bucket, minio.ListObjectsOptions{
Recursive: true,
})
// save files after reading from channel
objects := []string{}
for object := range o {
objects = append(objects, object.Key)
}
for _, entry := range tt.files {
file := path.Join(entry.path, entry.filename)
found := false
for _, fname := range objects {
if file == fname {
found = true
break
}
}
require.True(t, found, "Object not found: "+file)
}
}
})
}
}
func TestListBuckets(t *testing.T) {
var cases = []TestCase{
{
description: "list buckets",
bucket: "mybucket",
files: []FileStruct{
{
path: "",
filename: "lorem.txt",
},
{
path: "foo",
filename: "bar.txt",
},
},
},
{
description: "list buckets: wrong s3 key",
bucket: "mybucket",
keyID: "invalid",
shouldFail: true,
},
{
description: "list buckets: wrong s3 secret",
bucket: "mybucket",
keySec: "invalid",
shouldFail: true,
},
}
testListBuckets(t, cases, false)
}
func TestListBucketsAuthProxy(t *testing.T) {
var cases = []TestCase{
{
description: "list buckets",
bucket: "mybucket",
// request with random keyid
// instead of what was set in 'authPair'
keyID: random.String(16),
files: []FileStruct{
{
path: "",
filename: "lorem.txt",
},
{
path: "foo",
filename: "bar.txt",
},
},
},
{
description: "list buckets: wrong s3 secret",
bucket: "mybucket",
keySec: "invalid",
shouldFail: true,
},
}
testListBuckets(t, cases, true)
}

View File

@@ -3,17 +3,30 @@ package s3
import (
"context"
"crypto/md5"
"encoding/hex"
"errors"
"fmt"
"math/rand"
"net/http"
"strings"
"github.com/go-chi/chi/v5"
"github.com/rclone/gofakes3"
"github.com/rclone/gofakes3/signature"
"github.com/rclone/rclone/cmd/serve/proxy"
"github.com/rclone/rclone/cmd/serve/proxy/proxyflags"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
httplib "github.com/rclone/rclone/lib/http"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
)
type ctxKey int
const (
ctxKeyID ctxKey = iota
)
// Options contains options for the http Server
@@ -24,17 +37,20 @@ type Options struct {
hashType hash.Type
authPair []string
noCleanup bool
Auth httplib.AuthConfig
HTTP httplib.Config
}
// Server is a s3.FileSystem interface
type Server struct {
*httplib.Server
f fs.Fs
vfs *vfs.VFS
faker *gofakes3.GoFakeS3
handler http.Handler
ctx context.Context // for global config
server *httplib.Server
f fs.Fs
_vfs *vfs.VFS // don't use directly, use getVFS
faker *gofakes3.GoFakeS3
handler http.Handler
proxy *proxy.Proxy
ctx context.Context // for global config
s3Secret string
}
// Make a new S3 Server to serve the remote
@@ -42,16 +58,17 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options) (s *Server, err error
w := &Server{
f: f,
ctx: ctx,
vfs: vfs.New(f, &vfsflags.Opt),
}
if len(opt.authPair) == 0 {
fs.Logf("serve s3", "No auth provided so allowing anonymous access")
} else {
w.s3Secret = getAuthSecret(opt.authPair)
}
var newLogger logger
w.faker = gofakes3.New(
newBackend(w.vfs, opt),
newBackend(w, opt),
gofakes3.WithHostBucket(!opt.pathBucketMode),
gofakes3.WithLogger(newLogger),
gofakes3.WithRequestID(rand.Uint64()),
@@ -60,24 +77,124 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options) (s *Server, err error
gofakes3.WithIntegrityCheck(true), // Check Content-MD5 if supplied
)
w.Server, err = httplib.NewServer(ctx,
w.handler = http.NewServeMux()
w.handler = w.faker.Server()
if proxyflags.Opt.AuthProxy != "" {
w.proxy = proxy.New(ctx, &proxyflags.Opt)
// proxy auth middleware
w.handler = proxyAuthMiddleware(w.handler, w)
w.handler = authPairMiddleware(w.handler, w)
} else {
w._vfs = vfs.New(f, &vfscommon.Opt)
if len(opt.authPair) > 0 {
w.faker.AddAuthKeys(authlistResolver(opt.authPair))
}
}
w.server, err = httplib.NewServer(ctx,
httplib.WithConfig(opt.HTTP),
httplib.WithAuth(opt.Auth),
)
if err != nil {
return nil, fmt.Errorf("failed to init server: %w", err)
}
w.handler = w.faker.Server()
return w, nil
}
func (w *Server) getVFS(ctx context.Context) (VFS *vfs.VFS, err error) {
if w._vfs != nil {
return w._vfs, nil
}
value := ctx.Value(ctxKeyID)
if value == nil {
return nil, errors.New("no VFS found in context")
}
VFS, ok := value.(*vfs.VFS)
if !ok {
return nil, fmt.Errorf("context value is not VFS: %#v", value)
}
return VFS, nil
}
// auth does proxy authorization
func (w *Server) auth(accessKeyID string) (value interface{}, err error) {
VFS, _, err := w.proxy.Call(stringToMd5Hash(accessKeyID), accessKeyID, false)
if err != nil {
return nil, err
}
return VFS, err
}
// Bind registers the handler to the http.Router
func (w *Server) Bind(router chi.Router) {
router.Handle("/*", w.handler)
}
func (w *Server) serve() error {
w.Serve()
fs.Logf(w.f, "Starting s3 server on %s", w.URLs())
// Serve serves the s3 server
func (w *Server) Serve() error {
w.server.Serve()
fs.Logf(w.f, "Starting s3 server on %s", w.server.URLs())
return nil
}
func authPairMiddleware(next http.Handler, ws *Server) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
accessKey, _ := parseAccessKeyID(r)
// set the auth pair
authPair := map[string]string{
accessKey: ws.s3Secret,
}
ws.faker.AddAuthKeys(authPair)
next.ServeHTTP(w, r)
})
}
func proxyAuthMiddleware(next http.Handler, ws *Server) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
accessKey, _ := parseAccessKeyID(r)
value, err := ws.auth(accessKey)
if err != nil {
fs.Infof(r.URL.Path, "%s: Auth failed: %v", r.RemoteAddr, err)
}
if value != nil {
r = r.WithContext(context.WithValue(r.Context(), ctxKeyID, value))
}
next.ServeHTTP(w, r)
})
}
func parseAccessKeyID(r *http.Request) (accessKey string, error signature.ErrorCode) {
v4Auth := r.Header.Get("Authorization")
req, err := signature.ParseSignV4(v4Auth)
if err != signature.ErrNone {
return "", err
}
return req.Credential.GetAccessKey(), signature.ErrNone
}
func stringToMd5Hash(s string) string {
hasher := md5.New()
hasher.Write([]byte(s))
return hex.EncodeToString(hasher.Sum(nil))
}
func getAuthSecret(authPair []string) string {
if len(authPair) == 0 {
return ""
}
parts := strings.Split(authPair[0], ",")
if len(parts) != 2 {
return ""
}
secret := strings.TrimSpace(parts[1])
return secret
}

View File

@@ -0,0 +1 @@
I am inside a folder

View File

@@ -0,0 +1 @@
lorem epsum gipsum


@@ -9,6 +9,7 @@ import (
"github.com/rclone/rclone/cmd/serve/docker"
"github.com/rclone/rclone/cmd/serve/ftp"
"github.com/rclone/rclone/cmd/serve/http"
"github.com/rclone/rclone/cmd/serve/nbd"
"github.com/rclone/rclone/cmd/serve/nfs"
"github.com/rclone/rclone/cmd/serve/restic"
"github.com/rclone/rclone/cmd/serve/s3"
@@ -43,6 +44,9 @@ func init() {
if s3.Command != nil {
Command.AddCommand(s3.Command)
}
if nbd.Command != nil {
Command.AddCommand(nbd.Command)
}
cmd.Root.AddCommand(Command)
}


@@ -76,7 +76,7 @@ func run(t *testing.T, name string, start StartFn, useProxy bool) {
if *fstest.Verbose {
args = append(args, "-verbose")
}
remoteName := name + "test:"
remoteName := "serve" + name + "test:"
if *subRun != "" {
args = append(args, "-run", *subRun)
}


@@ -17,7 +17,7 @@ import (
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/terminal"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"golang.org/x/crypto/ssh"
)
@@ -307,7 +307,7 @@ func serveStdio(f fs.Fs) error {
stdin: os.Stdin,
stdout: os.Stdout,
}
handlers := newVFSHandler(vfs.New(f, &vfsflags.Opt))
handlers := newVFSHandler(vfs.New(f, &vfscommon.Opt))
return serveChannel(sshChannel, handlers, "stdio")
}


@@ -28,7 +28,7 @@ import (
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"golang.org/x/crypto/ssh"
)
@@ -54,7 +54,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options) *server {
if proxyflags.Opt.AuthProxy != "" {
s.proxy = proxy.New(ctx, &proxyflags.Opt)
} else {
s.vfs = vfs.New(f, &vfsflags.Opt)
s.vfs = vfs.New(f, &vfscommon.Opt)
}
return s
}
@@ -133,7 +133,7 @@ func (s *server) serve() (err error) {
var authorizedKeysMap map[string]struct{}
// ensure the user isn't trying to use conflicting flags
if proxyflags.Opt.AuthProxy != "" && s.opt.AuthorizedKeys != "" && s.opt.AuthorizedKeys != DefaultOpt.AuthorizedKeys {
if proxyflags.Opt.AuthProxy != "" && s.opt.AuthorizedKeys != "" && s.opt.AuthorizedKeys != Opt.AuthorizedKeys {
return errors.New("--auth-proxy and --authorized-keys cannot be used at the same time")
}
@@ -142,7 +142,7 @@ func (s *server) serve() (err error) {
authKeysFile := env.ShellExpand(s.opt.AuthorizedKeys)
authorizedKeysMap, err = loadAuthorizedKeys(authKeysFile)
// If user set the flag away from the default then report an error
if err != nil && s.opt.AuthorizedKeys != DefaultOpt.AuthorizedKeys {
if err != nil && s.opt.AuthorizedKeys != Opt.AuthorizedKeys {
return err
}
fs.Logf(nil, "Loaded %d authorized keys from %q", len(authorizedKeysMap), authKeysFile)


@@ -11,7 +11,6 @@ import (
"github.com/rclone/rclone/cmd/serve/proxy/proxyflags"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
@@ -19,36 +18,58 @@ import (
"github.com/spf13/pflag"
)
// OptionsInfo describes the Options in use
var OptionsInfo = fs.Options{{
Name: "addr",
Default: "localhost:2022",
Help: "IPaddress:Port or :Port to bind server to",
}, {
Name: "key",
Default: []string{},
Help: "SSH private host key file (Can be multi-valued, leave blank to auto generate)",
}, {
Name: "authorized_keys",
Default: "~/.ssh/authorized_keys",
Help: "Authorized keys file",
}, {
Name: "user",
Default: "",
Help: "User name for authentication",
}, {
Name: "pass",
Default: "",
Help: "Password for authentication",
}, {
Name: "no_auth",
Default: false,
Help: "Allow connections with no authentication if set",
}, {
Name: "stdio",
Default: false,
Help: "Run an sftp server on stdin/stdout",
}}
// Options contains options for the sftp server
type Options struct {
ListenAddr string // Port to listen on
HostKeys []string // Paths to private host keys
AuthorizedKeys string // Path to authorized keys file
User string // single username
Pass string // password for user
NoAuth bool // allow no authentication on connections
Stdio bool // serve on stdio
ListenAddr string `config:"addr"` // Port to listen on
HostKeys []string `config:"key"` // Paths to private host keys
AuthorizedKeys string `config:"authorized_keys"` // Path to authorized keys file
User string `config:"user"` // single username
Pass string `config:"pass"` // password for user
NoAuth bool `config:"no_auth"` // allow no authentication on connections
Stdio bool `config:"stdio"` // serve on stdio
}
// DefaultOpt is the default values used for Options
var DefaultOpt = Options{
ListenAddr: "localhost:2022",
AuthorizedKeys: "~/.ssh/authorized_keys",
func init() {
fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "sftp", Opt: &Opt, Options: OptionsInfo})
}
// Opt is options set by command line flags
var Opt = DefaultOpt
var Opt Options
// AddFlags adds flags for the sftp
func AddFlags(flagSet *pflag.FlagSet, Opt *Options) {
rc.AddOption("sftp", &Opt)
flags.StringVarP(flagSet, &Opt.ListenAddr, "addr", "", Opt.ListenAddr, "IPaddress:Port or :Port to bind server to", "")
flags.StringArrayVarP(flagSet, &Opt.HostKeys, "key", "", Opt.HostKeys, "SSH private host key file (Can be multi-valued, leave blank to auto generate)", "")
flags.StringVarP(flagSet, &Opt.AuthorizedKeys, "authorized-keys", "", Opt.AuthorizedKeys, "Authorized keys file", "")
flags.StringVarP(flagSet, &Opt.User, "user", "", Opt.User, "User name for authentication", "")
flags.StringVarP(flagSet, &Opt.Pass, "pass", "", Opt.Pass, "Password for authentication", "")
flags.BoolVarP(flagSet, &Opt.NoAuth, "no-auth", "", Opt.NoAuth, "Allow connections with no authentication if set", "")
flags.BoolVarP(flagSet, &Opt.Stdio, "stdio", "", Opt.Stdio, "Run an sftp server on stdin/stdout", "")
flags.AddFlagsFromOptions(flagSet, "", OptionsInfo)
}
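For reference, the new options plumbing used here reduces to three parts: a table of option metadata, a struct bound to it via `config:` tags, and a registration call from which the command line flags are generated. A minimal sketch for a hypothetical `example` options group (names illustrative), using only the APIs that appear in this diff:

	package example

	import "github.com/rclone/rclone/fs"

	// option metadata: name, default and help live in one table
	var exampleOptionsInfo = fs.Options{{
		Name:    "addr",
		Default: "localhost:2022",
		Help:    "IPaddress:Port or :Port to bind server to",
	}}

	// struct fields bind to the table entries via config tags
	type exampleOptions struct {
		ListenAddr string `config:"addr"`
	}

	// filled in from flags/env/config at startup
	var exampleOpt exampleOptions

	func init() {
		// register the group globally; flags.AddFlagsFromOptions can then
		// generate the corresponding command line flags from the same table
		fs.RegisterGlobalOptions(fs.OptionsInfo{Name: "example", Opt: &exampleOpt, Options: exampleOptionsInfo})
	}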
func init() {


@@ -40,7 +40,7 @@ var (
func TestSftp(t *testing.T) {
// Configure and start the server
start := func(f fs.Fs) (configmap.Simple, func()) {
opt := DefaultOpt
opt := Opt
opt.ListenAddr = testBindAddress
opt.User = testUser
opt.Pass = testPass


@@ -26,6 +26,7 @@ import (
"github.com/rclone/rclone/lib/http/serve"
"github.com/rclone/rclone/lib/systemd"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"golang.org/x/net/webdav"
@@ -193,7 +194,7 @@ func newWebDAV(ctx context.Context, f fs.Fs, opt *Options) (w *WebDAV, err error
// override auth
w.opt.Auth.CustomAuthFn = w.auth
} else {
w._vfs = vfs.New(f, &vfsflags.Opt)
w._vfs = vfs.New(f, &vfscommon.Opt)
}
w.Server, err = libhttp.NewServer(ctx,
@@ -365,7 +366,7 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
// Make the entries for display
directory := serve.NewDirectory(dirRemote, w.Server.HTMLTemplate())
for _, node := range dirEntries {
if vfsflags.Opt.NoModTime {
if vfscommon.Opt.NoModTime {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{})
} else {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), node.ModTime().UTC())


@@ -139,7 +139,12 @@ func Touch(ctx context.Context, f fs.Fs, remote string) error {
return err
}
fs.Debugf(nil, "Touch time %v", t)
file, err := f.NewObject(ctx, remote)
var file fs.Object
if remote == "" {
err = fs.ErrorIsDir
} else {
file, err = f.NewObject(ctx, remote)
}
if err != nil {
if errors.Is(err, fs.ErrorObjectNotFound) {
// Touching non-existent path, possibly creating it as new file


@@ -66,7 +66,7 @@ func TestEnvironmentVariables(t *testing.T) {
assert.NotContains(t, out, "fileAA1.txt") // depth 4
}
// Test of debug logging while initialising flags from environment (tests #5241 Enhance1)
// Test of debug logging while initialising flags from environment (tests #5341 Enhance1)
env = "RCLONE_STATS=173ms"
out, err = rcloneEnv(env, "version", "-vv")
if assert.NoError(t, err) {
@@ -323,4 +323,25 @@ func TestEnvironmentVariables(t *testing.T) {
assert.NotContains(t, out, "fileB1.txt")
}
// Test --use-json-log and -vv combinations
jsonLogOK := func() {
t.Helper()
if assert.NoError(t, err) {
assert.Contains(t, out, `{"level":"debug",`)
assert.Contains(t, out, `"msg":"Version `)
assert.Contains(t, out, `"}`)
}
}
env = "RCLONE_USE_JSON_LOG=1;RCLONE_LOG_LEVEL=DEBUG"
out, err = rcloneEnv(env, "version")
jsonLogOK()
env = "RCLONE_USE_JSON_LOG=1"
out, err = rcloneEnv(env, "version", "-vv")
jsonLogOK()
env = "RCLONE_LOG_LEVEL=DEBUG"
out, err = rcloneEnv(env, "version", "--use-json-log")
jsonLogOK()
env = ""
out, err = rcloneEnv(env, "version", "-vv", "--use-json-log")
jsonLogOK()
}
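For reference, a log line matching these assertions looks roughly like the following (illustrative, not captured output; the exact fields and their order depend on the logging backend):

	{"level":"debug","msg":"Version \"v1.67.0\" starting with parameters [\"rclone\" \"version\" \"-vv\"]","time":"2024-07-20T12:00:00+01:00"}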


@@ -9,8 +9,7 @@
"description": "rclone - rsync for cloud storage: google drive, s3, gcs, azure, dropbox, box...",
"canonifyurls": false,
"disableKinds": [
"taxonomy",
"taxonomyTerm"
"taxonomy"
],
"ignoreFiles": [
"~$",


@@ -118,7 +118,7 @@ Here are the Advanced options specific to alias (Alias for an existing remote).
#### --alias-description
Description of the remote
Description of the remote.
Properties:


@@ -865,3 +865,12 @@ put them back in again.` >}}
* Michał Dzienisiewicz <michal.piotr.dz@gmail.com>
* Florian Klink <flokli@flokli.de>
* Bill Fraser <bill@wfraser.dev>
* Thearas <thearas850@gmail.com>
* Filipe Herculano <fifo_@live.com>
* Russ Bubley <russ.bubley@googlemail.com>
* Paul Collins <paul.collins@canonical.com>
* Tomasz Melcer <liori@exroot.org>
* itsHenry <2671230065@qq.com>
* Ke Wang <me@ke.wang>
* AThePeanut4 <49614525+AThePeanut4@users.noreply.github.com>
* Tobias Markus <tobbi.bugs@googlemail.com>


@@ -289,6 +289,13 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is equivalent to using `env_auth`.
#### Anonymous {#anonymous}
If you want to access resources with public anonymous access then set
`account` only. You can do this without making an rclone config:
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
### Standard options
@@ -851,7 +858,7 @@ Properties:
#### --azureblob-description
Description of the remote
Description of the remote.
Properties:


@@ -689,7 +689,7 @@ Properties:
#### --azurefiles-description
Description of the remote
Description of the remote.
Properties:


@@ -164,12 +164,21 @@ used.
### Versions
When rclone uploads a new version of a file it creates a [new version
The default setting of B2 is to keep old versions of files. This means
when rclone uploads a new version of a file it creates a [new version
of it](https://www.backblaze.com/docs/cloud-storage-file-versions).
Likewise when you delete a file, the old version will be marked hidden
and still be available. Conversely, you may opt in to a "hard delete"
of files with the `--b2-hard-delete` flag which would permanently remove
the file instead of hiding it.
and still be available.
Whether B2 keeps old versions of files or not can be adjusted on a per
bucket basis using the "Lifecycle settings" on the B2 control panel or
when creating the bucket using the [--b2-lifecycle](#b2-lifecycle)
flag or after creation using the [rclone backend lifecycle](#lifecycle)
command.
You may opt in to a "hard delete" of files with the `--b2-hard-delete`
flag which permanently removes files on deletion instead of hiding
them.
Old versions of files, where available, are visible using the
`--b2-versions` flag.
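For example, assuming a remote named `b2:` with a bucket called `bucket`, old versions can be listed alongside the current files and the bucket's lifecycle rules inspected with:

    rclone ls --b2-versions b2:bucket
    rclone backend lifecycle b2:bucket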
@@ -647,7 +656,7 @@ Properties:
#### --b2-description
Description of the remote
Description of the remote.
Properties:


@@ -475,7 +475,7 @@ Properties:
#### --box-description
Description of the remote
Description of the remote.
Properties:


@@ -666,7 +666,7 @@ Properties:
#### --cache-description
Description of the remote
Description of the remote.
Properties:


@@ -5,6 +5,159 @@ description: "Rclone Changelog"
# Changelog
## v1.67.0 - 2024-06-14
[See commits](https://github.com/rclone/rclone/compare/v1.66.0...v1.67.0)
* New backends
* [uloz.to](/ulozto/) (iotmaestro)
* New S3 providers
* [Magalu Object Storage](/s3/#magalu) (Bruno Fernandes)
* New commands
* [gitannex](/commands/rclone_gitannex/): Enables git-annex to store and retrieve content from an rclone remote (Dan McArdle)
* New Features
* accounting: Add deleted files total size to status summary line (Kyle Reynolds)
* build
* Fix `CVE-2023-45288` by upgrading `golang.org/x/net` (Nick Craig-Wood)
* Fix `CVE-2024-35255` by upgrading `github.com/Azure/azure-sdk-for-go/sdk/azidentity` to 1.6.0 (dependabot)
* Convert source files with CRLF to LF (albertony)
* Update all dependencies (Nick Craig-Wood)
* doc updates (albertony, Alex Garel, Dave Nicolson, Dominik Joe Pantůček, Eric Wolf, Erisa A, Evan Harris, Evan McBeth, Gachoud Philippe, hidewrong, jakzoe, jumbi77, kapitainsky, Kyle Reynolds, Lewis Hook, Nick Craig-Wood, overallteach, pawsey-kbuckley, Pieter van Oostrum, psychopatt, racerole, static-moonlight, Warrentheo, yudrywet, yumeiyin)
* ncdu: Do not quit on Esc to aid usability (Katia Esposito)
* rcserver: Set `ModTime` for dirs and files served by `--rc-serve` (Nikita Shoshin)
* Bug Fixes
* bisync: Add integration tests against all backends and fix many many problems (nielash)
* config: Fix default value for `description` (Nick Craig-Wood)
* copy: Fix `nil` pointer dereference when corrupted on transfer with `nil` dst (nielash)
* fs
* Improve JSON Unmarshalling for `Duration` types (Kyle Reynolds)
* Close the CPU profile on exit (guangwu)
* Replace `/bin/bash` with `/usr/bin/env bash` (Florian Klink)
* oauthutil: Clear client secret if client ID is set (Michael Terry)
* operations
* Rework `rcat` so that it doesn't call the `--metadata-mapper` twice (Nick Craig-Wood)
* Ensure `SrcFsType` is set correctly when using `--metadata-mapper` (Nick Craig-Wood)
* Fix "optional feature not implemented" error with a crypted sftp bug (Nick Craig-Wood)
* Fix very long file names when using copy with `--partial` (Nick Craig-Wood)
* Fix retries downloading too much data with certain backends (Nick Craig-Wood)
* Fix move when dst is nil and fdst is case-insensitive (nielash)
* Fix lsjson `--encrypted` when using `--crypt-XXX` parameters (Nick Craig-Wood)
* Fix missing metadata for multipart transfers to local disk (Nick Craig-Wood)
* Fix incorrect modtime on some multipart transfers (Nick Craig-Wood)
* Fix hashing problem in integration tests (Nick Craig-Wood)
* rc
* Fix stats groups being ignored in `operations/check` (Nick Craig-Wood)
* Fix incorrect `Content-Type` in HTTP API (Kyle Reynolds)
* serve s3
* Fix `Last-Modified` header format (Butanediol)
* Fix in-memory metadata storing wrong modtime (nielash)
* Fix XML of error message (Nick Craig-Wood)
* serve webdav: Fix webdav with `--baseurl` under Windows (Nick Craig-Wood)
* serve dlna: Make `BrowseMetadata` more compliant (albertony)
* serve http: Added `Content-Length` header when HTML directory is served (Sunny)
* sync
* Don't sync directories if they haven't been modified (Nick Craig-Wood)
* Don't test reading metadata if we can't write it (Nick Craig-Wood)
* Fix case normalisation (problem on s3) (Nick Craig-Wood)
* Fix management of empty directories to make it more accurate (Nick Craig-Wood)
* Fix creation of empty directories when `--create-empty-src-dirs=false` (Nick Craig-Wood)
* Fix directory modification times not being set (Nick Craig-Wood)
* Fix "failed to update directory timestamp or metadata: directory not found" (Nick Craig-Wood)
* Fix expecting SFTP to have MkdirMetadata method: optional feature not implemented (Nick Craig-Wood)
* test info: Improve cleanup of temp files (Kyle Reynolds)
* touch: Fix using `-R` on certain backends (Nick Craig-Wood)
* Mount
* Add `--direct-io` flag to force uncached access (Nick Craig-Wood)
* VFS
* Fix download loop when file size shrunk (Nick Craig-Wood)
* Fix renaming a directory (nielash)
* Local
* Add `--local-time-type` to use `mtime`/`atime`/`btime`/`ctime` as the time (Nick Craig-Wood)
* Allow `SeBackupPrivilege` and/or `SeRestorePrivilege` to work on Windows (Charles Hamilton)
* Azure Blob
* Fix encoding issue with dir path comparison (nielash)
* B2
* Add new [cleanup](/b2/#cleanup) and [cleanup-hidden](/b2/#cleanup-hidden) backend commands. (Pat Patterson)
* Update B2 URLs to new home (Nick Craig-Wood)
* Chunker
* Fix startup when root points to composite multi-chunk file without metadata (nielash)
* Fix case-insensitive comparison on local without metadata (nielash)
* Fix "finalizer already set" error (nielash)
* Drive
* Add [backend query](/drive/#query) command for general purpose querying of files (John-Paul Smith)
* Stop sending notification emails when setting permissions (Nick Craig-Wood)
* Fix server side copy with metadata from my drive to shared drive (Nick Craig-Wood)
* Set all metadata permissions and return error summary instead of stopping on the first error (Nick Craig-Wood)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Fix description being overwritten on server side moves (Nick Craig-Wood)
* Allow setting metadata to fail if `failok` flag is set (Nick Craig-Wood)
* Fix panic when using `--metadata-mapper` on large google doc files (Nick Craig-Wood)
* Dropbox
* Add `--dropbox-root-namespace` to override the root namespace (Bill Fraser)
* Google Cloud Storage
* Fix encoding issue with dir path comparison (nielash)
* Hdfs
* Fix f.String() not including subpath (nielash)
* Http
* Add `--http-no-escape` to not escape URL metacharacters in path names (Kyle Reynolds)
* Jottacloud
* Set metadata on server side copy and move (albertony)
* Linkbox
* Fix working with names longer than 8-25 Unicode chars (Vitaly)
* Fix list paging and optimize synchronization (gvitali)
* Mailru
* Attempt to fix throttling by increasing min sleep to 100ms (Nick Craig-Wood)
* Memory
* Fix dst mutating src after server-side copy (nielash)
* Fix deadlock in operations.Purge (nielash)
* Fix incorrect list entries when rooted at subdirectory (nielash)
* Onedrive
* Add `--onedrive-hard-delete` to permanently delete files (Nick Craig-Wood)
* Make server-side copy work in more scenarios (YukiUnHappy)
* Fix "unauthenticated: Unauthenticated" errors when downloading (Nick Craig-Wood)
* Fix `--metadata-mapper` being called twice if writing permissions (nielash)
* Set all metadata permissions and return error summary instead of stopping on the first error (nielash)
* Make errors setting permissions into no retry errors (Nick Craig-Wood)
* Skip writing permissions with 'owner' role (nielash)
* Fix references to deprecated permissions properties (nielash)
* Add support for group permissions (nielash)
* Allow setting permissions to fail if `failok` flag is set (Nick Craig-Wood)
* Pikpak
* Make getFile() usage more efficient to avoid the download limit (wiserain)
* Improve upload reliability and resolve potential file conflicts (wiserain)
* Implement configurable chunk size for multipart upload (wiserain)
* Protondrive
* Don't auth with an empty access token (Michał Dzienisiewicz)
* Qingstor
* Disable integration tests as test account suspended (Nick Craig-Wood)
* Quatrix
* Fix f.String() not including subpath (nielash)
* S3
* Add new AWS region `il-central-1` Tel Aviv (yoelvini)
* Update Scaleway's configuration options (Alexandre Lavigne)
* Ceph: fix bucket creation quirks to avoid trying to create an existing bucket (Thomas Schneider)
* Fix encoding issue with dir path comparison (nielash)
* Fix 405 error on HEAD for delete marker with versionId (nielash)
* Validate `--s3-copy-cutoff` size before copy (hoyho)
* SFTP
* Add `--sftp-connections` to limit the maximum number of connections (Tomasz Melcer)
* Storj
* Update `storj.io/uplink` to latest release (JT Olio)
* Update bio on request (Nick Craig-Wood)
* Swift
* Implement `--swift-use-segments-container` to allow >5G files on Blomp (Nick Craig-Wood)
* Union
* Fix deleting dirs when all remotes can't have empty dirs (Nick Craig-Wood)
* WebDAV
* Fix setting modification times erasing checksums on owncloud and nextcloud (nielash)
* owncloud: Add `--webdav-owncloud-exclude-mounts` which allows excluding mounted folders when listing remote resources (Thomas Müller)
* Zoho
* Fix throttling problem when uploading files (Nick Craig-Wood)
* Use cursor listing for improved performance (Nick Craig-Wood)
* Retry reading info after upload if size wasn't returned (Nick Craig-Wood)
* Remove simple file names complication which is no longer needed (Nick Craig-Wood)
* Sleep for 60 seconds if rate limit error received (Nick Craig-Wood)
## v1.66.0 - 2024-03-10
[See commits](https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0)


@@ -479,7 +479,7 @@ Properties:
#### --chunker-description
Description of the remote
Description of the remote.
Properties:


@@ -160,7 +160,7 @@ Here are the Advanced options specific to combine (Combine several remotes into
#### --combine-description
Description of the remote
Description of the remote.
Properties:


@@ -1,8 +1,6 @@
---
title: "rclone"
description: "Show help for rclone commands, flags and backends."
slug: rclone
url: /commands/rclone/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/ and as part of making a release run "make commanddocs"
---
## rclone
@@ -125,7 +123,7 @@ rclone [flags]
--box-token-url string Token server url
--box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50 MiB) (default 50Mi)
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable.
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--ca-cert stringArray CA certificate used to verify servers
--cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage (default 1m0s)
@@ -257,6 +255,7 @@ rclone [flags]
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-root-namespace string Specify a different Dropbox namespace ID to use as the root for all paths
--dropbox-shared-files Instructs rclone to work on individual shared files
--dropbox-shared-folders Instructs rclone to work on shared folders
--dropbox-token string OAuth Access Token as a JSON blob
@@ -384,6 +383,7 @@ rclone [flags]
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
--http-description string Description of the remote
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-escape Do not escape URL metacharacters in path names
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
@@ -432,7 +432,7 @@ rclone [flags]
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-password string Your password for rclone generate one at https://app.koofr.net/app/admin/preferences/password (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
@@ -449,6 +449,7 @@ rclone [flags]
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-nounc Disable UNC (long path names) conversion on Windows
--local-time-type mtime|atime|btime|ctime Set what kind of time is returned (default mtime)
--local-unicode-normalization Apply unicode NFC normalization to paths and filenames
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
@@ -530,6 +531,7 @@ rclone [flags]
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hard-delete Permanently delete files on removal
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
@@ -587,6 +589,7 @@ rclone [flags]
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
@@ -597,6 +600,7 @@ rclone [flags]
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--premiumizeme-auth-url string Auth server URL
@@ -665,6 +669,7 @@ rclone [flags]
--rc-realm string Realm for authentication
--rc-salt string Password hashing salt (default "dlPL2MqE")
--rc-serve Enable the serving of remote objects
--rc-serve-no-modtime Don't read the modification time (can speed things up)
--rc-server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
--rc-server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--rc-template string User-specified template
@@ -745,6 +750,7 @@ rclone [flags]
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-connections int Maximum number of SFTP simultaneous connections, 0 for unlimited
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
@@ -840,7 +846,7 @@ rclone [flags]
--swift-auth string Authentication URL for server (OS_AUTH_URL)
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-chunk-size SizeSuffix Above this size files will be chunked (default 5Gi)
--swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
@@ -856,6 +862,7 @@ rclone [flags]
--swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
--swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
--swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
--swift-use-segments-container Tristate Choose destination for large object segments (default unset)
--swift-user string User name to log in (OS_USERNAME)
--swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID)
--syslog Use Syslog for logging
@@ -867,6 +874,13 @@ rclone [flags]
--track-renames When synchronizing, track file renames and do a server-side move if possible
--track-renames-strategy string Strategies to use when synchronizing using track-renames hash|modtime|leaf (default "hash")
--transfers int Number of file transfers to run in parallel (default 4)
--ulozto-app-token string The application token identifying the app. An app API key can be either found in the API
--ulozto-description string Description of the remote
--ulozto-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--ulozto-list-page-size int The size of a single page for list commands. 1-500 (default 500)
--ulozto-password string The password for the user (obscured)
--ulozto-root-folder-slug string If set, rclone will use this folder as the root folder for all operations. For example,
--ulozto-username string The username of the principal to operate as
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
@@ -883,7 +897,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.66.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.67.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
@@ -892,6 +906,7 @@ rclone [flags]
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
--webdav-owncloud-exclude-mounts Exclude ownCloud mounted storages
--webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
@@ -937,6 +952,7 @@ rclone [flags]
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone gitannex](/commands/rclone_gitannex/) - Speaks with git-annex over stdin/stdout.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.


@@ -1,8 +1,6 @@
---
title: "rclone about"
description: "Get quota information from the remote."
slug: rclone_about
url: /commands/rclone_about/
versionIntroduced: v1.41
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/about/ and as part of making a release run "make commanddocs"
---


@@ -1,8 +1,6 @@
---
title: "rclone authorize"
description: "Remote authorization."
slug: rclone_authorize
url: /commands/rclone_authorize/
versionIntroduced: v1.27
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/authorize/ and as part of making a release run "make commanddocs"
---


@@ -1,9 +1,6 @@
---
title: "rclone backend"
description: "Run a backend-specific command."
slug: rclone_backend
url: /commands/rclone_backend/
groups: Important
versionIntroduced: v1.52
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs"
---


@@ -1,9 +1,6 @@
---
title: "rclone bisync"
description: "Perform bidirectional synchronization between two paths."
slug: rclone_bisync
url: /commands/rclone_bisync/
groups: Filter,Copy,Important
status: Beta
versionIntroduced: v1.58
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/bisync/ and as part of making a release run "make commanddocs"


@@ -1,9 +1,6 @@
---
title: "rclone cat"
description: "Concatenates any files and sends them to stdout."
slug: rclone_cat
url: /commands/rclone_cat/
groups: Filter,Listing
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/cat/ and as part of making a release run "make commanddocs"
---


@@ -1,9 +1,6 @@
---
title: "rclone check"
description: "Checks the files in the source and destination match."
slug: rclone_check
url: /commands/rclone_check/
groups: Filter,Listing,Check
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/check/ and as part of making a release run "make commanddocs"
---
# rclone check

Some files were not shown because too many files have changed in this diff.