mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


163 Commits

Author SHA1 Message Date
Nick Craig-Wood
705572d400 webdav: nextcloud: fix must use /dav/files/USER endpoint not /webdav error
Before this change the regexp validating the endpoint URL was a bit
too strict, allowing only /dav/files/USER.

This patch relaxes it to allow /dav/files/USER/dir/subdir etc.
2023-07-01 15:53:57 +01:00
Nick Craig-Wood
00512e1303 Start v1.64.0-DEV development 2023-06-30 15:39:03 +01:00
Nick Craig-Wood
fcfbd3153b docs: website: replace google analytics with plausible analytics 2023-06-30 14:32:53 +01:00
Nick Craig-Wood
9a8075b682 docs: rename donate page to sponsor page and rework 2023-06-30 14:32:53 +01:00
Sawada Tsunayoshi
996037bee9 docs: fixed typo in exclude example in filtering docs (#7097)
The exclude flag instructions had "without" written as "with", which changes the whole meaning of how the exclude flag works.
2023-06-30 15:28:38 +02:00
Nick Craig-Wood
e90537b2e9 Version v1.63.0 2023-06-30 14:11:17 +01:00
Nick Craig-Wood
42c211c6b2 Revert sponsors back to organization 2023-06-30 10:10:05 +01:00
Nick Craig-Wood
3d4f127b33 Revert "union: disable PartialUploads on integration tests failures"
This reverts commit 9065e921c1.

It turns out the problem with the failing fs/sync tests was the
policies being different for search and create, which meant that the
file was being created in one union branch but a different one was
found in another branch.
2023-06-29 21:11:04 +01:00
Misty
ff966b37af dropbox: fix result chans not being taken care of by the deferred function 2023-06-28 19:49:38 +01:00
Nick Craig-Wood
3b6effa81a uptobox: fix rmdir declaring that directories weren't empty
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory, so it is useless for
seeing if the directory is empty.

This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.

This problem was detected by the integration tests.
2023-06-28 17:27:43 +01:00
Nick Craig-Wood
8308d5d640 putio: fix server side copy failures (400 errors)
For some unknown reason the API sometimes returns a "name already
exists" error on a server side copy.

    {
      "error_id": null,
      "error_message": "Name already exist",
      "error_type": "NAME_ALREADY_EXIST",
      "error_uri": "http://api.put.io/v2/docs",
      "extra": {},
      "status": "ERROR",
      "status_code": 400
    }

This patch uploads to a temporary name then renames it which works
around the problem.

This was spotted by the integration tests.
2023-06-28 16:45:35 +01:00
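A minimal sketch of the workaround pattern (copyFn and renameFn stand in for the put.io API calls; all names here are illustrative, not rclone's):

    package sketch

    import (
        "context"
        "crypto/rand"
        "encoding/hex"
    )

    // copyViaTempName performs the server side copy to a random temporary
    // name, then renames the result into place, sidestepping the spurious
    // NAME_ALREADY_EXIST error.
    func copyViaTempName(ctx context.Context, src, dst string,
        copyFn, renameFn func(ctx context.Context, from, to string) error) error {
        var b [8]byte
        if _, err := rand.Read(b[:]); err != nil {
            return err
        }
        tmp := dst + ".tmp-" + hex.EncodeToString(b[:])
        if err := copyFn(ctx, src, tmp); err != nil {
            return err
        }
        return renameFn(ctx, tmp, dst)
    }
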
Nick Craig-Wood
14024936a8 putio: fix modification times not being preserved for server side copy and move
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.

This patch explicitly sets the modtime after the server side move or
copy.
2023-06-28 11:03:19 +01:00
Nick Craig-Wood
9065e921c1 union: disable PartialUploads on integration tests failures
In this commit we enabled PartialUploads for the union backend.

3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads

This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
2023-06-27 17:31:01 +01:00
Nick Craig-Wood
99788b605e sharefile: disable streamed transfers as they no longer work
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:

    upload failed: file size does not match (-2)

This was discovered by the integration tests.
2023-06-27 17:08:37 +01:00
Nick Craig-Wood
d4cc3760e6 putio: fix uploading to the wrong object on Update with overridden remote name
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the putio backend and this patch fixes the
problem.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
a6acbd1844 uptobox: fix Update returning the wrong object
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.

This was discovered in the integration tests.

This patch fixes the problem by deleting the duplicate object before
we look for the new object.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
389565f5e2 storj: fix uploading to the wrong object on Update with overridden remote name
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.

65b2e378e0 drive: fix incorrect remote after Update on object

This test was tripped by the Storj backend and this patch fixes the
problem.
2023-06-27 16:02:33 +01:00
Nick Craig-Wood
4b4198522d storj: fix "uplink: too many requests" errors when uploading to the same file
Storj has a rate limit of 1 per second when uploading to the same
file.

This was being tripped by the integration tests.

This patch fixes it by detecting the error and sleeping for 1 second
before retrying.

See: https://github.com/storj/uplink/issues/149
2023-06-27 16:02:33 +01:00
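A hedged sketch of such a retry loop (the error-text match and the five-attempt cap are assumptions; rclone's real pacer machinery is more involved):

    package sketch

    import (
        "strings"
        "time"
    )

    // uploadWithPacing retries an upload that trips Storj's limit of one
    // write per second to the same file, sleeping a second between tries.
    func uploadWithPacing(upload func() error) (err error) {
        for try := 0; try < 5; try++ {
            err = upload()
            if err == nil || !strings.Contains(err.Error(), "too many requests") {
                return err
            }
            time.Sleep(time.Second)
        }
        return err
    }
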
Nick Craig-Wood
f7665300c0 fstests: allow ObjectUpdate test to retry upload 2023-06-27 16:02:33 +01:00
Nick Craig-Wood
73beae147f webdav: Fix modtime on server side copy for owncloud and nextcloud
Before this change a server side copy did not preserve the modtime.

This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.

This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` wasn't accepted.

This problem was discovered in the integration tests.
2023-06-26 20:23:28 +01:00
Nick Craig-Wood
92f8e476b7 Add mac-15 to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
5849148d51 Add zzq to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
37853ec412 Add Peter Fern to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
ae7ff28714 Add danielkrajnik to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
9873f4bc74 Add Mariusz Suchodolski to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
1b200bf69a Add Paulo Schreiner to contributors 2023-06-26 20:23:28 +01:00
Nick Craig-Wood
e3fa6fe3cc swift: fix code formatting 2023-06-26 20:23:28 +01:00
mac-15
9e1b3861e7 docs: add blomp cloud storage guide 2023-06-26 17:49:27 +01:00
zzq
e9a753f678 s3: add Qiniu KODO quirks virtualHostStyle is false 2023-06-26 17:47:27 +01:00
Dimitri Papadopoulos
708391a5bf backend: fix misspellings found by codespell 2023-06-26 14:34:52 +01:00
Peter Fern
1cfed18aa7 http: add client certificate user auth middleware
This populates the authenticated user from the client certificate
common name.

Also added tests for the existing client certificate functionality.
2023-06-26 14:33:53 +01:00
kapitainsky
7751d5a00b rc: config/listremotes include from env vars
Fixes: 
#6540

Discussed:
https://forum.rclone.org/t/environment-variable-config-not-used-for-remote-control/39014
2023-06-26 12:30:44 +01:00
danielkrajnik
8274712c2c docs: s3: fix example for restoring single objects
See: https://forum.rclone.org/t/cant-restore-files-from-aws-glacier-deep-only-directories/39258/3
2023-06-26 11:41:15 +01:00
Mariusz Suchodolski
625a564ba3 docs: faq: add solution for port opening issues on Windows 2023-06-25 11:20:54 +01:00
Ehsan Tadayon
2dd2072cdb s3: Fix Arvancloud Domain and region changes and alphabetise the provider 2023-06-25 11:01:41 +01:00
kapitainsky
998d1d1727 docs: listremotes also includes remotes from env vars 2023-06-24 15:46:23 +01:00
Paulo Schreiner
fcb912a664 fs: allow setting a write buffer for multithread
When multi-thread downloading is enabled, rclone used to send a write
to disk after every read, resulting in a lot of small writes to
different locations of the file.

Depending on the underlying filesystem or device, it can be more
efficient to send bigger writes.
2023-06-23 18:44:43 +01:00
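A rough sketch of the idea, not rclone's actual implementation: coalesce contiguous writes from one download stream and flush them in bigger chunks.

    package sketch

    import "io"

    // bufferedWriterAt buffers contiguous WriteAt calls from a single
    // download stream and flushes them to the underlying file (typically
    // an *os.File) in larger chunks once a threshold is reached.
    type bufferedWriterAt struct {
        w         io.WriterAt
        off       int64 // file offset where buf begins
        buf       []byte
        threshold int // e.g. 128 KiB
    }

    func (b *bufferedWriterAt) WriteAt(p []byte, off int64) (int, error) {
        if off != b.off+int64(len(b.buf)) {
            // Non-contiguous write: flush what we have and restart at off.
            if err := b.Flush(); err != nil {
                return 0, err
            }
            b.off = off
        }
        b.buf = append(b.buf, p...)
        if len(b.buf) >= b.threshold {
            if err := b.Flush(); err != nil {
                return 0, err
            }
        }
        return len(p), nil
    }

    // Flush sends any buffered bytes to the file in a single WriteAt.
    func (b *bufferedWriterAt) Flush() error {
        if len(b.buf) == 0 {
            return nil
        }
        _, err := b.w.WriteAt(b.buf, b.off)
        b.off += int64(len(b.buf))
        b.buf = b.buf[:0]
        return err
    }
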
Nick Craig-Wood
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

See #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
72b79504ea azureblob: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

See #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
3e2a606adb gcs: fix "Entry doesn't belong in directory" errors when using directory markers
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.

This caused higher layers of rclone to emit the error above.

Fixes #7038
2023-06-23 18:01:11 +01:00
Nick Craig-Wood
95a6e3e338 Add Stanislav Gromov to contributors 2023-06-23 18:01:11 +01:00
Anagh Kumar Baranwal
d06bb55f3f mount: Added _netdev to the example mount so it gets treated as a remote-fs rather than local-fs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-06-23 17:37:00 +01:00
Stanislav Gromov
9f3694cea3 docs: drive: fix typo 2023-06-23 14:40:47 +01:00
Nick Craig-Wood
2c50f26c36 mount: fix mount failure on macOS with on the fly remote
This commit

3567a47258 fs: make ConfigString properly reverse suffixed file systems

made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name, it started to make volume
names containing illegal characters which couldn't be mounted by macOS.

This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.

Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
2023-06-23 14:12:03 +01:00
Nick Craig-Wood
22d6c8d30d Add URenko to contributors 2023-06-23 14:12:03 +01:00
Nick Craig-Wood
96fb75c5a7 Add Sam Lai to contributors 2023-06-23 14:12:03 +01:00
URenko
acd67edf9a docs: remove "After" in systemd mount example again 2023-06-22 18:03:04 +01:00
Sam Lai
b26db8e640 accounting: bwlimit signal handler should always start
The SIGUSR2 signal handler for bandwidth limits currently only starts
if rclone is started at a time when a bandwidth limit applies. This
means that if rclone starts _outside_ such a time, i.e. with no
bandwidth limits, then enters a time where bandwidth limits do apply,
it will not be possible to use SIGUSR2 to toggle it.

This fixes that by always starting the signal handler, but only
toggling the limiter if there is a bandwidth limit configured.
2023-06-22 17:59:24 +01:00
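A minimal sketch of the approach (Unix-only; haveLimit and toggle are hypothetical hooks, not rclone's accounting API):

    package sketch

    import (
        "os"
        "os/signal"
        "syscall"
    )

    // startLimitToggle always installs the SIGUSR2 handler, but only
    // toggles the limiter when a bandwidth limit is actually configured.
    func startLimitToggle(haveLimit func() bool, toggle func()) {
        ch := make(chan os.Signal, 1)
        signal.Notify(ch, syscall.SIGUSR2)
        go func() {
            for range ch {
                if haveLimit() {
                    toggle()
                }
            }
        }()
    }
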
Nick Craig-Wood
da955e5d4f operations: remove partials when the copy fails
Before this change we were only removing partials when the file was
corrupted rather than whenever the copy failed.
2023-06-21 22:56:05 +01:00
Nick Craig-Wood
4f8dab8bce zoho: fix downloads with Range: header returning the wrong data
Zoho has started returning the results from Range: requests with a 200
response code rather than the technically correct 206 response code.

Before this change this triggered workaround code to deal with Zoho
not obeying Range: requests properly.

This fix tests the response for a Content-Range: header and, if it
exists, assumes it is a valid reply to the Range: request despite the
status being 200.

This problem was spotted by the integration tests.
2023-06-14 17:43:26 +01:00
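The check can be illustrated in a few lines of Go (a sketch of the rule described above, not rclone's code):

    package sketch

    import "net/http"

    // isRangeReply reports whether resp satisfies a Range: request, even
    // when the server wrongly answers 200 instead of 206, by looking for
    // a Content-Range header.
    func isRangeReply(resp *http.Response) bool {
        if resp.StatusCode == http.StatusPartialContent { // the correct 206 reply
            return true
        }
        return resp.StatusCode == http.StatusOK && resp.Header.Get("Content-Range") != ""
    }
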
Nick Craig-Wood
000ddc4951 s3: fix versions tests when running on minio 2023-06-14 17:30:36 +01:00
Nick Craig-Wood
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This means that, for example, wrapping a sftp backend with crypt will
upload to a temporary name and then rename unless disabled with
--inplace.

See: https://forum.rclone.org/t/backup-versioning/38978/7
2023-06-14 10:52:03 +01:00
kapitainsky
e1162ec440 docs: clarify --server-side-across-configs 2023-06-13 17:58:27 +01:00
Nick Craig-Wood
30cccc7101 cache: fix backends shutting down when in use when used via the rc
Before this fix, if a long running task (eg a copy) was started by the
rc then the backend could expire before the copy had finished.

The typical symptom was with the dropbox backend giving "batcher is
shutting down" errors.

This patch fixes the problem by pinning the backend until the job has
finished.

See: https://forum.rclone.org/t/uploads-start-repeatedly-failing-after-a-while-using-rc-sync-copy-vs-rclone-copy-for-dropbox/38873/
2023-06-13 15:48:20 +01:00
Nick Craig-Wood
1f5a29209e rc: add Job to ctx so it can be used elsewhere
See: https://forum.rclone.org/t/uploads-start-repeatedly-failing-after-a-while-using-rc-sync-copy-vs-rclone-copy-for-dropbox/38873/
2023-06-13 15:48:20 +01:00
Nick Craig-Wood
45255bccb3 accounting: fix Prometheus metrics to be the same as core/stats
In 04aa6969a4 we updated the displayed speed to be a rolling
average in core/stats and the progress output but we didn't update the
Prometheus metrics.

This patch updates the Prometheus metrics too.

Fixes #7053
2023-06-12 17:42:29 +01:00
Nick Craig-Wood
055206c4ee yandex: fix 400 Bad Request on transfer failure
Before this fix, if the upload failed for some reason the yandex
backend would attempt to retry it itself, which would fail immediately
with 400 Bad Request.

Normally we retry uploads at a higher level so they can be done with
new data and this patch does that.

See #7044
2023-06-11 11:11:43 +01:00
Nick Craig-Wood
f3070b82bc Add douchen to contributors 2023-06-11 11:11:43 +01:00
douchen
7e2deffc62 filter: fix deadlock with errors on --files-from
Before this change, when doing a recursive directory listing with
`--files-from`, if more than `--checkers` files errored (other than
with file not found) then rclone would deadlock.

This fixes the problem by exiting on the first error.
2023-06-10 15:53:08 +01:00
Nick Craig-Wood
ae3ff50580 dropbox: implement --dropbox-pacer-min-sleep flag
See: https://forum.rclone.org/t/combine-mount-options-query/38080
2023-06-10 14:57:26 +01:00
Nick Craig-Wood
6486ba6344 operations: remove partially uploaded files on exit when not using --inplace
Before this change partially uploaded files (when --inplace is not in
effect) would be left lying around in the file system if rclone was
killed in the middle of a transfer.

This adds an exit handler to remove the partially uploaded file, and
removes the handler when the file is complete.
2023-06-10 14:55:05 +01:00
Nick Craig-Wood
7842000f8a backend: for command not found errors, hint to look in the underlying remote
See: https://forum.rclone.org/t/rclone-cleanup-no-way-to-delete-pending-uploads-newer-than-24-hours/38416/6
2023-06-10 14:44:01 +01:00
Nick Craig-Wood
1f9c962183 operations: reopen downloads on error when using check --download and cat
Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.

This adds a new function operations.Open which should be called
instead of fs.Object.Open to open a reliable stream of data and
changes all call sites to use that.

This means `rclone check --download` and `rclone cat` will re-open
files on failures.

See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
2023-06-10 14:42:29 +01:00
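A rough sketch of the re-opening idea (not the actual operations.Open; open would typically issue a ranged GET, and the three-retry cap is an assumption):

    package sketch

    import "io"

    // reOpenReader drops its stream on a read error and re-opens the
    // source at the current offset on the next Read.
    type reOpenReader struct {
        open    func(offset int64) (io.ReadCloser, error)
        rc      io.ReadCloser
        offset  int64
        retries int
    }

    func (r *reOpenReader) Read(p []byte) (n int, err error) {
        for {
            if r.rc == nil {
                if r.rc, err = r.open(r.offset); err != nil {
                    return 0, err
                }
            }
            n, err = r.rc.Read(p)
            r.offset += int64(n)
            if err == nil || err == io.EOF || r.retries >= 3 {
                return n, err
            }
            // Transient failure: close and re-open at offset next time.
            _ = r.rc.Close()
            r.rc, r.retries = nil, r.retries+1
            if n > 0 {
                return n, nil
            }
        }
    }
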
Nick Craig-Wood
279d9ecc56 operations: fix pcloud can't set modified time
Before this change we tested special errors for straight equality.

This works for all normal backends, but the union backend may return
wrapped errors which contain the special error types.

In particular if a pcloud backend was part of a union when attempting
to set modification times the fs.ErrorCantSetModTime return wasn't
understood because it was wrapped in a union.Error.

This fixes the problem by using errors.Is instead in all the
comparisons in operations.

See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
2023-06-10 14:39:41 +01:00
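The difference is easy to demonstrate (errCantSetModTime stands in for fs.ErrorCantSetModTime):

    package main

    import (
        "errors"
        "fmt"
    )

    var errCantSetModTime = errors.New("can't set modified time")

    func main() {
        wrapped := fmt.Errorf("union: %w", errCantSetModTime)
        fmt.Println(wrapped == errCantSetModTime)          // false: equality misses wrapped errors
        fmt.Println(errors.Is(wrapped, errCantSetModTime)) // true: errors.Is unwraps
    }
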
Nick Craig-Wood
31773ecfbf union: allow errors to be unwrapped for inspection
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.

This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.

It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.

See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
2023-06-10 14:39:41 +01:00
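A minimal sketch of the pattern (not the union backend's actual type):

    package main

    import (
        "errors"
        "fmt"
    )

    // Errors is a multi-error; Unwrap (the go1.20 form returning a
    // slice) lets errors.Is and errors.As inspect the contained errors.
    type Errors []error

    func (es Errors) Error() string   { return fmt.Sprintf("%d errors", len(es)) }
    func (es Errors) Unwrap() []error { return es }

    var sentinel = errors.New("sentinel")

    func main() {
        var err error = Errors{errors.New("other"), sentinel}
        fmt.Println(errors.Is(err, sentinel)) // true, via Unwrap
    }
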
kapitainsky
666e34cf69 s3: docs: old broken link updated 2023-06-09 18:15:54 +01:00
Nick Craig-Wood
5a84a08b3f build: fix build failure installing nfpm
Before this fix we used the bin/get-github-release.go script to
install nfpm.

However this script fails scraping the downloads page when the target
has more than a few download options. The alternative would be using
the GitHub API but this needs authentication so as not to be rate
limited on GitHub actions.

This patch switches over to go install which is less efficient but
should work in all circumstances.
2023-06-07 15:41:52 +01:00
Nick Craig-Wood
51a468b2ba genautocomplete: rename to completion with alias to old name
This brings it into line with cobra's naming scheme and stops cobra
writing another "completion" command which doesn't work as well which
confuses users.

See: https://forum.rclone.org/t/rclone-genautocomplete-bash-vs-rclone-completion-bash-neither-works-fully/38431
2023-05-25 14:32:40 +01:00
Nick Craig-Wood
fc798d800c vfs: fix backends being Shutdown too early when startup takes a long time
Before this change if the VFS took more than 5 seconds to initialise
(which can happen if there are a lot of files or a lot of files which
need uploading) the backend was dropped out of the cache before the
VFS was fully created.

This was noticeable in the dropbox backend where the batcher was shut
down too soon, preventing further uploads.

This fixes the problem by Pinning backends before the VFS cache is
created.

https://forum.rclone.org/t/if-more-than-251-elements-in-the-que-to-upload-fails-with-batcher-is-shutting-down/38076/2
2023-05-18 16:16:12 +01:00
Nick Craig-Wood
3115ede1d8 Add kapitainsky to contributors 2023-05-18 16:16:12 +01:00
kapitainsky
7a5491ba7b docs: chunker: fix typo 2023-05-17 17:10:53 +01:00
Nick Craig-Wood
a6cf4989b6 local: fix crash with --metadata on Android
Before this change we called statx which causes a

    SIGSYS: bad system call

fault.

After this we force Android to use fstatat

Fixes #7006
2023-05-17 17:03:26 +01:00
Nick Craig-Wood
f489b54fa0 operations: ignore partial tests on backends which don't support them 2023-05-17 17:03:26 +01:00
Nick Craig-Wood
6244d1729b Add Tareq Sharafy to contributors 2023-05-17 17:03:19 +01:00
Nick Craig-Wood
e97c2a2832 Add cc to contributors 2023-05-17 17:03:19 +01:00
albertony
56bf9b4a10 Add albertony to maintainers 2023-05-17 15:31:07 +02:00
WeidiDeng
ceb9406c2f serve webdav: implement owncloud checksum and modtime extensions
* implement owncloud checksum and modtime extensions for webdav server
* test rclone webdav server as owncloud webdav
2023-05-15 15:38:00 +01:00
Tareq Sharafy
1f887f7ba0 azblob: doc
Signed-off-by: Tareq Sharafy <tareq.sha@gmail.com>
2023-05-14 12:12:24 +01:00
Tareq Sharafy
7db26b6b34 azblob: support azure workload identities 2023-05-14 12:12:24 +01:00
cc
37a3309438 s3: v3sign: add missing subresource delete
The delete query string parameter must be included when you create the
CanonicalizedResource for a multi-object Delete request.
2023-05-14 11:25:52 +01:00
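For illustration, a sketch of how a V2-style CanonicalizedResource is built with signed subresources such as "delete" included (a generic sketch, not rclone's exact signing code):

    package sketch

    import (
        "sort"
        "strings"
    )

    // canonicalizedResource appends signed subresources, sorted
    // lexicographically, after a "?".
    func canonicalizedResource(bucket, key string, subresources ...string) string {
        res := "/" + bucket + "/" + key
        if len(subresources) > 0 {
            sort.Strings(subresources)
            res += "?" + strings.Join(subresources, "&")
        }
        return res
    }

    // canonicalizedResource("bucket", "", "delete") returns "/bucket/?delete",
    // which is what a multi-object Delete request must sign.
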
Nick Craig-Wood
97be9015a4 union: implement missing methods
Implement these missing methods:

- CleanUp

And declare these ones unimplementable:

- UnWrap
- WrapFs
- SetWrapper
- UserInfo
- Disconnect
- PublicLink
- PutUnchecked
- MergeDirs
- OpenWriterAt
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
487e4f09b3 combine: implement missing methods
Implement these missing methods:

- PublicLink
- PutUnchecked
- MergeDirs
- CleanUp
- OpenWriterAt

And declare these ones unimplementable:

- UnWrap
- WrapFs
- SetWrapper
- UserInfo
- Disconnect

Fixes #6999
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
09a408664d fs: create Overlay feature flag to indicate backend wraps others
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
2023-05-14 11:22:57 +01:00
Nick Craig-Wood
43fa256d56 fs: add OverrideDirectory for overriding path of directory 2023-05-14 11:22:57 +01:00
wiserain
6859c04772 pikpak: add validity check when using a media link
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.

Now we check the validity of a media link using the `fid`
parameter in the link URL.

Fixes #6992
2023-05-13 03:41:59 +09:00
dependabot[bot]
38a0539096 build(deps): bump github.com/cloudflare/circl from 1.1.0 to 1.3.3
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.1.0 to 1.3.3.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.1.0...v1.3.3)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-12 14:56:45 +01:00
Nick Craig-Wood
2cd85813b4 sftp: don't check remote points to a file if it ends with /
This avoids calling stat on the root directory, which saves a call
and avoids a request that some servers don't like.

See: https://forum.rclone.org/t/stat-failed-error-on-sftp/38045
2023-05-11 07:58:20 +01:00
Nick Craig-Wood
e6e6069ecf sftp: don't stat directories before listing them
Before this change we ran stat on the directory to see if it existed.

Not only is this inefficient, it isn't allowed by some SFTP servers.

See: https://forum.rclone.org/t/stat-failed-error-on-sftp/38045
2023-05-10 15:07:21 +01:00
Nick Craig-Wood
fcf47a8393 pikpak: set the NoMultiThreading feature flag to disable multi-thread copy
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.

Now the machinery is in place to use the NoMultiThreading feature flag
instead.

Fixes #6915
2023-05-09 17:46:19 +01:00
Nick Craig-Wood
46a323ae14 operations: Don't use multi-thread copy if the backend doesn't support it #6915 2023-05-09 17:40:58 +01:00
Nick Craig-Wood
72be80ddca fs: add new backend feature NoMultiThreading
This should be set for backends which can't support simultaneous reads
from different offsets in a single file.
2023-05-09 17:40:11 +01:00
Nick Craig-Wood
a9e7e7bcc2 ftp: Fix "501 Not a valid pathname." errors when creating directories
Some servers return a 501 error when using MLST on a non-existing
directory. This patch accepts that response, treating it like
file/directory not found.

I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
2023-05-09 17:27:35 +01:00
Nick Craig-Wood
925c4382e2 ftp: fix "unsupported LIST line" errors on startup
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.

This fixes the problem in the upstream fork.

Fixes #6879
2023-05-09 17:27:35 +01:00
Nick Craig-Wood
08c60c3091 Add Janne Hellsten to contributors 2023-05-09 17:27:35 +01:00
Janne Hellsten
5c594fea90 operations: implement uploads to temp name with --inplace to disable
When copying to a backend which has the PartialUploads feature flag
set and can Move files the file is copied into a temporary name first.
Once the copy is complete, the file is renamed to the real
destination.

This prevents other processes from seeing partially transferred
copies of files and prevents overwriting the old file until the new
one is complete.

This also adds the --inplace flag which can be used to disable the
partial file copy/rename feature.

See #3770

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2023-05-09 16:28:10 +01:00
Janne Hellsten
cc01223535 fs: Implement PartialUploads feature flag
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.

This is set for the following backends

- local
- ftp
- sftp

See #3770
2023-05-09 16:28:10 +01:00
Nick Craig-Wood
aaacfa51a0 sftp: fix move to allow overwriting existing files
Before this change rclone used a normal SFTP rename if present to
implement Move.

However the normal SFTP rename won't overwrite existing files.

This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the destination first before
renaming using the normal SFTP rename.

This isn't normally a problem as rclone always removes any existing
objects first, however to implement non --inplace operations we do
require overwriting an existing file.
2023-05-09 16:28:10 +01:00
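A hedged sketch of the logic using github.com/pkg/sftp (the library rclone's sftp backend builds on), with error handling simplified:

    package sketch

    import "github.com/pkg/sftp"

    // moveOverwriting prefers the POSIX rename extension, which
    // overwrites; otherwise it removes any existing destination and
    // falls back to the plain, non-overwriting SFTP rename.
    func moveOverwriting(c *sftp.Client, oldName, newName string) error {
        if err := c.PosixRename(oldName, newName); err == nil {
            return nil
        }
        _ = c.Remove(newName) // ignore "not found" on a fresh destination
        return c.Rename(oldName, newName)
    }
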
Nick Craig-Wood
c18c66f167 fs: when creating new fs.OverrideRemotes don't layer overrides if not needed 2023-05-09 16:28:10 +01:00
Nick Craig-Wood
d6667d34e7 fs: fix String() method on fs.OverrideRemote
Before this fix it was returning the base object's string rather than
the overridden remote.
2023-05-09 16:28:10 +01:00
Nick Craig-Wood
e649cf4d50 uptobox: add --uptobox-private flag to make all uploaded files private
See: #6946
2023-05-08 17:50:50 +01:00
Nick Craig-Wood
f080ec437c azureblob: empty directory markers #3453 2023-05-07 12:47:09 +01:00
Nick Craig-Wood
4023eaebe0 gcs: fix directory marker code #3453
Use Update to upload the directory markers
2023-05-07 12:47:09 +01:00
Nick Craig-Wood
baf16a65f0 s3: fix directory marker code #3453
Use Update to upload the directory markers
2023-05-07 12:47:09 +01:00
Nick Craig-Wood
70fe2ac852 azureblob: fix azure blob uploads with multiple bits of metadata 2023-05-07 12:47:09 +01:00
Nick Craig-Wood
41cf7faea4 Add Andrei Smirnov to contributors 2023-05-07 12:47:09 +01:00
Andrei Smirnov
f226f2dfb1 s3: add petabox.io to s3 providers 2023-05-05 09:44:25 +01:00
Nick Craig-Wood
31caa019fa rc: fix output of Time values in options/get
Before this change these were output as `{}`; after this change they
are output as time strings like `"2022-03-26T17:48:19Z"`, in standard
JavaScript format.
2023-05-04 15:04:11 +01:00
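For reference, Go's time.Time already marshals to this RFC 3339 form when passed directly to encoding/json; `{}` is the typical output for a value marshalled as a struct with no exported fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2022, 3, 26, 17, 48, 19, 0, time.UTC)
        out, _ := json.Marshal(t)
        fmt.Println(string(out)) // "2022-03-26T17:48:19Z"
    }
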
Nick Craig-Wood
0468375054 uptobox: ensure files and folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
6001f05a12 union: the root folder shows the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
f7b87a8049 koofr: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
d379641021 http: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
84281c9089 dropbox: ensure folders show the modtime configured by --default-time #6986 2023-05-04 15:03:11 +01:00
Nick Craig-Wood
8e2dc069d2 fs: Add --default-time flag to control unknown modtime of files/dirs
Before this patch, files or directories with unknown modtime would
appear with the current date.

When mounted, some systems look at the modification dates of
directories to see if they have changed, and having them change
whenever they drop out of the directory cache is not optimal.

See #6986
2023-05-04 15:03:11 +01:00
Nick Craig-Wood
61d6f538b3 onedrive: add --onedrive-av-override flag to download files flagged as virus
This also produces a warning when rclone detects files have been
blocked because of virus content

    server reports this file is infected with a virus - use --onedrive-av-override to download anyway

Fixes #557
2023-05-03 15:21:30 +01:00
Nick Craig-Wood
65b2e378e0 drive: fix incorrect remote after Update on object
Before this change, when Object.Update was called in the drive
backend, it overwrote the remote with that of the object info.

This is incorrect - the remote doesn't change on Update and this patch
fixes that and introduces a new test to make sure it is correct for
all backends.

This was noticed when doing Update of objects in a nested combine
backend.

See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
2023-05-03 13:51:27 +01:00
Nick Craig-Wood
dea6bdf3df combine: fix goroutine stack overflow on bad object
If the Remote() call failed to do its path adjustment, then it would
recursively call Remote() as part of logging the failure and cause a
stack overflow.

This fixes it by logging the underlying object instead.

See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
2023-05-03 13:51:27 +01:00
Nick Craig-Wood
27eb8c7f45 config: stop config create making invalid config files
If config create was passed a parameter with an embedded \n, it wrote
it straight to the config file, which made the file invalid and caused
a fatal error reloading it.

This stops keys and values with \r and \n being added to the config
file.

See: https://forum.rclone.org/t/how-to-control-bad-remote-creation-which-takes-rclone-down/37856
2023-05-03 11:40:30 +01:00
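A minimal sketch of such a check (names are illustrative, not rclone's):

    package sketch

    import (
        "errors"
        "strings"
    )

    // checkConfigParam rejects keys and values with embedded \r or \n
    // before they can corrupt the INI-style config file.
    func checkConfigParam(s string) error {
        if strings.ContainsAny(s, "\r\n") {
            return errors.New("config names and values may not contain \\r or \\n")
        }
        return nil
    }
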
Nick Craig-Wood
1607344613 Add Adam K to contributors 2023-05-03 11:40:30 +01:00
Adam K
5f138dd822 dropbox: syncing documentation with source for dropbox default batch_timeout - fixes #6984 2023-05-02 17:04:32 +01:00
Anagh Kumar Baranwal
2520c05c4b mount2: disable xattrs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
f7f5e87632 mount2: fixed statfs
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
a7e6806f26 mount2: updated go-fuse version
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:56:47 +01:00
Anagh Kumar Baranwal
d0eb884262 mount: removed unnecessary byte slice allocation for reads
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-04-30 17:54:30 +01:00
WeidiDeng
ae6874170f webdav: set modtime using propset for owncloud and nextcloud 2023-04-28 17:38:49 +01:00
Nick Craig-Wood
f5bab284c3 s3: fix missing "tier" metadata
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.

This made it impossible to filter on tier using the metadata filters.

This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.

See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
2023-04-28 14:33:01 +01:00
Nick Craig-Wood
c75dfa6436 Add Jānis Bebrītis to contributors 2023-04-28 14:33:01 +01:00
Nick Craig-Wood
56eb82bdfc Add Tobias Gion to contributors 2023-04-28 14:33:01 +01:00
Nick Craig-Wood
066e00b470 gcs: empty directory markers #3453
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
2023-04-28 14:31:05 +01:00
Jānis Bebrītis
e0c445d36e gcs: empty directory markers - #3453 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
74652bf318 s3: empty directory markers further work #3453
- Report correct feature flag
- Fix test failures due to that
- Don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
2023-04-28 14:31:05 +01:00
Jānis Bebrītis
b6a95c70e9 s3: empty directory markers - #3453 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
aca7d0fd22 s3: fix potential crash in integration tests 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
12761b3058 fstests: make integration tests work with connection strings in remotes 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
3567a47258 fs: make ConfigString properly reverse suffixed file systems
Before this change we renamed file systems with overridden config with
{suffix}.

However this meant that ConfigString produced a value which wouldn't
re-create the file system.

This uses an internal hash to keep note of which config goes with
which {suffix} in order to remake the config properly.
2023-04-28 14:31:05 +01:00
Nick Craig-Wood
6b670bd439 mockfs: make it so it can be registered as an Fs 2023-04-28 14:31:05 +01:00
Nick Craig-Wood
335ca6d572 lsjson: make --stat more efficient
Don't look for a file if the remote ends with /

This also makes it less likely to find a directory marker in bucket
based file systems.
2023-04-28 14:31:05 +01:00
Tobias Gion
c4a9e480c9 ftp: lower log message priority to debug when SetModTime is not supported
See: https://forum.rclone.org/t/ftp-fritz-box-setmodtime-is-not-supported/37781
2023-04-25 16:31:42 +02:00
Nick Craig-Wood
232d304c13 drive: fix trailing slash mis-identification of folder as file
Before this change, drive would mistakenly identify a folder with a
trailing slash as a file when passed to NewObject.

This was picked up by the integration tests.
2023-04-25 12:10:15 +01:00
Nick Craig-Wood
44ac79e357 Add dlitster to contributors 2023-04-25 12:10:15 +01:00
dlitster
0487e465ee docs: s3: clarify that X-Amz-Meta-Md5chksum is really a base64-encoded hex 2023-04-25 11:39:36 +01:00
Nick Craig-Wood
bb6cfe109d crypt: fix reading 0 length files
In an earlier patch

d5afcf9e34 crypt: try not to return "unexpected EOF" error

This introduced a bug for 0 length files, which this commit fixes. The
bug only manifests if the io.Reader returns data along with EOF, which
not all readers do.

This was failing in the integration tests.
2023-04-24 16:54:40 +01:00
WeidiDeng
864eb89a67 webdav: fix server side copy/move not overwriting - fixes #6964 2023-04-24 14:35:42 +01:00
Nick Craig-Wood
4471e6f258 selfupdate: obey --no-check-certificate flag
This patch makes sure we use our own HTTP transport when fetching the
current rclone version.

This allows it to use --no-check-certificate (and any other features
of our own transport).

See: https://forum.rclone.org/t/rclone-selfupdate-no-check-certificate-flag-not-work/37501
2023-04-24 12:26:01 +01:00
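A minimal sketch of the idea (the parameter maps to --no-check-certificate; this is not rclone's actual transport code):

    package sketch

    import (
        "crypto/tls"
        "net/http"
    )

    // newClient builds an HTTP client with our own transport so that
    // flags like --no-check-certificate take effect on the version check.
    func newClient(noCheckCertificate bool) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: noCheckCertificate},
            },
        }
    }
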
Nick Craig-Wood
e82db0b7d5 vfs: fix potential data race - Fixes #6962
This fixes a data race that was found by static analysis.
2023-04-24 12:17:03 +01:00
Nick Craig-Wood
72e624c5e4 serve dlna: fix potential data race #6962
This fixes a data race that was found by static analysis.
2023-04-24 12:17:03 +01:00
Nick Craig-Wood
6092fa57c3 Add Loren Gordon to contributors 2023-04-24 12:17:03 +01:00
Loren Gordon
3e15a594b7 cat: adds --separator option to cat command
When using `rclone cat` to print the contents of several files, the
user may want to inject some separator between the files, such as a
comma or a newline. This patch adds a `--separator` option to the `cat`
command to make that possible. The default value remains an empty
string, `""`, maintaining the prior behavior of `rclone cat`.

Closes #6968
2023-04-24 12:01:53 +01:00
Nick Craig-Wood
db8c007983 swift: ignore 404 error when deleting an object
See: https://forum.rclone.org/t/rclone-should-optionally-ignore-404-for-delete/37592
2023-04-22 10:49:10 +01:00
dependabot[bot]
5836da14c2 build(deps): bump github.com/aws/aws-sdk-go from 1.44.236 to 1.44.246
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.44.236 to 1.44.246.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.44.236...v1.44.246)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:03:27 +01:00
dependabot[bot]
8ed07d11a0 build(deps): bump github.com/klauspost/compress from 1.16.3 to 1.16.5
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.16.3 to 1.16.5.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml)
- [Commits](https://github.com/klauspost/compress/compare/v1.16.3...v1.16.5)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:03:18 +01:00
dependabot[bot]
1f2ee44c20 build(deps): bump golang.org/x/term from 0.6.0 to 0.7.0
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/golang/term/releases)
- [Commits](https://github.com/golang/term/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-20 18:02:43 +01:00
Nick Craig-Wood
32798dca25 build: remove Go updates from dependabot as it is too noisy 2023-04-20 17:58:10 +01:00
Nick Craig-Wood
075f98551f Add jladbrook to contributors 2023-04-20 17:58:10 +01:00
Nick Craig-Wood
963ab220f6 Add Brian Starkey to contributors 2023-04-20 17:58:10 +01:00
jladbrook
281a007b1a crypt: add suffix option to set a custom suffix for encrypted files - fixes #6392 2023-04-20 17:28:13 +01:00
Brian Starkey
589b7b4873 s3: update Scaleway storage classes
There are now 3 classes:
 * "STANDARD" - Multi-AZ, all regions
 * "ONEZONE_IA" - Single-AZ, FR-PAR only
 * "GLACIER" - Archive, FR-PAR and NL-AMS only
2023-04-19 17:20:30 +01:00
Nick Craig-Wood
04d2781fda fichier: add cdn option to use CDN for download - Fixes #6943 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
5b95fd9588 Add WeidiDeng to contributors 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
a42643101e Add Damo to contributors 2023-04-18 17:35:21 +01:00
Nick Craig-Wood
bcca67efd5 Add Rintze Zelle to contributors 2023-04-18 17:35:21 +01:00
WeidiDeng
7771aaacf6 vfs: fix writing to a read only directory creating spurious directory entries
Before this fix, when a write to a read only directory failed, rclone
would leave spurious directory entries in the directory.

This confuses `rclone serve webdav` into giving this error

    http: superfluous response.WriteHeader

This fixes the VFS layer to remove any directory entries where the
file creation did not succeed.

Fixes #5702
2023-04-18 17:33:04 +01:00
Damo
fda06fc17d docs: mount: add guidance for macFUSE installed via macports 2023-04-18 15:28:20 +01:00
Rintze Zelle
2faa4758e4 docs: azureblob: typo fix in "azureblob-account" command 2023-04-18 12:48:55 +01:00
190 changed files with 14648 additions and 5386 deletions

.github/FUNDING.yml vendored (4 lines changed)

@@ -1,4 +0,0 @@
-github: [ncw]
-patreon: njcw
-liberapay: ncw
-custom: ["https://rclone.org/donate/"]

.github/dependabot.yml

@@ -1,10 +1,6 @@
-version: 2
-updates:
-- package-ecosystem: "github-actions"
-  directory: "/"
-  schedule:
-    interval: "daily"
-- package-ecosystem: "gomod"
-  directory: "/"
-  schedule:
-    interval: "daily"
+version: 2
+updates:
+- package-ecosystem: "github-actions"
+  directory: "/"
+  schedule:
+    interval: "daily"

MAINTAINERS.md

@@ -17,6 +17,7 @@ Current active maintainers of rclone are:
 | Fred | @creativeprojects | seafile backend |
 | Caleb Case | @calebcase | storj backend |
 | wiserain | @wiserain | pikpak backend |
+| albertony | @albertony | |

 **This is a work in progress Draft**

MANUAL.html generated (3146 lines changed)
File diff suppressed because it is too large

MANUAL.md generated (3211 lines changed)
File diff suppressed because it is too large

MANUAL.txt generated (3307 lines changed)
File diff suppressed because it is too large

Makefile

@@ -96,7 +96,7 @@ build_dep:
 # Get the release dependencies we only install on linux
 release_dep_linux:
-	go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'
+	go install github.com/goreleaser/nfpm/v2/cmd/nfpm@latest
 # Get the release dependencies we only install on Windows
 release_dep_windows:

README.md

@@ -25,12 +25,12 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
 * Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
 * Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
 * Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
+* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
 * Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
 * Box [:page_facing_up:](https://rclone.org/box/)
 * Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
 * China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
 * Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
-* Arvan Cloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
 * Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
 * DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
 * Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
@@ -61,12 +61,14 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
 * Minio [:page_facing_up:](https://rclone.org/s3/#minio)
 * Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
 * OVH [:page_facing_up:](https://rclone.org/swift/)
+* Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
 * OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
 * OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
 * Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
 * Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
 * ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
 * pCloud [:page_facing_up:](https://rclone.org/pcloud/)
+* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
 * PikPak [:page_facing_up:](https://rclone.org/pikpak/)
 * premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
 * put.io [:page_facing_up:](https://rclone.org/putio/)

VERSION

@@ -1 +1 @@
-v1.63.0
+v1.64.0

backend/azureblob/azureblob.go

@@ -58,6 +58,8 @@ const (
decayConstant = 1 // bigger for slower decay, exponential
maxListChunkSize = 5000 // number of items to read at once
modTimeKey = "mtime"
dirMetaKey = "hdi_isfolder"
dirMetaValue = "true"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
storageDefaultBaseURL = "blob.core.windows.net"
@@ -363,6 +365,18 @@ This option controls how often unused buffers will be removed from the pool.`,
},
},
Advanced: true,
}, {
Name: "directory_markers",
Default: false,
Advanced: true,
Help: `Upload an empty object with a trailing slash when a new directory is created
Empty folders are unsupported for bucket based remotes, this option
creates an empty object ending with "/", to persist the folder.
This object also has the metadata "` + dirMetaKey + ` = ` + dirMetaValue + `" to conform to
the Microsoft standard.
`,
}, {
Name: "no_check_container",
Help: `If set, don't attempt to check the container exists or create it.
@@ -412,6 +426,7 @@ type Options struct {
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
Enc encoder.MultiEncoder `config:"encoding"`
PublicAccess string `config:"public_access"`
DirectoryMarkers bool `config:"directory_markers"`
NoCheckContainer bool `config:"no_check_container"`
NoHeadObject bool `config:"no_head_object"`
}
@@ -486,7 +501,7 @@ func parsePath(path string) (root string) {
// split returns container and containerPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
containerName, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
containerName, containerPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath)
}
@@ -664,6 +679,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
SetTier: true,
GetTier: true,
}).Fill(ctx, f)
if opt.DirectoryMarkers {
f.features.CanHaveEmptyDirectories = true
fs.Debugf(f, "Using directory markers")
}
// Client options specifying our own transport
policyClientOptions := policy.ClientOptions{
@@ -906,7 +925,7 @@ func (f *Fs) cntSVC(containerName string) (containerClient *container.Client) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *container.BlobItem) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -917,7 +936,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Obje
return nil, err
}
} else if !o.fs.opt.NoHeadObject {
err := o.readMetaData() // reads info and headers, returning an error
err := o.readMetaData(ctx) // reads info and headers, returning an error
if err != nil {
return nil, err
}
@@ -928,7 +947,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Obje
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// getBlobSVC creates a blob client
@@ -964,31 +983,7 @@ func isDirectoryMarker(size int64, metadata map[string]*string, remote string) b
// defacto standard for marking blobs as directories.
// Note also that the metadata hasn't been normalised to lower case yet
for k, v := range metadata {
if v != nil && strings.EqualFold(k, "hdi_isfolder") && *v == "true" {
return true
}
}
}
return false
}
// Returns whether file is a directory marker or not using metadata
// with pointers to strings as the SDK seems to use both forms rather
// annoyingly.
//
// NB This is a duplicate of isDirectoryMarker
func isDirectoryMarkerP(size int64, metadata map[string]*string, remote string) bool {
// Directory markers are 0 length
if size == 0 {
endsWithSlash := strings.HasSuffix(remote, "/")
if endsWithSlash || remote == "" {
return true
}
// Note that metadata with hdi_isfolder = true seems to be a
// defacto standard for marking blobs as directories.
// Note also that the metadata hasn't been normalised to lower case yet
for k, pv := range metadata {
if strings.EqualFold(k, "hdi_isfolder") && pv != nil && *pv == "true" {
if v != nil && strings.EqualFold(k, dirMetaKey) && *v == dirMetaValue {
return true
}
}
@@ -1033,6 +1028,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
Prefix: &directory,
MaxResults: &maxResults,
})
foundItems := 0
for pager.More() {
var response container.ListBlobsHierarchyResponse
err := f.pacer.Call(func() (bool, error) {
@@ -1051,6 +1047,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
}
// Advance marker to next
// marker = response.NextMarker
foundItems += len(response.Segment.BlobItems)
for i := range response.Segment.BlobItems {
file := response.Segment.BlobItems[i]
// Finish if file name no longer has prefix
@@ -1066,20 +1063,27 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
fs.Debugf(f, "Odd name received %q", remote)
continue
}
remote = remote[len(prefix):]
if isDirectoryMarkerP(*file.Properties.ContentLength, file.Metadata, remote) {
continue // skip directory marker
isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
if isDirectory {
// Don't insert the root directory
if remote == directory {
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
remote = remote[len(prefix):]
if addContainer {
remote = path.Join(containerName, remote)
}
// Send object
err = fn(remote, file, false)
err = fn(remote, file, isDirectory)
if err != nil {
return err
}
}
// Send the subdirectories
foundItems += len(response.Segment.BlobPrefixes)
for _, remote := range response.Segment.BlobPrefixes {
if remote.Name == nil {
fs.Debugf(f, "Nil prefix received")
@@ -1102,16 +1106,26 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
}
}
}
if f.opt.DirectoryMarkers && foundItems == 0 && directory != "" {
// Determine whether the directory exists or not by whether it has a marker
_, err := f.readMetaData(ctx, containerName, directory)
if err != nil {
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
return err
}
}
return nil
}
// Convert a list item into a DirEntry
func (f *Fs) itemToDirEntry(remote string, object *container.BlobItem, isDirectory bool) (fs.DirEntry, error) {
func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *container.BlobItem, isDirectory bool) (fs.DirEntry, error) {
if isDirectory {
d := fs.NewDir(remote, time.Time{})
return d, nil
}
o, err := f.newObjectWithInfo(remote, object)
o, err := f.newObjectWithInfo(ctx, remote, object)
if err != nil {
return nil, err
}
@@ -1139,7 +1153,7 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
return nil, fs.ErrorDirNotFound
}
err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
@@ -1220,7 +1234,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
list := walk.NewListRHelper(callback)
listR := func(containerName, directory, prefix string, addContainer bool) error {
return f.list(ctx, containerName, directory, prefix, addContainer, true, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
entry, err := f.itemToDirEntry(remote, object, isDirectory)
entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
if err != nil {
return err
}
@@ -1314,10 +1328,71 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
return f.Put(ctx, in, src, options...)
}
// Create directory marker file and parents
func (f *Fs) createDirectoryMarker(ctx context.Context, container, dir string) error {
if !f.opt.DirectoryMarkers || container == "" {
return nil
}
// Object to be uploaded
o := &Object{
fs: f,
modTime: time.Now(),
meta: map[string]string{
dirMetaKey: dirMetaValue,
},
}
for {
_, containerPath := f.split(dir)
// Don't create the directory marker if it is the bucket or at the very root
if containerPath == "" {
break
}
o.remote = dir + "/"
// Check to see if object already exists
_, err := f.readMetaData(ctx, container, containerPath+"/")
if err == nil {
return nil
}
// Upload it if not
fs.Debugf(o, "Creating directory marker")
content := io.Reader(strings.NewReader(""))
err = o.Update(ctx, content, o)
if err != nil {
return fmt.Errorf("creating directory marker failed: %w", err)
}
// Now check parent directory exists
dir = path.Dir(dir)
if dir == "/" || dir == "." {
break
}
}
return nil
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
container, _ := f.split(dir)
return f.makeContainer(ctx, container)
e := f.makeContainer(ctx, container)
if e != nil {
return e
}
return f.createDirectoryMarker(ctx, container, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
return f.Mkdir(ctx, dir)
}
// makeContainer creates the container if it doesn't exist
@@ -1417,6 +1492,18 @@ func (f *Fs) deleteContainer(ctx context.Context, containerName string) error {
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
container, directory := f.split(dir)
// Remove directory marker file
if f.opt.DirectoryMarkers && container != "" && dir != "" {
o := &Object{
fs: f,
remote: dir + "/",
}
fs.Debugf(o, "Removing directory marker")
err := o.Remove(ctx)
if err != nil {
return fmt.Errorf("removing directory marker failed: %w", err)
}
}
if container == "" || directory != "" {
return nil
}
@@ -1458,7 +1545,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstContainer, dstPath := f.split(remote)
err := f.makeContainer(ctx, dstContainer)
err := f.mkdirParent(ctx, remote)
if err != nil {
return nil, err
}
@@ -1582,6 +1669,7 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
}
metadata = make(map[string]*string, len(o.meta))
for k, v := range o.meta {
v := v
metadata[k] = &v
}
return metadata
@@ -1694,7 +1782,7 @@ func (o *Object) decodeMetaDataFromBlob(info *container.BlobItem) (err error) {
} else {
size = *info.Properties.ContentLength
}
if isDirectoryMarkerP(size, metadata, o.remote) {
if isDirectoryMarker(size, metadata, o.remote) {
return fs.ErrorNotAFile
}
// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
@@ -1732,6 +1820,29 @@ func (o *Object) clearMetaData() {
o.modTime = time.Time{}
}
// readMetaData gets the metadata if it hasn't already been fetched
func (f *Fs) readMetaData(ctx context.Context, container, containerPath string) (blobProperties blob.GetPropertiesResponse, err error) {
if !f.containerOK(container) {
return blobProperties, fs.ErrorObjectNotFound
}
blb := f.getBlobSVC(container, containerPath)
// Read metadata (this includes metadata)
options := blob.GetPropertiesOptions{}
err = f.pacer.Call(func() (bool, error) {
blobProperties, err = blb.GetProperties(ctx, &options)
return f.shouldRetry(ctx, err)
})
if err != nil {
// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
if storageErr, ok := err.(*azcore.ResponseError); ok && (storageErr.ErrorCode == string(bloberror.BlobNotFound) || storageErr.StatusCode == http.StatusNotFound) {
return blobProperties, fs.ErrorObjectNotFound
}
return blobProperties, err
}
return blobProperties, nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// Sets
@@ -1740,33 +1851,15 @@ func (o *Object) clearMetaData() {
// o.modTime
// o.size
// o.md5
func (o *Object) readMetaData() (err error) {
container, _ := o.split()
if !o.fs.containerOK(container) {
return fs.ErrorObjectNotFound
}
func (o *Object) readMetaData(ctx context.Context) (err error) {
if !o.modTime.IsZero() {
return nil
}
blb := o.getBlobSVC()
// fs.Debugf(o, "Blob URL = %q", blb.URL())
// Read metadata (this includes metadata)
options := blob.GetPropertiesOptions{}
ctx := context.Background()
var blobProperties blob.GetPropertiesResponse
err = o.fs.pacer.Call(func() (bool, error) {
blobProperties, err = blb.GetProperties(ctx, &options)
return o.fs.shouldRetry(ctx, err)
})
container, containerPath := o.split()
blobProperties, err := o.fs.readMetaData(ctx, container, containerPath)
if err != nil {
// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
if storageErr, ok := err.(*azcore.ResponseError); ok && (storageErr.ErrorCode == string(bloberror.BlobNotFound) || storageErr.StatusCode == http.StatusNotFound) {
return fs.ErrorObjectNotFound
}
return err
}
return o.decodeMetaDataFromPropertiesResponse(&blobProperties)
}
@@ -1776,7 +1869,7 @@ func (o *Object) readMetaData() (err error) {
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) (result time.Time) {
// The error is logged in readMetaData
_ = o.readMetaData()
_ = o.readMetaData(ctx)
return o.modTime
}
@@ -2122,12 +2215,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if container == "" || containerPath == "" {
return fmt.Errorf("can't upload to root - need a container")
}
err = o.fs.makeContainer(ctx, container)
if err != nil {
return err
// Create parent dir/bucket if not saving directory marker
_, isDirMarker := o.meta[dirMetaKey]
if !isDirMarker {
err = o.fs.mkdirParent(ctx, o.remote)
if err != nil {
return err
}
}
// Update Mod time
fs.Debugf(nil, "o.meta = %+v", o.meta)
o.updateMetadataWithModTime(src.ModTime(ctx))
if err != nil {
return err
@@ -2175,6 +2273,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
size := src.Size()
multipartUpload := size < 0 || size > o.fs.poolSize
fs.Debugf(nil, "o.meta = %+v", o.meta)
if multipartUpload {
err = o.uploadMultipart(ctx, in, size, blb, &httpHeaders)
} else {
@@ -2185,10 +2284,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Refresh metadata on object
o.clearMetaData()
err = o.readMetaData()
if err != nil {
return err
if !isDirMarker {
o.clearMetaData()
err = o.readMetaData(ctx)
if err != nil {
return err
}
}
// If tier is not changed or not specified, do not attempt to invoke `SetBlobTier` operation

View File

@@ -9,6 +9,7 @@ import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
)
@@ -25,6 +26,25 @@ func TestIntegration(t *testing.T) {
})
}
// TestIntegration2 runs integration tests against the remote
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
name := "TestAzureBlob:"
fstests.Run(t, &fstests.Opt{
RemoteName: name,
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
},
})
}
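These TestIntegration2 variants re-run the standard integration suite with directory_markers forced on. A typical local invocation, assuming a TestAzureBlob: remote is configured with credentials and -remote is left unset, would be something like:
go test -v -run 'TestIntegration2' ./backend/azureblob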
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}

View File

@@ -233,6 +233,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
@@ -289,6 +290,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// Enable CleanUp when any upstreams support it
if features.CleanUp == nil {
for _, u := range f.upstreams {
if u.f.Features().CleanUp != nil {
features.CleanUp = f.CleanUp
break
}
}
}
// Enable ChangeNotify when any upstreams support it
if features.ChangeNotify == nil {
for _, u := range f.upstreams {
@@ -299,6 +310,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
}
// show that we wrap other backends
features.Overlay = true
f.features = features
// Get common intersection of hashes
@@ -887,6 +901,100 @@ func (f *Fs) Shutdown(ctx context.Context) error {
})
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return "", err
}
do := u.f.Features().PublicLink
if do == nil {
return "", fs.ErrorNotImplemented
}
return do(ctx, uRemote, expire, unlink)
}
// PutUnchecked puts the object into the remote path with the given modTime and size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
//
// May create duplicates or return errors if src already
// exists.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
srcPath := src.Remote()
u, uRemote, err := f.findUpstream(srcPath)
if err != nil {
return nil, err
}
do := u.f.Features().PutUnchecked
if do == nil {
return nil, fs.ErrorNotImplemented
}
uSrc := fs.NewOverrideRemote(src, uRemote)
return do(ctx, in, uSrc, options...)
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
if len(dirs) == 0 {
return nil
}
var (
u *upstream
uDirs []fs.Directory
)
for _, dir := range dirs {
uNew, uDir, err := f.findUpstream(dir.Remote())
if err != nil {
return err
}
if u == nil {
u = uNew
} else if u != uNew {
return fmt.Errorf("can't merge directories from different upstreams")
}
uDirs = append(uDirs, fs.NewOverrideDirectory(dir, uDir))
}
do := u.f.Features().MergeDirs
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, uDirs)
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().CleanUp; do != nil {
return do(ctx)
}
return nil
})
}
// OpenWriterAt opens with a handle for random access writes
//
// Pass in the remote desired and the size if known.
//
// It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := u.f.Features().OpenWriterAt
if do == nil {
return nil, fs.ErrorNotImplemented
}
return do(ctx, uRemote, size)
}
// Object describes a wrapped Object
//
// This is a wrapped Object which knows its path prefix
@@ -916,7 +1024,7 @@ func (o *Object) String() string {
func (o *Object) Remote() string {
newPath, err := o.u.pathAdjustment.do(o.Object.String())
if err != nil {
fs.Errorf(o, "Bad object: %v", err)
fs.Errorf(o.Object, "Bad object: %v", err)
return err.Error()
}
return newPath
@@ -988,5 +1096,10 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)
)
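The var block above is Go's compile-time interface assertion idiom: assigning a typed nil pointer to a blank interface-typed variable allocates nothing at runtime but breaks the build the moment the type stops satisfying the interface, which is how the new PublicLink/PutUnchecked/MergeDirs/CleanUp/OpenWriterAt methods are pinned to their optional-feature interfaces. A minimal generic sketch of the idiom (names are illustrative, not rclone types):
package main
// CleanUpper stands in for an optional-feature interface such as fs.CleanUpper.
type CleanUpper interface {
	CleanUp() error
}
// Fs stands in for a backend type.
type Fs struct{}
// CleanUp satisfies CleanUpper; removing it makes the assertion below a compile error.
func (f *Fs) CleanUp() error { return nil }
// Compile-time check only - the blank identifier stores nothing.
var _ CleanUpper = (*Fs)(nil)
func main() {}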

View File

@@ -10,6 +10,11 @@ import (
"github.com/rclone/rclone/fstest/fstests"
)
var (
unimplementableFsMethods = []string{"UnWrap", "WrapFs", "SetWrapper", "UserInfo", "Disconnect"}
unimplementableObjectMethods = []string{}
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
@@ -17,8 +22,8 @@ func TestIntegration(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -35,7 +40,9 @@ func TestLocal(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -51,7 +58,9 @@ func TestMemory(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
QuickTestOK: true,
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -68,6 +77,8 @@ func TestMixed(t *testing.T) {
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}

View File

@@ -186,6 +186,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// We support reading MIME types no matter the wrapped fs
f.features.ReadMimeType = true

View File

@@ -38,7 +38,6 @@ const (
blockHeaderSize = secretbox.Overhead
blockDataSize = 64 * 1024
blockSize = blockHeaderSize + blockDataSize
encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
)
// Errors returned by cipher
@@ -54,8 +53,9 @@ var (
ErrorEncryptedBadBlock = errors.New("failed to authenticate decrypted block - bad password?")
ErrorBadBase32Encoding = errors.New("bad base32 filename encoding")
ErrorFileClosed = errors.New("file already closed")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
ErrorNotAnEncryptedFile = errors.New("not an encrypted file - does not match suffix")
ErrorBadSeek = errors.New("Seek beyond end of file")
ErrorSuffixMissingDot = errors.New("suffix config setting should include a '.'")
defaultSalt = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
obfuscQuoteRune = '!'
)
@@ -170,25 +170,27 @@ func NewNameEncoding(s string) (enc fileNameEncoding, err error) {
// Cipher defines an encoding and decoding cipher for the crypt backend
type Cipher struct {
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
fileNameEnc fileNameEncoding
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool
passBadBlocks bool // if set, pass bad blocks through as zeroed blocks
dataKey [32]byte // Key for secretbox
nameKey [32]byte // 16,24 or 32 bytes
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block
mode NameEncryptionMode
fileNameEnc fileNameEncoding
buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool
passBadBlocks bool // if set, pass bad blocks through as zeroed blocks
encryptedSuffix string
}
// newCipher initialises the cipher. If salt is "" then it uses a built in salt value
func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool, enc fileNameEncoding) (*Cipher, error) {
c := &Cipher{
mode: mode,
fileNameEnc: enc,
cryptoRand: rand.Reader,
dirNameEncrypt: dirNameEncrypt,
mode: mode,
fileNameEnc: enc,
cryptoRand: rand.Reader,
dirNameEncrypt: dirNameEncrypt,
encryptedSuffix: ".bin",
}
c.buffers.New = func() interface{} {
return new([blockSize]byte)
@@ -200,6 +202,19 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
return c, nil
}
// setEncryptedSuffix sets the suffix, or an empty string if "none" is given
func (c *Cipher) setEncryptedSuffix(suffix string) {
if strings.EqualFold(suffix, "none") {
c.encryptedSuffix = ""
return
}
if !strings.HasPrefix(suffix, ".") {
fs.Errorf(nil, "crypt: bad suffix: %v", ErrorSuffixMissingDot)
suffix = "." + suffix
}
c.encryptedSuffix = suffix
}
// Call to set bad block pass through
func (c *Cipher) setPassBadBlocks(passBadBlocks bool) {
c.passBadBlocks = passBadBlocks
@@ -512,7 +527,7 @@ func (c *Cipher) encryptFileName(in string) string {
// EncryptFileName encrypts a file path
func (c *Cipher) EncryptFileName(in string) string {
if c.mode == NameEncryptionOff {
return in + encryptedSuffix
return in + c.encryptedSuffix
}
return c.encryptFileName(in)
}
@@ -572,8 +587,8 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
// DecryptFileName decrypts a file path
func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
remainingLength := len(in) - len(c.encryptedSuffix)
if remainingLength == 0 || !strings.HasSuffix(in, c.encryptedSuffix) {
return "", ErrorNotAnEncryptedFile
}
decrypted := in[:remainingLength]
@@ -789,7 +804,7 @@ func (c *Cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
if n < fileHeaderSize && err == io.EOF {
// This read from 0..fileHeaderSize-1 bytes
return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
} else if err != nil {
} else if err != io.EOF && err != nil {
return nil, fh.finishAndClose(err)
}
// check the magic

View File

@@ -405,6 +405,13 @@ func TestNonStandardEncryptFileName(t *testing.T) {
// Off mode
c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Off mode with custom suffix
c, _ = newCipher(NameEncryptionOff, "", "", true, nil)
c.setEncryptedSuffix(".jpg")
assert.Equal(t, "1/12/123.jpg", c.EncryptFileName("1/12/123"))
// Off mode with empty suffix
c.setEncryptedSuffix("none")
assert.Equal(t, "1/12/123", c.EncryptFileName("1/12/123"))
// Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true, nil)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
@@ -483,21 +490,27 @@ func TestNonStandardDecryptFileName(t *testing.T) {
in string
expected string
expectedErr error
customSuffix string
}{
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil, ""},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil, ""},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil, ""},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil, ""},
{NameEncryptionOff, true, "1/12/123.jpg", "1/12/123", nil, ".jpg"},
{NameEncryptionOff, true, "1/12/123", "1/12/123", nil, "none"},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil, ""},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile, ""},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil, ""},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil, ""},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil, ""},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil, ""},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, enc)
if test.customSuffix != "" {
c.setEncryptedSuffix(test.customSuffix)
}
actual, actualErr := c.DecryptFileName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)

View File

@@ -48,7 +48,7 @@ func init() {
Help: "Very simple filename obfuscation.",
}, {
Value: "off",
Help: "Don't encrypt the file names.\nAdds a \".bin\" extension only.",
Help: "Don't encrypt the file names.\nAdds a \".bin\", or \"suffix\" extension only.",
},
},
}, {
@@ -79,7 +79,9 @@ NB If filename_encryption is "off" then this option will do nothing.`,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different crypt configs.
Help: `Deprecated: use --server-side-across-configs instead.
Allow server-side operations (e.g. copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.
@@ -124,7 +126,7 @@ names, or for debugging purposes.`,
Help: `If set this will pass bad blocks through as all 0.
This should not be set in normal operation, it should only be set if
trying to recover a crypted file with errors and it is desired to
trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default: false,
Advanced: true,
@@ -151,6 +153,14 @@ length and if it's case sensitive.`,
},
},
Advanced: true,
}, {
Name: "suffix",
Help: `If this is set it will override the default suffix of ".bin".
Setting suffix to "none" will result in an empty suffix. This may be useful
when the path length is critical.`,
Default: ".bin",
Advanced: true,
}},
})
}
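To make the new suffix semantics concrete, here is a small standalone restatement of the NameEncryptionOff paths shown above (EncryptFileName just appends the suffix, DecryptFileName strips it); this is an illustration of the logic, not code from the backend:
package main
import (
	"fmt"
	"strings"
)
// suffixName mirrors EncryptFileName in off mode: append the configured
// suffix, which is "" when the user sets suffix = "none".
func suffixName(in, suffix string) string { return in + suffix }
// unsuffixName mirrors DecryptFileName in off mode: strip the suffix,
// rejecting names that don't carry it or that would become empty.
func unsuffixName(in, suffix string) (string, error) {
	remaining := len(in) - len(suffix)
	if remaining == 0 || !strings.HasSuffix(in, suffix) {
		return "", fmt.Errorf("not an encrypted file - does not match suffix")
	}
	return in[:remaining], nil
}
func main() {
	fmt.Println(suffixName("1/12/123", ".jpg"))       // "1/12/123.jpg"
	fmt.Println(unsuffixName("1/12/123.jpg", ".jpg")) // "1/12/123" <nil>
	fmt.Println(suffixName("1/12/123", ""))           // "1/12/123" - suffix "none"
}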
@@ -183,6 +193,7 @@ func newCipherForConfig(opt *Options) (*Cipher, error) {
if err != nil {
return nil, fmt.Errorf("failed to make cipher: %w", err)
}
cipher.setEncryptedSuffix(opt.Suffix)
cipher.setPassBadBlocks(opt.PassBadBlocks)
return cipher, nil
}
@@ -257,6 +268,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, err
@@ -274,6 +286,7 @@ type Options struct {
ShowMapping bool `config:"show_mapping"`
PassBadBlocks bool `config:"pass_bad_blocks"`
FilenameEncoding string `config:"filename_encoding"`
Suffix string `config:"suffix"`
}
// Fs represents a wrapped fs.Fs

View File

@@ -499,7 +499,9 @@ need to use --ignore size also.`,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different drive configs.
Help: `Deprecated: use --server-side-across-configs instead.
Allow server-side operations (e.g. copy) to work across different drive configs.
This can be useful if you wish to do a server-side copy between two
different Google drives. Note that this isn't enabled by default
@@ -1512,6 +1514,9 @@ func (f *Fs) newObjectWithExportInfo(
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
if strings.HasSuffix(remote, "/") {
return nil, fs.ErrorIsDir
}
info, extension, exportName, exportMimeType, isDocument, err := f.getRemoteInfoWithExport(ctx, remote)
if err != nil {
return nil, err
@@ -3881,7 +3886,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
newO, err := o.fs.newObjectWithInfo(ctx, src.Remote(), info)
newO, err := o.fs.newObjectWithInfo(ctx, o.remote, info)
if err != nil {
return err
}

View File

@@ -144,7 +144,7 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
// If the commit fails then signal the clients if running in sync mode
var signalled = b.async
defer func() {
if err != nil && signalled {
if err != nil && !signalled {
// Signal to clients that there was an error
for _, result := range results {
result <- batcherResponse{err: err}
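The one-character fix above matters because signalled starts as b.async: asynchronous callers get their reply when the work is queued, so after a failed commit only the synchronous callers are still blocked on their result channels. The old condition signalled the wrong group and left sync clients hanging. A simplified sketch of the corrected flow (types reduced to chan error for illustration):
package main
import "errors"
// commit sketches the corrected signalling: on failure, deliver the error
// only to callers that have not been answered yet (the sync ones).
func commit(async bool, results []chan error, do func() error) (err error) {
	signalled := async
	defer func() {
		if err != nil && !signalled {
			for _, result := range results {
				result <- err // wake the still-waiting sync clients
			}
		}
	}()
	return do()
}
func main() {
	ch := make(chan error, 1)
	_ = commit(false, []chan error{ch}, func() error { return errors.New("commit failed") })
	println((<-ch).Error()) // commit failed
}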

View File

@@ -58,7 +58,7 @@ import (
const (
rcloneClientID = "5jcck7diasz0rqy"
rcloneEncryptedClientSecret = "fRS5vVLr2v6FbyXYnIgjwBuUAt0osq_QZTXAEcmZ7g"
minSleep = 10 * time.Millisecond
defaultMinSleep = fs.Duration(10 * time.Millisecond)
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
// Upload chunk size - setting too small makes uploads slow.
@@ -260,8 +260,8 @@ uploaded.
The default for this is 0 which means rclone will choose a sensible
default based on the batch_mode in use.
- batch_mode: async - default batch_timeout is 500ms
- batch_mode: sync - default batch_timeout is 10s
- batch_mode: async - default batch_timeout is 10s
- batch_mode: sync - default batch_timeout is 500ms
- batch_mode: off - not in use
`,
Default: fs.Duration(0),
@@ -271,6 +271,11 @@ default based on the batch_mode in use.
Help: `Max time to wait for a batch to finish committing`,
Default: fs.Duration(10 * time.Minute),
Advanced: true,
}, {
Name: "pacer_min_sleep",
Default: defaultMinSleep,
Help: "Minimum time to sleep between API calls.",
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -299,6 +304,7 @@ type Options struct {
BatchTimeout fs.Duration `config:"batch_timeout"`
BatchCommitTimeout fs.Duration `config:"batch_commit_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
}
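Putting the corrected batch defaults and the new pacer knob together, a config exercising them might look like this (remote name and values are illustrative only):
[dropbox]
type = dropbox
batch_mode = async
batch_timeout = 10s
batch_commit_timeout = 10m
pacer_min_sleep = 10ms
Here 10s matches the corrected async default documented above and 10ms matches defaultMinSleep, so both lines merely make the defaults explicit.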
@@ -442,7 +448,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
opt: *opt,
ci: ci,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(opt.PacerMinSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.batcher, err = newBatcher(ctx, f, f.opt.BatchMode, f.opt.BatchSize, time.Duration(f.opt.BatchTimeout))
if err != nil {
@@ -719,7 +725,7 @@ func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err
}
for _, entry := range res.Entries {
leaf := f.opt.Enc.ToStandardName(entry.Name)
d := fs.NewDir(leaf, time.Now()).SetID(entry.SharedFolderId)
d := fs.NewDir(leaf, time.Time{}).SetID(entry.SharedFolderId)
entries = append(entries, d)
if err != nil {
return nil, err
@@ -906,7 +912,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
leaf := f.opt.Enc.ToStandardName(path.Base(entryPath))
remote := path.Join(dir, leaf)
if folderInfo != nil {
d := fs.NewDir(remote, time.Now()).SetID(folderInfo.Id)
d := fs.NewDir(remote, time.Time{}).SetID(folderInfo.Id)
entries = append(entries, d)
} else if fileInfo != nil {
o, err := f.newObjectWithInfo(ctx, remote, fileInfo)

View File

@@ -118,6 +118,9 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
Single: 1,
Pass: f.opt.FilePassword,
}
if f.opt.CDN {
request.CDN = 1
}
opts := rest.Opts{
Method: "POST",
Path: "/download/get_token.cgi",

View File

@@ -54,6 +54,11 @@ func init() {
Name: "folder_password",
Advanced: true,
IsPassword: true,
}, {
Help: "Set if you wish to use CDN download links.",
Name: "cdn",
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -89,6 +94,7 @@ type Options struct {
SharedFolder string `config:"shared_folder"`
FilePassword string `config:"file_password"`
FolderPassword string `config:"folder_password"`
CDN bool `config:"cdn"`
Enc encoder.MultiEncoder `config:"encoding"`
}

View File

@@ -20,6 +20,7 @@ type DownloadRequest struct {
URL string `json:"url"`
Single int `json:"single"`
Pass string `json:"pass,omitempty"`
CDN int `json:"cdn,omitempty"`
}
// RemoveFolderRequest is the request structure of the corresponding request

View File

@@ -15,7 +15,7 @@ import (
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/rclone/ftp"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"
@@ -580,6 +580,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
PartialUploads: true,
}).Fill(ctx, f)
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
@@ -692,6 +693,12 @@ func (f *Fs) findItem(ctx context.Context, remote string) (entry *ftp.Entry, err
if err == fs.ErrorObjectNotFound {
return nil, nil
}
if errX := textprotoError(err); errX != nil {
switch errX.Code {
case ftp.StatusBadArguments:
err = nil
}
}
return nil, err
}
if entry != nil {
@@ -1098,7 +1105,7 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// SetModTime sets the modification time of the object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if !o.fs.fSetTime {
fs.Errorf(o.fs, "SetModTime is not supported")
fs.Debugf(o.fs, "SetModTime is not supported")
return nil
}
c, err := o.fs.getFtpConnection(ctx)

View File

@@ -301,6 +301,15 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Value: "DURABLE_REDUCED_AVAILABILITY",
Help: "Durable reduced availability storage class",
}},
}, {
Name: "directory_markers",
Default: false,
Advanced: true,
Help: `Upload an empty object with a trailing slash when a new directory is created.
Empty folders are unsupported for bucket based remotes, so this option creates an
empty object ending with "/" to persist the folder.
`,
}, {
Name: "no_check_bucket",
Help: `If set, don't attempt to check the bucket exists or create it.
@@ -366,6 +375,7 @@ type Options struct {
Endpoint string `config:"endpoint"`
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
DirectoryMarkers bool `config:"directory_markers"`
}
// Fs represents a remote storage server
@@ -461,7 +471,7 @@ func parsePath(path string) (root string) {
// split returns bucket and bucketPath from the rootRelativePath
// relative to f.root
func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
bucketName, bucketPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
}
@@ -547,6 +557,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
BucketBased: true,
BucketBasedRootOK: true,
}).Fill(ctx, f)
if opt.DirectoryMarkers {
f.features.CanHaveEmptyDirectories = true
}
// Create a new authorized Drive client.
f.client = oAuthClient
@@ -633,6 +646,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if !recurse {
list = list.Delimiter("/")
}
foundItems := 0
for {
var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) {
@@ -648,6 +662,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
return err
}
if !recurse {
foundItems += len(objects.Prefixes)
var object storage.Object
for _, remote := range objects.Prefixes {
if !strings.HasSuffix(remote, "/") {
@@ -668,22 +683,29 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
}
}
foundItems += len(objects.Items)
for _, object := range objects.Items {
remote := f.opt.Enc.ToStandardPath(object.Name)
if !strings.HasPrefix(remote, prefix) {
fs.Logf(f, "Odd name received %q", object.Name)
continue
}
remote = remote[len(prefix):]
isDirectory := remote == "" || strings.HasSuffix(remote, "/")
// is this a directory marker?
if isDirectory {
// Don't insert the root directory
if remote == directory {
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
remote = remote[len(prefix):]
if addBucket {
remote = path.Join(bucket, remote)
}
// is this a directory marker?
if isDirectory {
continue // skip directory marker
}
err = fn(remote, object, false)
err = fn(remote, object, isDirectory)
if err != nil {
return err
}
@@ -693,6 +715,17 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
list.PageToken(objects.NextPageToken)
}
if f.opt.DirectoryMarkers && foundItems == 0 && directory != "" {
// Determine whether the directory exists or not by whether it has a marker
_, err := f.readObjectInfo(ctx, bucket, directory)
if err != nil {
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
return err
}
}
return nil
}
@@ -856,10 +889,69 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
return f.Put(ctx, in, src, options...)
}
// Create directory marker file and parents
func (f *Fs) createDirectoryMarker(ctx context.Context, bucket, dir string) error {
if !f.opt.DirectoryMarkers || bucket == "" {
return nil
}
// Object to be uploaded
o := &Object{
fs: f,
modTime: time.Now(),
}
for {
_, bucketPath := f.split(dir)
// Don't create the directory marker if it is the bucket or at the very root
if bucketPath == "" {
break
}
o.remote = dir + "/"
// Check to see if object already exists
_, err := o.readObjectInfo(ctx)
if err == nil {
return nil
}
// Upload it if not
fs.Debugf(o, "Creating directory marker")
content := io.Reader(strings.NewReader(""))
err = o.Update(ctx, content, o)
if err != nil {
return fmt.Errorf("creating directory marker failed: %w", err)
}
// Now check parent directory exists
dir = path.Dir(dir)
if dir == "/" || dir == "." {
break
}
}
return nil
}
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
e := f.checkBucket(ctx, bucket)
if e != nil {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
return f.Mkdir(ctx, dir)
}
// makeBucket creates the bucket if it doesn't exist
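The marker-creation loop above walks upwards one level per iteration: Mkdir("a/b/c") writes markers "a/b/c/", "a/b/" and "a/", stopping early as soon as a marker already exists or the bucket root is reached. The parent derivation used by mkdirParent distils to this small helper (a standalone restatement for clarity, not code from the backend):
package main
import (
	"fmt"
	"path"
	"strings"
)
// parentDir returns the directory Mkdir must ensure exists before an
// object is written at remote, mirroring mkdirParent above.
func parentDir(remote string) string {
	remote = strings.TrimRight(remote, "/")
	dir := path.Dir(remote)
	if dir == "/" || dir == "." {
		dir = "" // bucket root - only the bucket itself needs creating
	}
	return dir
}
func main() {
	fmt.Println(parentDir("dir/subdir/file.txt")) // "dir/subdir"
	fmt.Println(parentDir("file.txt"))            // ""
}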
@@ -931,6 +1023,18 @@ func (f *Fs) checkBucket(ctx context.Context, bucket string) error {
// to delete was not empty.
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
bucket, directory := f.split(dir)
// Remove directory marker file
if f.opt.DirectoryMarkers && bucket != "" && dir != "" {
o := &Object{
fs: f,
remote: dir + "/",
}
fs.Debugf(o, "Removing directory marker")
err := o.Remove(ctx)
if err != nil {
return fmt.Errorf("removing directory marker failed: %w", err)
}
}
if bucket == "" || directory != "" {
return nil
}
@@ -962,7 +1066,7 @@ func (f *Fs) Precision() time.Duration {
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
err := f.checkBucket(ctx, dstBucket)
err := f.mkdirParent(ctx, remote)
if err != nil {
return nil, err
}
@@ -1100,10 +1204,15 @@ func (o *Object) setMetaData(info *storage.Object) {
// readObjectInfo reads the definition for an object
func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
get := o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx)
if o.fs.opt.UserProject != "" {
get = get.UserProject(o.fs.opt.UserProject)
return o.fs.readObjectInfo(ctx, bucket, bucketPath)
}
// readObjectInfo reads the definition for an object
func (f *Fs) readObjectInfo(ctx context.Context, bucket, bucketPath string) (object *storage.Object, err error) {
err = f.pacer.Call(func() (bool, error) {
get := f.svc.Objects.Get(bucket, bucketPath).Context(ctx)
if f.opt.UserProject != "" {
get = get.UserProject(f.opt.UserProject)
}
object, err = get.Do()
return shouldRetry(ctx, err)
@@ -1244,11 +1353,14 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
bucket, bucketPath := o.split()
err := o.fs.checkBucket(ctx, bucket)
if err != nil {
return err
// Create parent dir/bucket if not saving directory marker
if !strings.HasSuffix(o.remote, "/") {
err = o.fs.mkdirParent(ctx, o.remote)
if err != nil {
return err
}
}
modTime := src.ModTime(ctx)

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/rclone/rclone/backend/googlecloudstorage"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -16,3 +17,17 @@ func TestIntegration(t *testing.T) {
NilObject: (*googlecloudstorage.Object)(nil),
})
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
name := "TestGoogleCloudStorage"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*googlecloudstorage.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
},
})
}

View File

@@ -166,6 +166,7 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)

View File

@@ -495,7 +495,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
add(file)
case fs.ErrorNotAFile:
// ...found a directory not a file
add(fs.NewDir(remote, timeUnset))
add(fs.NewDir(remote, time.Time{}))
default:
fs.Debugf(remote, "skipping because of error: %v", err)
}
@@ -507,7 +507,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
name = strings.TrimRight(name, "/")
remote := path.Join(dir, name)
if isDir {
add(fs.NewDir(remote, timeUnset))
add(fs.NewDir(remote, time.Time{}))
} else {
in <- remote
}

View File

@@ -376,7 +376,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for i, file := range files {
remote := path.Join(dir, f.opt.Enc.ToStandardName(file.Name))
if file.Type == "dir" {
entries[i] = fs.NewDir(remote, time.Unix(0, 0))
entries[i] = fs.NewDir(remote, time.Time{})
} else {
entries[i] = &Object{
fs: f,

View File

@@ -303,6 +303,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
WriteMetadata: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
}).Fill(ctx, f)
if opt.FollowSymlinks {
f.lstat = os.Stat

View File

@@ -5,6 +5,7 @@ package local
import (
"fmt"
"runtime"
"sync"
"time"
@@ -23,7 +24,7 @@ func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
// Check statx() is available as it was only introduced in kernel 4.11
// If not, fall back to fstatat() which was introduced in 2.6.16 which is guaranteed for all Go versions
var stat unix.Statx_t
if unix.Statx(unix.AT_FDCWD, ".", 0, unix.STATX_ALL, &stat) != unix.ENOSYS {
if runtime.GOOS != "android" && unix.Statx(unix.AT_FDCWD, ".", 0, unix.STATX_ALL, &stat) != unix.ENOSYS {
readMetadataFromFileFn = readMetadataFromFileStatx
} else {
readMetadataFromFileFn = readMetadataFromFileFstatat

View File

@@ -196,7 +196,9 @@ listing, set this option.`,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different onedrive configs.
Help: `Deprecated: use --server-side-across-configs instead.
Allow server-side operations (e.g. copy) to work across different onedrive configs.
This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
@@ -301,6 +303,24 @@ rclone.
Help: "None - don't use any hashes",
}},
Advanced: true,
}, {
Name: "av_override",
Default: false,
Help: `Allows download of files the server thinks have a virus.
The onedrive/sharepoint server may check files uploaded with an Anti
Virus checker. If it detects any potential viruses or malware it will
block download of the file.
In this case you will see a message like this:
server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:
If you are 100% sure you want to download this file anyway then use
the --onedrive-av-override flag, or av_override = true in the config
file.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -640,6 +660,7 @@ type Options struct {
LinkType string `config:"link_type"`
LinkPassword string `config:"link_password"`
HashType string `config:"hash_type"`
AVOverride bool `config:"av_override"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -1966,12 +1987,20 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var resp *http.Response
opts := o.fs.newOptsCall(o.id, "GET", "/content")
opts.Options = options
if o.fs.opt.AVOverride {
opts.Parameters = url.Values{"AVOverride": {"1"}}
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
if virus := resp.Header.Get("X-Virus-Infected"); virus != "" {
err = fmt.Errorf("server reports this file is infected with a virus - use --onedrive-av-override to download anyway: %s: %w", virus, err)
}
}
return nil, err
}
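In practice the new flag is used like this (the paths are illustrative):
rclone copy --onedrive-av-override onedrive:quarantine/report.xlsm /tmp/recovered/
Without the flag the download fails and the X-Virus-Infected header is folded into the error message as shown above; with it, the AVOverride=1 query parameter is added to the /content request and the server permits the download.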

View File

@@ -227,9 +227,10 @@ type Media struct {
Duration int64 `json:"duration,omitempty"`
BitRate int `json:"bit_rate,omitempty"`
FrameRate int `json:"frame_rate,omitempty"`
VideoCodec string `json:"video_codec,omitempty"`
AudioCodec string `json:"audio_codec,omitempty"`
VideoType string `json:"video_type,omitempty"`
VideoCodec string `json:"video_codec,omitempty"` // "h264", "hevc"
AudioCodec string `json:"audio_codec,omitempty"` // "pcm_bluray", "aac"
VideoType string `json:"video_type,omitempty"` // "mpegts"
HdrType string `json:"hdr_type,omitempty"`
} `json:"video,omitempty"`
Link *Link `json:"link,omitempty"`
NeedMoreQuota bool `json:"need_more_quota,omitempty"`

View File

@@ -189,11 +189,6 @@ Fill in for rclone to use a non root folder as its starting point.
Help: "Files bigger than this will be cached on disk to calculate hash if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: "multi_thread_streams",
Help: "Max number of streams to use for multi-thread downloads.\n\nThis will override global flag `--multi-thread-streams` and defaults to 1 to avoid rate limiting.",
Default: 1,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -224,7 +219,6 @@ type Options struct {
UseTrash bool `config:"use_trash"`
TrashedOnly bool `config:"trashed_only"`
HashMemoryThreshold fs.SizeSuffix `config:"hash_memory_limit"`
MultiThreadStreams int `config:"multi_thread_streams"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -437,10 +431,6 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
root := parsePath(path)
// overrides global `--multi-thread-streams` by local one
ci := fs.GetConfig(ctx)
ci.MultiThreadStreams = opt.MultiThreadStreams
f := &Fs{
name: name,
root: root,
@@ -451,6 +441,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
f.features = (&fs.Features{
ReadMimeType: true, // can read the mime type of objects
CanHaveEmptyDirectories: true, // can have empty directories
NoMultiThreading: true, // can't have multiple threads downloading
}).Fill(ctx, f)
if err := f.newClientWithPacer(ctx); err != nil {
@@ -1420,6 +1411,16 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
// ------------------------------------------------------------
// parseFileID gets fid parameter from url query
func parseFileID(s string) string {
if u, err := url.Parse(s); err == nil {
if q, err := url.ParseQuery(u.RawQuery); err == nil {
return q.Get("fid")
}
}
return ""
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.File) (err error) {
if info.Kind == api.KindOfFolder {
@@ -1441,10 +1442,18 @@ func (o *Object) setMetaData(info *api.File) (err error) {
o.md5sum = info.Md5Checksum
if info.Links.ApplicationOctetStream != nil {
o.link = info.Links.ApplicationOctetStream
}
if len(info.Medias) > 0 && info.Medias[0].Link != nil {
fs.Debugf(o, "Using a media link")
o.link = info.Medias[0].Link
if fid := parseFileID(o.link.URL); fid != "" {
for mid, media := range info.Medias {
if media.Link == nil {
continue
}
if mfid := parseFileID(media.Link.URL); fid == mfid {
fs.Debugf(o, "Using a media link from Medias[%d]", mid)
o.link = media.Link
break
}
}
}
}
return nil
}
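parseFileID just extracts the fid query parameter from a link URL so the object's direct link can be matched against the corresponding entry in Medias. A standalone restatement with a hypothetical URL (the real link format is not documented here):
package main
import (
	"fmt"
	"net/url"
)
// parseFileID mirrors the helper above: return the fid query parameter,
// or "" if the string doesn't parse or carries no fid.
func parseFileID(s string) string {
	if u, err := url.Parse(s); err == nil {
		if q, err := url.ParseQuery(u.RawQuery); err == nil {
			return q.Get("fid")
		}
	}
	return ""
}
func main() {
	fmt.Println(parseFileID("https://dl.example.com/download?fid=abc123&expires=1")) // abc123
	fmt.Println(parseFileID("https://dl.example.com/download"))                      // ""
}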

View File

@@ -23,6 +23,7 @@ import (
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
)
@@ -252,9 +253,12 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return f.putUnchecked(ctx, in, src, src.Remote(), options...)
}
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options ...fs.OpenOption) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err)
size := src.Size()
remote := src.Remote()
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
@@ -540,24 +544,59 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
modTime := src.ModTime(ctx)
var resp struct {
File putio.File `json:"file"`
}
// For some unknown reason the API sometimes returns the name
// already exists unless we upload to a temporary name and
// rename
//
// {"error_id":null,"error_message":"Name already exist","error_type":"NAME_ALREADY_EXIST","error_uri":"http://api.put.io/v2/docs","extra":{},"status":"ERROR","status_code":400}
suffix := "." + random.String(8)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", f.opt.Enc.FromStandardName(leaf))
params.Set("name", f.opt.Enc.FromStandardName(leaf+suffix))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
_, err = f.client.Do(req, &resp)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(resp.File.ID, 10))
params.Set("name", f.opt.Enc.FromStandardName(leaf))
req, err := f.client.NewRequest(ctx, "POST", "/v2/files/rename", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
_, err = f.client.Do(req, &resp)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
o, err = f.newObjectWithInfo(ctx, remote, resp.File)
if err != nil {
return nil, err
}
err = o.SetModTime(ctx, modTime)
if err != nil {
return nil, err
}
return o, nil
}
// Move src to this remote using server-side move operations.
@@ -579,6 +618,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
modTime := src.ModTime(ctx)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
@@ -596,7 +636,15 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
o, err = f.NewObject(ctx, remote)
if err != nil {
return nil, err
}
err = o.SetModTime(ctx, modTime)
if err != nil {
return nil, err
}
return o, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote

View File

@@ -275,7 +275,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
newObj, err := o.fs.PutUnchecked(ctx, in, src, options...)
newObj, err := o.fs.putUnchecked(ctx, in, src, o.remote, options...)
if err != nil {
return err
}

View File

@@ -66,7 +66,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
@@ -91,6 +91,9 @@ func init() {
}, {
Value: "Alibaba",
Help: "Alibaba Cloud Object Storage System (OSS) formerly Aliyun",
}, {
Value: "ArvanCloud",
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
@@ -100,9 +103,6 @@ func init() {
}, {
Value: "Cloudflare",
Help: "Cloudflare R2 Storage",
}, {
Value: "ArvanCloud",
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "DigitalOcean",
Help: "DigitalOcean Spaces",
@@ -136,6 +136,9 @@ func init() {
}, {
Value: "Netease",
Help: "Netease Object Storage (NOS)",
}, {
Value: "Petabox",
Help: "Petabox Object Storage",
}, {
Value: "RackCorp",
Help: "RackCorp Object Storage",
@@ -440,10 +443,30 @@ func init() {
Value: "eu-south-2",
Help: "Logrono, Spain",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "Petabox",
Examples: []fs.OptionExample{{
Value: "us-east-1",
Help: "US East (N. Virginia)",
}, {
Value: "eu-central-1",
Help: "Europe (Frankfurt)",
}, {
Value: "ap-southeast-1",
Help: "Asia Pacific (Singapore)",
}, {
Value: "me-south-1",
Help: "Middle East (Bahrain)",
}, {
Value: "sa-east-1",
Help: "South America (São Paulo)",
}},
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -552,15 +575,15 @@ func init() {
Help: "Anhui China (Huainan)",
}},
}, {
// ArvanCloud endpoints: https://www.arvancloud.com/en/products/cloud-storage
// ArvanCloud endpoints: https://www.arvancloud.ir/en/products/cloud-storage
Name: "endpoint",
Help: "Endpoint for Arvan Cloud Object Storage (AOS) API.",
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "The default endpoint - a good choice if you are unsure.\nTehran Iran (Asiatech)",
Value: "s3.ir-thr-at1.arvanstorage.ir",
Help: "The default endpoint - a good choice if you are unsure.\nTehran Iran (Simin)",
}, {
Value: "s3.ir-tbz-sh1.arvanstorage.com",
Value: "s3.ir-tbz-sh1.arvanstorage.ir",
Help: "Tabriz Iran (Shahriar)",
}},
}, {
@@ -768,6 +791,30 @@ func init() {
Value: "s3-eu-south-2.ionoscloud.com",
Help: "Logrono, Spain",
}},
}, {
Name: "endpoint",
Help: "Endpoint for Petabox S3 Object Storage.\n\nSpecify the endpoint from the same region.",
Provider: "Petabox",
Required: true,
Examples: []fs.OptionExample{{
Value: "s3.petabox.io",
Help: "US East (N. Virginia)",
}, {
Value: "s3.us-east-1.petabox.io",
Help: "US East (N. Virginia)",
}, {
Value: "s3.eu-central-1.petabox.io",
Help: "Europe (Frankfurt)",
}, {
Value: "s3.ap-southeast-1.petabox.io",
Help: "Asia Pacific (Singapore)",
}, {
Value: "s3.me-south-1.petabox.io",
Help: "Middle East (Bahrain)",
}, {
Value: "s3.sa-east-1.petabox.io",
Help: "South America (São Paulo)",
}},
}, {
// Liara endpoints: https://liara.ir/landing/object-storage
Name: "endpoint",
@@ -1109,7 +1156,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,RackCorp,Qiniu,Petabox",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -1211,8 +1258,12 @@ func init() {
Help: "Liara Iran endpoint",
Provider: "Liara",
}, {
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "ArvanCloud Tehran Iran (Asiatech) endpoint",
Value: "s3.ir-thr-at1.arvanstorage.ir",
Help: "ArvanCloud Tehran Iran (Simin) endpoint",
Provider: "ArvanCloud",
}, {
Value: "s3.ir-tbz-sh1.arvanstorage.ir",
Help: "ArvanCloud Tabriz Iran (Shahriar) endpoint",
Provider: "ArvanCloud",
}},
}, {
@@ -1396,7 +1447,7 @@ func init() {
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "ir-thr-at1",
Help: "Tehran Iran (Asiatech)",
Help: "Tehran Iran (Simin)",
}, {
Value: "ir-tbz-sh1",
Help: "Tabriz Iran (Shahriar)",
@@ -1593,7 +1644,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -1836,7 +1887,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Standard storage class",
}},
}, {
// Mapping from here: https://www.arvancloud.com/en/products/cloud-storage
// Mapping from here: https://www.arvancloud.ir/en/products/cloud-storage
Name: "storage_class",
Help: "The storage class to use when storing new objects in ArvanCloud.",
Provider: "ArvanCloud",
@@ -1863,7 +1914,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://www.scaleway.com/en/docs/object-storage-glacier/#-Scaleway-Storage-Classes
// Mapping from here: https://www.scaleway.com/en/docs/storage/object/quickstart/
Name: "storage_class",
Help: "The storage class to use when storing new objects in S3.",
Provider: "Scaleway",
@@ -1872,10 +1923,13 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Default.",
}, {
Value: "STANDARD",
Help: "The Standard class for any upload.\nSuitable for on-demand content like streaming or CDN.",
Help: "The Standard class for any upload.\nSuitable for on-demand content like streaming or CDN.\nAvailable in all regions.",
}, {
Value: "GLACIER",
Help: "Archived storage.\nPrices are lower, but it needs to be restored first to be accessed.",
Help: "Archived storage.\nPrices are lower, but it needs to be restored first to be accessed.\nAvailable in FR-PAR and NL-AMS regions.",
}, {
Value: "ONEZONE_IA",
Help: "One Zone - Infrequent Access.\nA good choice for storing secondary backup copies or easily re-creatable data.\nAvailable in the FR-PAR region only.",
}},
}, {
// Mapping from here: https://developer.qiniu.com/kodo/5906/storage-type
@@ -2193,6 +2247,15 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.`,
Advanced: true,
}, {
Name: "directory_markers",
Default: false,
Advanced: true,
Help: `Upload an empty object with a trailing slash when a new directory is created.
Empty folders are unsupported for bucket based remotes, so this option creates an
empty object ending with "/" to persist the folder.
`,
}, {
Name: "use_multipart_etag",
Help: `Whether to use ETag in multipart uploads for verification
@@ -2422,6 +2485,7 @@ type Options struct {
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
DisableHTTP2 bool `config:"disable_http2"`
DownloadURL string `config:"download_url"`
DirectoryMarkers bool `config:"directory_markers"`
UseMultipartEtag fs.Tristate `config:"use_multipart_etag"`
UsePresignedRequest bool `config:"use_presigned_request"`
Versions bool `config:"versions"`
@@ -2868,6 +2932,8 @@ func setQuirks(opt *Options) {
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false
urlEncodeListings = false
case "Petabox":
// No quirks
case "Liara":
virtualHostStyle = false
urlEncodeListings = false
@@ -2911,6 +2977,7 @@ func setQuirks(opt *Options) {
case "Qiniu":
useMultipartEtag = false
urlEncodeListings = false
virtualHostStyle = false
case "GCS":
// Google break request Signature by mutating accept-encoding HTTP header
// https://github.com/rclone/rclone/issues/6670
@@ -2989,6 +3056,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
fs.Debugf(nil, "name = %q, root = %q, opt = %#v", name, root, opt)
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, fmt.Errorf("s3: chunk size: %w", err)
@@ -3079,6 +3147,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.Provider == "IDrive" {
f.features.SetTier = false
}
if opt.DirectoryMarkers {
f.features.CanHaveEmptyDirectories = true
}
// f.listMultipartUploads()
if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject && !strings.HasSuffix(root, "/") {
@@ -3571,6 +3642,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
default:
listBucket = f.newV2List(&req)
}
foundItems := 0
for {
var resp *s3.ListObjectsV2Output
var err error
@@ -3612,6 +3684,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
return err
}
if !opt.recurse {
foundItems += len(resp.CommonPrefixes)
for _, commonPrefix := range resp.CommonPrefixes {
if commonPrefix.Prefix == nil {
fs.Logf(f, "Nil common prefix received")
@@ -3644,6 +3717,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
}
}
}
foundItems += len(resp.Contents)
for i, object := range resp.Contents {
remote := aws.StringValue(object.Key)
if urlEncodeListings {
@@ -3658,19 +3732,29 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
fs.Logf(f, "Odd name received %q", remote)
continue
}
isDirectory := (remote == "" || strings.HasSuffix(remote, "/")) && object.Size != nil && *object.Size == 0
// is this a directory marker?
if isDirectory {
if opt.noSkipMarkers {
// process directory markers as files
isDirectory = false
} else {
// Don't insert the root directory
if remote == opt.directory {
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
}
remote = remote[len(opt.prefix):]
isDirectory := remote == "" || strings.HasSuffix(remote, "/")
if opt.addBucket {
remote = bucket.Join(opt.bucket, remote)
}
// is this a directory marker?
if isDirectory && object.Size != nil && *object.Size == 0 && !opt.noSkipMarkers {
continue // skip directory marker
}
if versionIDs != nil {
err = fn(remote, object, versionIDs[i], false)
err = fn(remote, object, versionIDs[i], isDirectory)
} else {
err = fn(remote, object, nil, false)
err = fn(remote, object, nil, isDirectory)
}
if err != nil {
if err == errEndList {
@@ -3683,6 +3767,20 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
break
}
}
if f.opt.DirectoryMarkers && foundItems == 0 && opt.directory != "" {
// Determine whether the directory exists or not by whether it has a marker
req := s3.HeadObjectInput{
Bucket: &opt.bucket,
Key: &opt.directory,
}
_, err := f.headObject(ctx, &req)
if err != nil {
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
return err
}
}
return nil
}
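The listing change classifies an entry as a directory marker when its key is empty or ends in "/" and its size is zero, then surfaces it as a directory unless opt.noSkipMarkers asks for it to be processed as a plain file. A standalone restatement of just that classification (the listing plumbing is omitted):
package main
import (
	"fmt"
	"strings"
)
// classify mirrors the marker logic above: report whether a listed key
// should be treated as a directory entry rather than a file.
func classify(key string, size int64, noSkipMarkers bool) bool {
	isDirectory := (key == "" || strings.HasSuffix(key, "/")) && size == 0
	if isDirectory && noSkipMarkers {
		isDirectory = false // treat the marker object as an ordinary file
	}
	return isDirectory
}
func main() {
	fmt.Println(classify("dir/", 0, false))     // true - zero-length marker
	fmt.Println(classify("dir/", 5, false))     // false - real object with "/" name
	fmt.Println(classify("dir/file", 0, false)) // false - ordinary empty file
}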
@@ -3873,10 +3971,70 @@ func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) {
return false, err
}
// Create directory marker file and parents
func (f *Fs) createDirectoryMarker(ctx context.Context, bucket, dir string) error {
if !f.opt.DirectoryMarkers || bucket == "" {
return nil
}
// Object to be uploaded
o := &Object{
fs: f,
meta: map[string]string{
metaMtime: swift.TimeToFloatString(time.Now()),
},
}
for {
_, bucketPath := f.split(dir)
// Don't create the directory marker if it is the bucket or at the very root
if bucketPath == "" {
break
}
o.remote = dir + "/"
// Check to see if object already exists
_, err := o.headObject(ctx)
if err == nil {
return nil
}
// Upload it if not
fs.Debugf(o, "Creating directory marker")
content := io.Reader(strings.NewReader(""))
err = o.Update(ctx, content, o)
if err != nil {
return fmt.Errorf("creating directory marker failed: %w", err)
}
// Now check parent directory exists
dir = path.Dir(dir)
if dir == "/" || dir == "." {
break
}
}
return nil
}
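// To make the walk above concrete: a hypothetical Mkdir of "a/b/c" in bucket
// "mybucket" leaves these zero-length marker objects in the bucket, each
// carrying the mtime user metadata set above (a sketch - keys only, names
// illustrative):
//
//	a/b/c/
//	a/b/
//	a/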
// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
e := f.makeBucket(ctx, bucket)
if e != nil {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
return f.Mkdir(ctx, dir)
}
// makeBucket creates the bucket if it doesn't exist
@@ -3917,6 +4075,18 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
bucket, directory := f.split(dir)
// Remove directory marker file
if f.opt.DirectoryMarkers && bucket != "" && dir != "" {
o := &Object{
fs: f,
remote: dir + "/",
}
fs.Debugf(o, "Removing directory marker")
err := o.Remove(ctx)
if err != nil {
return fmt.Errorf("removing directory marker failed: %w", err)
}
}
if bucket == "" || directory != "" {
return nil
}
@@ -4115,7 +4285,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, errNotWithVersionAt
}
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
err := f.mkdirParent(ctx, remote)
if err != nil {
return nil, err
}
@@ -4742,22 +4912,26 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
Key: &bucketPath,
VersionId: o.versionID,
}
if o.fs.opt.RequesterPays {
return o.fs.headObject(ctx, &req)
}
func (f *Fs) headObject(ctx context.Context, req *s3.HeadObjectInput) (resp *s3.HeadObjectOutput, err error) {
if f.opt.RequesterPays {
req.RequestPayer = aws.String(s3.RequestPayerRequester)
}
if o.fs.opt.SSECustomerAlgorithm != "" {
req.SSECustomerAlgorithm = &o.fs.opt.SSECustomerAlgorithm
if f.opt.SSECustomerAlgorithm != "" {
req.SSECustomerAlgorithm = &f.opt.SSECustomerAlgorithm
}
if o.fs.opt.SSECustomerKey != "" {
req.SSECustomerKey = &o.fs.opt.SSECustomerKey
if f.opt.SSECustomerKey != "" {
req.SSECustomerKey = &f.opt.SSECustomerKey
}
if o.fs.opt.SSECustomerKeyMD5 != "" {
req.SSECustomerKeyMD5 = &o.fs.opt.SSECustomerKeyMD5
if f.opt.SSECustomerKeyMD5 != "" {
req.SSECustomerKeyMD5 = &f.opt.SSECustomerKeyMD5
}
err = o.fs.pacer.Call(func() (bool, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObjectWithContext(ctx, &req)
return o.fs.shouldRetry(ctx, err)
resp, err = f.c.HeadObjectWithContext(ctx, req)
return f.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -4767,7 +4941,9 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
}
return nil, err
}
o.fs.cache.MarkOK(bucket)
if req.Bucket != nil {
f.cache.MarkOK(*req.Bucket)
}
return resp, nil
}
@@ -5416,9 +5592,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errNotWithVersionAt
}
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
// Create parent dir/bucket if not saving directory marker
if !strings.HasSuffix(o.remote, "/") {
err := o.fs.mkdirParent(ctx, o.remote)
if err != nil {
return err
}
}
modTime := src.ModTime(ctx)
size := src.Size()
@@ -5741,7 +5920,7 @@ func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error)
setMetadata("content-disposition", o.contentDisposition)
setMetadata("content-encoding", o.contentEncoding)
setMetadata("content-language", o.contentLanguage)
setMetadata("tier", o.storageClass)
metadata["tier"] = o.GetTier()
return metadata, nil
}

View File

@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
@@ -317,14 +318,19 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := path.Join(fs.ConfigString(f), fileNameVersion)
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --s3-versions is set in the config of the new remote
confPath := strings.Replace(newPath, ":", ",versions:", 1)
fNew, err := cache.Get(ctx, confPath)
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
assert.Equal(t, fs.ErrorIsFile, err)
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigString(fNew)))
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})

View File

@@ -5,6 +5,7 @@ import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -20,6 +21,24 @@ func TestIntegration(t *testing.T) {
})
}
func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
name := "TestS3"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}

View File

@@ -14,6 +14,7 @@ import (
// URL parameters that need to be added to the signature
var s3ParamsToSign = map[string]struct{}{
"delete": {},
"acl": {},
"location": {},
"logging": {},

View File

@@ -994,6 +994,7 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
SlowHash: true,
PartialUploads: true,
}).Fill(ctx, f)
// Make a connection and pool it to return errors early
c, err := f.getSftpConnection(ctx)
@@ -1065,7 +1066,7 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
}
}
f.putSftpConnection(&c, err)
if root != "" {
if root != "" && !strings.HasSuffix(root, "/") {
// Check to see if the root is actually an existing file,
// and if so change the filesystem root to its parent directory.
oldAbsRoot := f.absRoot
@@ -1168,13 +1169,6 @@ func (f *Fs) dirExists(ctx context.Context, dir string) (bool, error) {
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
root := path.Join(f.absRoot, dir)
ok, err := f.dirExists(ctx, root)
if err != nil {
return nil, fmt.Errorf("List failed: %w", err)
}
if !ok {
return nil, fs.ErrorDirNotFound
}
sftpDir := root
if sftpDir == "" {
sftpDir = "."
@@ -1186,6 +1180,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
infos, err := c.sftpClient.ReadDir(sftpDir)
f.putSftpConnection(&c, err)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("error listing %q: %w", dir, err)
}
for _, info := range infos {
@@ -1329,10 +1326,17 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("Move: %w", err)
}
err = c.sftpClient.Rename(
srcObj.path(),
path.Join(f.absRoot, remote),
)
srcPath, dstPath := srcObj.path(), path.Join(f.absRoot, remote)
if _, ok := c.sftpClient.HasExtension("posix-rename@openssh.com"); ok {
err = c.sftpClient.PosixRename(srcPath, dstPath)
} else {
// If haven't got PosixRename then remove source first before renaming
err = c.sftpClient.Remove(dstPath)
if err != nil && !errors.Is(err, iofs.ErrNotExist) {
fs.Errorf(f, "Move: Failed to remove existing file %q: %v", dstPath, err)
}
err = c.sftpClient.Rename(srcPath, dstPath)
}
f.putSftpConnection(&c, err)
if err != nil {
return nil, fmt.Errorf("Move Rename failed: %w", err)

View File

@@ -775,8 +775,13 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// FIXMEPutStream uploads to the remote path with the modTime given of indeterminate size
//
// PutStream no longer appears to work - the streamed uploads need the
// size specified at the start, otherwise we get this error:
//
// upload failed: file size does not match (-2)
func (f *Fs) FIXMEPutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
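// With PutStream disabled the caller must know the size before uploading. A
// minimal sketch of the obvious workaround - spooling an unknown-length
// stream to a temporary file first. This helper is illustrative only, not
// rclone's actual fallback; it assumes "io" and "os" are imported.
func spoolToTempFile(r io.Reader) (f *os.File, size int64, err error) {
	f, err = os.CreateTemp("", "sharefile-spool-*")
	if err != nil {
		return nil, 0, err
	}
	// Copy the stream to disk to learn its size, then rewind for the upload.
	size, err = io.Copy(f, r)
	if err == nil {
		_, err = f.Seek(0, io.SeekStart)
	}
	if err != nil {
		_ = f.Close()
		_ = os.Remove(f.Name())
		return nil, 0, err
	}
	// Caller must close and remove f once the upload finishes.
	return f, size, nil
}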
@@ -1453,12 +1458,12 @@ func (o *Object) ID() string {
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
// _ fs.PutStreamer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)

View File

@@ -528,7 +528,11 @@ func (f *Fs) NewObject(ctx context.Context, relative string) (_ fs.Object, err e
// May create the object even if it returns an error - if so will return the
// object and the error, otherwise will return nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (_ fs.Object, err error) {
fs.Debugf(f, "cp input ./%s # %+v %d", src.Remote(), options, src.Size())
return f.put(ctx, in, src, src.Remote(), options...)
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options ...fs.OpenOption) (_ fs.Object, err error) {
fs.Debugf(f, "cp input ./%s # %+v %d", remote, options, src.Size())
// Reject options we don't support.
for _, option := range options {
@@ -539,7 +543,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}
bucketName, bucketPath := f.absolute(src.Remote())
bucketName, bucketPath := f.absolute(remote)
upload, err := f.project.UploadObject(ctx, bucketName, bucketPath, nil)
if err != nil {
@@ -549,7 +553,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
if err != nil {
aerr := upload.Abort()
if aerr != nil && !errors.Is(aerr, uplink.ErrUploadDone) {
fs.Errorf(f, "cp input ./%s %+v: %+v", src.Remote(), options, aerr)
fs.Errorf(f, "cp input ./%s %+v: %+v", remote, options, aerr)
}
}
}()
@@ -574,7 +578,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
err = fserrors.RetryError(err)
fs.Errorf(f, "cp input ./%s %+v: %+v\n", src.Remote(), options, err)
fs.Errorf(f, "cp input ./%s %+v: %+v\n", remote, options, err)
return nil, err
}
@@ -589,11 +593,19 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
return nil, err
}
err = fserrors.RetryError(errors.New("bucket was not available, now created, the upload must be retried"))
} else if errors.Is(err, uplink.ErrTooManyRequests) {
// Storj has a rate limit of 1 upload per second to the same file.
// This produces ErrTooManyRequests here, so we wait 1 second and retry.
//
// See: https://github.com/storj/uplink/issues/149
fs.Debugf(f, "uploading too fast - sleeping for 1 second: %v", err)
time.Sleep(time.Second)
err = fserrors.RetryError(err)
}
return nil, err
}
return newObjectFromUplink(f, src.Remote(), upload.Info()), nil
return newObjectFromUplink(f, remote, upload.Info()), nil
}
// PutStream uploads to the remote path with the modTime given of indeterminate

View File

@@ -176,9 +176,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (_ io.ReadC
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
fs.Debugf(o, "cp input ./%s %+v", src.Remote(), options)
fs.Debugf(o, "cp input ./%s %+v", o.Remote(), options)
oNew, err := o.fs.Put(ctx, in, src, options...)
oNew, err := o.fs.put(ctx, in, src, o.Remote(), options...)
if err == nil {
*o = *(oNew.(*Object))

View File

@@ -100,7 +100,7 @@ but other operations such as Remove and Copy will fail.
func init() {
fs.Register(&fs.RegInfo{
Name: "swift",
Description: "OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)",
Description: "OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)",
NewFs: NewFs,
Options: append([]fs.Option{{
Name: "env_auth",
@@ -142,6 +142,9 @@ func init() {
}, {
Value: "https://auth.cloud.ovh.net/v3",
Help: "OVH",
}, {
Value: "https://authenticate.ain.net",
Help: "Blomp Cloud Storage",
}},
}, {
Name: "user_id",
@@ -1558,6 +1561,10 @@ func (o *Object) Remove(ctx context.Context) (err error) {
// Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(ctx, container, containerPath)
if err == swift.ObjectNotFound {
fs.Errorf(o, "Dangling object - ignoring: %v", err)
err = nil
}
return shouldRetry(ctx, err)
})
if err != nil {

View File

@@ -49,8 +49,7 @@ func (e Errors) Error() string {
if len(e) == 0 {
buf.WriteString("no error")
}
if len(e) == 1 {
} else if len(e) == 1 {
buf.WriteString("1 error: ")
} else {
fmt.Fprintf(&buf, "%d errors: ", len(e))
@@ -61,8 +60,17 @@ func (e Errors) Error() string {
buf.WriteString("; ")
}
buf.WriteString(err.Error())
if err != nil {
buf.WriteString(err.Error())
} else {
buf.WriteString("nil error")
}
}
return buf.String()
}
// Unwrap returns the wrapped errors
func (e Errors) Unwrap() []error {
return e
}

View File

@@ -0,0 +1,94 @@
//go:build go1.20
// +build go1.20
package union
import (
"errors"
"testing"
"github.com/stretchr/testify/assert"
)
var (
err1 = errors.New("Error 1")
err2 = errors.New("Error 2")
err3 = errors.New("Error 3")
)
func TestErrorsMap(t *testing.T) {
es := Errors{
nil,
err1,
err2,
}
want := Errors{
err2,
}
got := es.Map(func(e error) error {
if e == err1 {
return nil
}
return e
})
assert.Equal(t, want, got)
}
func TestErrorsFilterNil(t *testing.T) {
es := Errors{
nil,
err1,
nil,
err2,
nil,
}
want := Errors{
err1,
err2,
}
got := es.FilterNil()
assert.Equal(t, want, got)
}
func TestErrorsErr(t *testing.T) {
// Check not all nil case
es := Errors{
nil,
err1,
nil,
err2,
nil,
}
want := Errors{
err1,
err2,
}
got := es.Err()
// Check all nil case
assert.Equal(t, want, got)
es = Errors{
nil,
nil,
nil,
}
assert.Nil(t, es.Err())
}
func TestErrorsError(t *testing.T) {
assert.Equal(t, "no error", Errors{}.Error())
assert.Equal(t, "1 error: Error 1", Errors{err1}.Error())
assert.Equal(t, "1 error: nil error", Errors{nil}.Error())
assert.Equal(t, "2 errors: Error 1; Error 2", Errors{err1, err2}.Error())
}
func TestErrorsUnwrap(t *testing.T) {
es := Errors{
err1,
err2,
}
assert.Equal(t, []error{err1, err2}, es.Unwrap())
assert.True(t, errors.Is(es, err1))
assert.True(t, errors.Is(es, err2))
assert.False(t, errors.Is(es, err3))
}

View File

@@ -3,7 +3,6 @@ package policy
import (
"context"
"fmt"
"math/rand"
"path"
"strings"
"time"
@@ -109,9 +108,7 @@ func findEntry(ctx context.Context, f fs.Fs, remote string) fs.DirEntry {
if err != nil {
return nil
}
// random modtime for root
randomNow := time.Unix(time.Now().Unix()-rand.Int63n(10000), 0)
return fs.NewDir("", randomNow)
return fs.NewDir("", time.Time{})
}
found := false
for _, e := range entries {

View File

@@ -801,6 +801,24 @@ func (f *Fs) Shutdown(ctx context.Context) error {
return errs.Err()
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
// otherwise cleaning up old versions of files.
func (f *Fs) CleanUp(ctx context.Context) error {
errs := Errors(make([]error, len(f.upstreams)))
multithread(len(f.upstreams), func(i int) {
u := f.upstreams[i]
if do := u.Features().CleanUp; do != nil {
err := do(ctx)
if err != nil {
errs[i] = fmt.Errorf("%s: %w", u.Name(), err)
}
}
})
return errs.Err()
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
@@ -884,6 +902,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove, slowHash := true, false
for _, f := range upstreams {
@@ -914,6 +933,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
}
// show that we wrap other backends
features.Overlay = true
f.features = features
// Get common intersection of hashes
@@ -960,4 +982,5 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
)

View File

@@ -11,6 +11,11 @@ import (
"github.com/rclone/rclone/fstest/fstests"
)
var (
unimplementableFsMethods = []string{"UnWrap", "WrapFs", "SetWrapper", "UserInfo", "Disconnect", "PublicLink", "PutUnchecked", "MergeDirs", "OpenWriterAt"}
unimplementableObjectMethods = []string{}
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
@@ -18,8 +23,8 @@ func TestIntegration(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
})
}
@@ -39,8 +44,8 @@ func TestStandard(t *testing.T) {
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}
@@ -61,8 +66,8 @@ func TestRO(t *testing.T) {
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}
@@ -83,8 +88,8 @@ func TestNC(t *testing.T) {
{Name: name, Key: "create_policy", Value: "epmfs"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}
@@ -105,8 +110,8 @@ func TestPolicy1(t *testing.T) {
{Name: name, Key: "create_policy", Value: "lus"},
{Name: name, Key: "search_policy", Value: "all"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}
@@ -127,8 +132,8 @@ func TestPolicy2(t *testing.T) {
{Name: name, Key: "create_policy", Value: "rand"},
{Name: name, Key: "search_policy", Value: "ff"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}
@@ -149,8 +154,8 @@ func TestPolicy3(t *testing.T) {
{Name: name, Key: "create_policy", Value: "all"},
{Name: name, Key: "search_policy", Value: "all"},
},
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
UnimplementableFsMethods: unimplementableFsMethods,
UnimplementableObjectMethods: unimplementableObjectMethods,
QuickTestOK: true,
})
}

View File

@@ -45,6 +45,11 @@ func init() {
Options: []fs.Option{{
Help: "Your access token.\n\nGet it from https://uptobox.com/my_account.",
Name: "access_token",
}, {
Help: "Set to make uploaded files private",
Name: "private",
Advanced: true,
Default: false,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -63,6 +68,7 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
AccessToken string `config:"access_token"`
Private bool `config:"private"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -75,6 +81,7 @@ type Fs struct {
srv *rest.Client
pacer *fs.Pacer
IDRegexp *regexp.Regexp
public string // "0" to make objects private
}
// Object represents an Uptobox object
@@ -211,6 +218,9 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
CanHaveEmptyDirectories: true,
ReadMimeType: false,
}).Fill(ctx, f)
if f.opt.Private {
f.public = "0"
}
client := fshttp.NewClient(ctx)
f.srv = rest.NewClient(client).SetRoot(apiBaseURL)
@@ -472,11 +482,11 @@ func (f *Fs) updateFileInformation(ctx context.Context, update *api.UpdateFileIn
return err
}
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) error {
if size > int64(200e9) { // max size 200GB
return nil, errors.New("file too big, can't upload")
return errors.New("file too big, can't upload")
} else if size == 0 {
return nil, fs.ErrorCantUploadEmptyFiles
return fs.ErrorCantUploadEmptyFiles
}
// yes it does take 4 requests if we're uploading to root and 6+ if we're uploading to any subdir :(
@@ -494,19 +504,19 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
return err
}
if info.StatusCode != 0 {
return nil, fmt.Errorf("putUnchecked api error: %d - %s", info.StatusCode, info.Message)
return fmt.Errorf("putUnchecked api error: %d - %s", info.StatusCode, info.Message)
}
// we need to have a safe name for the upload to work
tmpName := "rcloneTemp" + random.String(8)
upload, err := f.uploadFile(ctx, in, size, tmpName, info.Data.UploadLink, options...)
if err != nil {
return nil, err
return err
}
if len(upload.Files) != 1 {
return nil, errors.New("upload unexpected response")
return errors.New("upload unexpected response")
}
match := f.IDRegexp.FindStringSubmatch(upload.Files[0].URL)
@@ -521,23 +531,27 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
// this might need some more error handling. if any of the following requests fail
// we'll leave an orphaned temporary file floating around somewhere
// they rarely fail though
return nil, err
return err
}
err = f.move(ctx, fullBase, match[1])
if err != nil {
return nil, err
return err
}
}
// rename file to final name
err = f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: match[1], NewName: f.opt.Enc.FromStandardName(leaf)})
err = f.updateFileInformation(ctx, &api.UpdateFileInformation{
Token: f.opt.AccessToken,
FileCode: match[1],
NewName: f.opt.Enc.FromStandardName(leaf),
Public: f.public,
})
if err != nil {
return nil, err
return err
}
// finally fetch the file object.
return f.NewObject(ctx, remote)
return nil
}
// Put in to the remote path with the modTime given of the given size
@@ -567,7 +581,11 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
err := f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
if err != nil {
return nil, err
}
return f.NewObject(ctx, src.Remote())
}
// CreateDir dir creates a directory with the given parent path
@@ -660,7 +678,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
if err != nil {
return err
}
if info.Data.CurrentFolder.FileCount > 0 {
if len(info.Data.Folders) > 0 || len(info.Data.Files) > 0 {
return fs.ErrorDirectoryNotEmpty
}
@@ -696,7 +714,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// rename to final name if we need to
if needRename {
err := f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: srcObj.code, NewName: f.opt.Enc.FromStandardName(dstLeaf)})
err := f.updateFileInformation(ctx, &api.UpdateFileInformation{
Token: f.opt.AccessToken,
FileCode: srcObj.code,
NewName: f.opt.Enc.FromStandardName(dstLeaf),
Public: f.public,
})
if err != nil {
return nil, fmt.Errorf("move: failed final rename: %w", err)
}
@@ -888,7 +911,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
if needRename {
err := f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: newObj.(*Object).code, NewName: f.opt.Enc.FromStandardName(dstLeaf)})
err := f.updateFileInformation(ctx, &api.UpdateFileInformation{
Token: f.opt.AccessToken,
FileCode: newObj.(*Object).code,
NewName: f.opt.Enc.FromStandardName(dstLeaf),
Public: f.public,
})
if err != nil {
return nil, fmt.Errorf("copy: failed final rename: %w", err)
}
@@ -923,7 +951,8 @@ func (o *Object) Remote() string {
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the HTTP headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return time.Now()
ci := fs.GetConfig(ctx)
return time.Time(ci.DefaultTime)
}
// Size returns the size of an object in bytes
@@ -1000,7 +1029,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// upload with new size but old name
info, err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
if err != nil {
return err
}
@@ -1011,6 +1040,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return fmt.Errorf("failed to remove old version: %w", err)
}
// Fetch new object after deleting the duplicate
info, err := o.fs.NewObject(ctx, o.Remote())
if err != nil {
return err
}
// Replace guts of old object with new one
*o = *info.(*Object)

View File

@@ -175,6 +175,7 @@ type Fs struct {
precision time.Duration // mod time precision
canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime
propsetMtime bool // set if can use propset
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
checkBeforePurge bool // enables extra check that directory to purge really exists
hasOCMD5 bool // set if can use owncloud style checksums for MD5
@@ -568,7 +569,7 @@ func (f *Fs) fetchAndSetBearerToken() error {
return nil
}
var validateNextCloudChunkedURL = regexp.MustCompile(`^.*/dav/files/[^/]+/?$`)
var validateNextCloudChunkedURL = regexp.MustCompile(`^.*/dav/files/`)
// setQuirks adjusts the Fs for the vendor passed in
func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
@@ -582,11 +583,13 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
f.canStream = true
f.precision = time.Second
f.useOCMtime = true
f.propsetMtime = true
f.hasOCMD5 = true
f.hasOCSHA1 = true
case "nextcloud":
f.precision = time.Second
f.useOCMtime = true
f.propsetMtime = true
f.hasOCSHA1 = true
f.canChunk = true
if err := f.verifyChunkConfig(); err != nil {
@@ -1047,7 +1050,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
NoResponse: true,
ExtraHeaders: map[string]string{
"Destination": destinationURL.String(),
"Overwrite": "F",
"Overwrite": "T",
},
}
if f.useOCMtime {
@@ -1065,6 +1068,13 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
if err != nil {
return nil, fmt.Errorf("copy NewObject failed: %w", err)
}
if f.useOCMtime && resp.Header.Get("X-OC-Mtime") != "accepted" && f.propsetMtime {
fs.Debugf(dstObj, "Setting modtime after copy to %v", src.ModTime(ctx))
err = dstObj.SetModTime(ctx, src.ModTime(ctx))
if err != nil {
return nil, fmt.Errorf("failed to set modtime: %w", err)
}
}
return dstObj, nil
}
@@ -1147,7 +1157,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
NoResponse: true,
ExtraHeaders: map[string]string{
"Destination": addSlash(destinationURL.String()),
"Overwrite": "F",
"Overwrite": "T",
},
}
// Direct the MOVE/COPY to the source server
@@ -1299,8 +1309,53 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// Set modified time using propset
//
// <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns"><d:response><d:href>/ocm/remote.php/webdav/office/wir.jpg</d:href><d:propstat><d:prop><d:lastmodified/></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response></d:multistatus>
var owncloudPropset = `<?xml version="1.0" encoding="utf-8" ?>
<D:propertyupdate xmlns:D="DAV:">
<D:set>
<D:prop>
<lastmodified xmlns="DAV:">%d</lastmodified>
</D:prop>
</D:set>
</D:propertyupdate>
`
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if o.fs.propsetMtime {
opts := rest.Opts{
Method: "PROPPATCH",
Path: o.filePath(),
NoRedirect: true,
Body: strings.NewReader(fmt.Sprintf(owncloudPropset, modTime.Unix())),
}
var result api.Multistatus
var resp *http.Response
var err error
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallXML(ctx, &opts, nil, &result)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
// does not exist
if apiErr.StatusCode == http.StatusNotFound {
return fs.ErrorObjectNotFound
}
}
return fmt.Errorf("couldn't set modified time: %w", err)
}
// FIXME check if response is valid
if len(result.Responses) == 1 && result.Responses[0].Props.StatusOK() {
// update cached modtime
o.modTime = modTime
return nil
}
// fallback
return fs.ErrorCantSetModTime
}
return fs.ErrorCantSetModTime
}

View File

@@ -1100,7 +1100,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
NoResponse: true,
}
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err)
})

View File

@@ -1206,7 +1206,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
if partialContent && resp.StatusCode == 200 {
if partialContent && resp.StatusCode == 200 && resp.Header.Get("Content-Range") == "" {
if start > 0 {
// We need to read and discard the beginning of the data...
_, err = io.CopyN(io.Discard, resp.Body, start)

View File

@@ -98,8 +98,14 @@ Note to run these commands on a running backend then see
out, err = doCommand(context.Background(), name, arg, opt)
}
if err != nil {
if err == fs.ErrorCommandNotFound {
extra := ""
if f.Features().Overlay {
extra = " (try the underlying remote)"
}
return fmt.Errorf("%q %w%s", name, err, extra)
}
return fmt.Errorf("command %q failed: %w", name, err)
}
// Output the result
writeJSON := false

View File

@@ -824,8 +824,9 @@ func touchFiles(ctx context.Context, dateStr string, f fs.Fs, dir, glob string)
err = nil
buf := new(bytes.Buffer)
size := obj.Size()
separator := ""
if size > 0 {
err = operations.Cat(ctx, f, buf, 0, size)
err = operations.Cat(ctx, f, buf, 0, size, []byte(separator))
}
info := object.NewStaticObjectInfo(remote, date, size, true, nil, f)
if err == nil {

View File

@@ -16,11 +16,12 @@ import (
// Globals
var (
head = int64(0)
tail = int64(0)
offset = int64(0)
count = int64(-1)
discard = false
head = int64(0)
tail = int64(0)
offset = int64(0)
count = int64(-1)
discard = false
separator = string("")
)
func init() {
@@ -31,6 +32,7 @@ func init() {
flags.Int64VarP(cmdFlags, &offset, "offset", "", offset, "Start printing at offset N (or from end if -ve)")
flags.Int64VarP(cmdFlags, &count, "count", "", count, "Only print N characters")
flags.BoolVarP(cmdFlags, &discard, "discard", "", discard, "Discard the output instead of printing")
flags.StringVarP(cmdFlags, &separator, "separator", "", separator, "Separator to use between objects when printing multiple files")
}
var commandDefinition = &cobra.Command{
@@ -56,6 +58,18 @@ Use the |--head| flag to print characters only at the start, |--tail| for
the end and |--offset| and |--count| to print a section in the middle.
Note that if offset is negative it will count from the end, so
|--offset -1 --count 1| is equivalent to |--tail 1|.
Use the |--separator| flag to print a separator value between files. Be sure to
shell-escape special characters. For example, to print a newline between
files, use:
* bash:
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
* powershell:
rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
`, "|", "`"),
Annotations: map[string]string{
"versionIntroduced": "v1.33",
@@ -82,7 +96,7 @@ Note that if offset is negative it will count from the end, so
w = io.Discard
}
cmd.Run(false, false, command, func() error {
return operations.Cat(context.Background(), fsrc, w, offset, count)
return operations.Cat(context.Background(), fsrc, w, offset, count, []byte(separator))
})
},
}

View File

@@ -11,7 +11,7 @@ func init() {
}
var completionDefinition = &cobra.Command{
Use: "genautocomplete [shell]",
Use: "completion [shell]",
Short: `Output completion script for a given shell.`,
Long: `
Generates a shell completion script for rclone.
@@ -20,4 +20,5 @@ Run with ` + "`--help`" + ` to list the supported shells.
Annotations: map[string]string{
"versionIntroduced": "v1.33",
},
Aliases: []string{"genautocomplete"},
}
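// For example, to generate and install the bash completion script (a sketch;
// the output path is illustrative - with no output file argument the script
// is written to the system-wide default location, which usually needs sudo):
//
//	rclone completion bash ~/.local/share/bash-completion/completions/rclone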

View File

@@ -24,7 +24,7 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "listremotes",
Short: `List all the remotes in the config file.`,
Short: `List all the remotes in the config file and those defined in environment variables.`,
Long: `
rclone listremotes lists all the available remotes from the config file, together with any defined in environment variables.
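Remotes defined entirely through `RCLONE_CONFIG_<NAME>_<KEY>` environment
variables are listed too. A sketch (the remote name `MYS3` is arbitrary):

    export RCLONE_CONFIG_MYS3_TYPE=s3
    export RCLONE_CONFIG_MYS3_PROVIDER=AWS
    rclone listremotes
    mys3: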

View File

@@ -25,15 +25,13 @@ var _ fusefs.HandleReader = (*FileHandle)(nil)
func (fh *FileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) (err error) {
var n int
defer log.Trace(fh, "len=%d, offset=%d", req.Size, req.Offset)("read=%d, err=%v", &n, &err)
data := make([]byte, req.Size)
data := resp.Data[:req.Size]
n, err = fh.Handle.ReadAt(data, req.Offset)
resp.Data = data[:n]
if err == io.EOF {
err = nil
} else if err != nil {
return translateError(err)
}
resp.Data = data[:n]
return nil
return translateError(err)
}
// Check interface satisfied

View File

@@ -26,12 +26,14 @@ func init() {
// man mount.fuse for more info and note the -o flag for other options
func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.MountOptions) {
mountOpts = &fuse.MountOptions{
AllowOther: fsys.opt.AllowOther,
FsName: opt.DeviceName,
Name: "rclone",
DisableXAttrs: true,
Debug: fsys.opt.DebugFUSE,
MaxReadAhead: int(fsys.opt.MaxReadAhead),
AllowOther: fsys.opt.AllowOther,
FsName: opt.DeviceName,
Name: "rclone",
DisableXAttrs: true,
Debug: fsys.opt.DebugFUSE,
MaxReadAhead: int(fsys.opt.MaxReadAhead),
MaxWrite: 1024 * 1024, // Linux v4.20+ caps requests at 1 MiB
DisableReadDirPlus: true,
// RememberInodes: true,
// SingleThreaded: true,
@@ -47,12 +49,42 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
// async I/O. Concurrency for synchronous I/O is not limited.
MaxBackground int
// Write size to use. If 0, use default. This number is
// capped at the kernel maximum.
// MaxWrite is the max size for read and write requests. If 0, use
// go-fuse default (currently 64 kiB).
// This number is internally capped at MAX_KERNEL_WRITE (higher values don't make
// sense).
//
// Non-direct-io reads are mostly served via kernel readahead, which is
// additionally subject to the MaxReadAhead limit.
//
// Implementation notes:
//
// There's four values the Linux kernel looks at when deciding the request size:
// * MaxWrite, passed via InitOut.MaxWrite. Limits the WRITE size.
// * max_read, passed via a string mount option. Limits the READ size.
// go-fuse sets max_read equal to MaxWrite.
// You can see the current max_read value in /proc/self/mounts.
// * MaxPages, passed via InitOut.MaxPages. In Linux 4.20 and later, the value
// can go up to 1 MiB and go-fuse calculates the MaxPages value according
// to MaxWrite, rounding up.
// On older kernels, the value is fixed at 128 kiB and the
// passed value is ignored. No request can be larger than MaxPages, so
// READ and WRITE are effectively capped at MaxPages.
// * MaxReadAhead, passed via InitOut.MaxReadAhead.
MaxWrite int
// Max read ahead to use. If 0, use default. This number is
// capped at the kernel maximum.
// MaxReadAhead is the max read ahead size to use. It controls how much data the
// kernel reads in advance to satisfy future read requests from applications.
// How much exactly is subject to clever heuristics in the kernel
// (see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/readahead.c?h=v6.2-rc5#n375
// if you are brave) and hence also depends on the kernel version.
//
// If 0, use kernel default. This number is capped at the kernel maximum
// (128 kiB on Linux) and cannot be larger than MaxWrite.
//
// MaxReadAhead only affects buffered reads (=non-direct-io), but even then, the
// kernel can and does send larger reads to satisfy read requests from applications
// (up to MaxWrite or VM_READAHEAD_PAGES=128 kiB, whichever is less).
MaxReadAhead int
// If IgnoreSecurityLabels is set, all security related xattr
@@ -87,9 +119,19 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
// you must implement the GetLk/SetLk/SetLkw methods.
EnableLocks bool
// If set, the kernel caches all Readlink return values. The
// filesystem must use content notification to force the
// kernel to issue a new Readlink call.
EnableSymlinkCaching bool
// If set, ask kernel not to do automatic data cache invalidation.
// The filesystem is fully responsible for invalidating data cache.
ExplicitDataCacheControl bool
// Disable ReadDirPlus capability so ReadDir is used instead. Simple
// directory queries (i.e. 'ls' without '-l') can be faster with
// ReadDir, as no per-file stat calls are needed
DisableReadDirPlus bool
*/
}
@@ -176,8 +218,8 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
MountOptions: *mountOpts,
EntryTimeout: &opt.AttrTimeout,
AttrTimeout: &opt.AttrTimeout,
// UID
// GID
GID: VFS.Opt.GID,
UID: VFS.Opt.UID,
}
root, err := fsys.Root()

View File

@@ -1,16 +1,14 @@
//go:build linux || (darwin && amd64)
// +build linux darwin,amd64
//go:build linux
// +build linux
package mount2
import (
"testing"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
testy.SkipUnreliable(t)
vfstest.RunTests(t, false, mount)
}

View File

@@ -85,17 +85,16 @@ func (n *Node) lookupVfsNodeInDir(leaf string) (vfsNode vfs.Node, errno syscall.
// will not work.
func (n *Node) Statfs(ctx context.Context, out *fuse.StatfsOut) syscall.Errno {
defer log.Trace(n, "")("out=%+v", &out)
out = new(fuse.StatfsOut)
const blockSize = 4096
const fsBlocks = (1 << 50) / blockSize
out.Blocks = fsBlocks // Total data blocks in file system.
out.Bfree = fsBlocks // Free blocks in file system.
out.Bavail = fsBlocks // Free blocks in file system if you're not root.
out.Files = 1e9 // Total files in file system.
out.Ffree = 1e9 // Free files in file system.
out.Bsize = blockSize // Block size
out.NameLen = 255 // Maximum file name length?
out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
total, _, free := n.fsys.VFS.Statfs()
out.Blocks = uint64(total) / blockSize // Total data blocks in file system.
out.Bfree = uint64(free) / blockSize // Free blocks in file system.
out.Bavail = out.Bfree // Free blocks in file system if you're not root.
out.Files = 1e9 // Total files in file system.
out.Ffree = 1e9 // Free files in file system.
out.Bsize = blockSize // Block size
out.NameLen = 255 // Maximum file name length?
out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
mountlib.ClipBlocks(&out.Blocks)
mountlib.ClipBlocks(&out.Bfree)
mountlib.ClipBlocks(&out.Bavail)
@@ -405,3 +404,40 @@ func (n *Node) Rename(ctx context.Context, oldName string, newParent fusefs.Inod
}
var _ = (fusefs.NodeRenamer)((*Node)(nil))
// Getxattr should read data for the given attribute into
// `dest` and return the number of bytes. If `dest` is too
// small, it should return ERANGE and the size of the attribute.
// If not defined, Getxattr will return ENOATTR.
func (n *Node) Getxattr(ctx context.Context, attr string, dest []byte) (uint32, syscall.Errno) {
return 0, syscall.ENOSYS // we never implement this
}
var _ fusefs.NodeGetxattrer = (*Node)(nil)
// Setxattr should store data for the given attribute. See
// setxattr(2) for information about flags.
// If not defined, Setxattr will return ENOATTR.
func (n *Node) Setxattr(ctx context.Context, attr string, data []byte, flags uint32) syscall.Errno {
return syscall.ENOSYS // we never implement this
}
var _ fusefs.NodeSetxattrer = (*Node)(nil)
// Removexattr should delete the given attribute.
// If not defined, Removexattr will return ENOATTR.
func (n *Node) Removexattr(ctx context.Context, attr string) syscall.Errno {
return syscall.ENOSYS // we never implement this
}
var _ fusefs.NodeRemovexattrer = (*Node)(nil)
// Listxattr should read all attributes (null terminated) into
// `dest`. If the `dest` buffer is too small, it should return ERANGE
// and the correct size. If not defined, return an empty list and
// success.
func (n *Node) Listxattr(ctx context.Context, dest []byte) (uint32, syscall.Errno) {
return 0, syscall.ENOSYS // we never implement this
}
var _ fusefs.NodeListxattrer = (*Node)(nil)

View File

@@ -261,6 +261,17 @@ Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
#### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
the website, rclone will locate the macFUSE libraries without any further intervention.
If, however, macFUSE is installed using the [macports](https://www.macports.org/) package manager,
the following additional steps are required.
sudo mkdir /usr/local/lib
cd /usr/local/lib
sudo ln -s /opt/local/lib/libfuse.2.dylib
#### FUSE-T Limitations, Caveats, and Notes
There are some limitations, caveats, and notes about how it works. These are current as
@@ -397,20 +408,19 @@ or create systemd mount units:
|||
# /etc/systemd/system/mnt-data.mount
[Unit]
After=network-online.target
Description=Mount for /mnt/data
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
|||
optionally accompanied by systemd automount unit
|||
# /etc/systemd/system/mnt-data.automount
[Unit]
After=network-online.target
Before=remote-fs.target
Description=AutoMount for /mnt/data
[Automount]
Where=/mnt/data
TimeoutIdleSec=600

View File

@@ -68,13 +68,14 @@ var cmdSelfUpdate = &cobra.Command{
"versionIntroduced": "v1.55",
},
Run: func(command *cobra.Command, args []string) {
ctx := context.Background()
cmd.CheckArgs(0, 0, command, args)
if Opt.Package == "" {
Opt.Package = "zip"
}
gotActionFlags := Opt.Stable || Opt.Beta || Opt.Output != "" || Opt.Version != "" || Opt.Package != "zip"
if Opt.Check && !gotActionFlags {
versionCmd.CheckVersion()
versionCmd.CheckVersion(ctx)
return
}
if Opt.Package != "zip" {
@@ -108,7 +109,7 @@ func GetVersion(ctx context.Context, beta bool, version string) (newVersion, sit
if version == "" {
// Request the latest release number from the download site
_, newVersion, _, err = versionCmd.GetVersion(siteURL + "/version.txt")
_, newVersion, _, err = versionCmd.GetVersion(ctx, siteURL+"/version.txt")
return
}

View File

@@ -303,7 +303,7 @@ func (s *server) Serve() (err error) {
go func() {
fs.Logf(s.f, "Serving HTTP on %s", s.HTTPConn.Addr().String())
err = s.serveHTTP()
err := s.serveHTTP()
if err != nil {
fs.Logf(s.f, "Error on serving HTTP server: %v", err)
}

View File

@@ -3,10 +3,12 @@ package webdav
import (
"context"
"encoding/xml"
"errors"
"fmt"
"net/http"
"os"
"strconv"
"strings"
"time"
@@ -255,6 +257,52 @@ func (w *WebDAV) auth(user, pass string) (value interface{}, err error) {
return VFS, err
}
type webdavRW struct {
http.ResponseWriter
status int
}
func (rw *webdavRW) WriteHeader(statusCode int) {
rw.status = statusCode
rw.ResponseWriter.WriteHeader(statusCode)
}
func (rw *webdavRW) isSuccessful() bool {
return rw.status == 0 || (rw.status >= 200 && rw.status <= 299)
}
func (w *WebDAV) postprocess(r *http.Request, remote string) {
// postprocess sets the modtime from the request headers; it doesn't write to the client because the status has already been written
switch r.Method {
case "COPY", "MOVE", "PUT":
VFS, err := w.getVFS(r.Context())
if err != nil {
fs.Errorf(nil, "Failed to get VFS: %v", err)
return
}
// Get the node
node, err := VFS.Stat(remote)
if err != nil {
fs.Errorf(nil, "Failed to stat node: %v", err)
return
}
mh := r.Header.Get("X-OC-Mtime")
if mh != "" {
modtimeUnix, err := strconv.ParseInt(mh, 10, 64)
if err == nil {
err = node.SetModTime(time.Unix(modtimeUnix, 0))
if err != nil {
fs.Errorf(nil, "Failed to set modtime: %v", err)
}
} else {
fs.Errorf(nil, "Failed to parse modtime: %v", err)
}
}
}
}
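// For context, this implements the ownCloud convention: the client supplies
// the desired modification time as a Unix timestamp in the X-OC-Mtime header
// when uploading. Roughly (a sketch; URL and credentials are placeholders,
// and 1672531200 is 2023-01-01T00:00:00Z):
//
//	curl -u user:pass -T file.txt -H "X-OC-Mtime: 1672531200" http://localhost:8080/file.txt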
func (w *WebDAV) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
urlPath := r.URL.Path
isDir := strings.HasSuffix(urlPath, "/")
@@ -266,7 +314,12 @@ func (w *WebDAV) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
// Add URL Prefix back to path since webdavhandler needs to
// return absolute references.
r.URL.Path = w.opt.HTTP.BaseURL + r.URL.Path
w.webdavhandler.ServeHTTP(rw, r)
wrw := &webdavRW{ResponseWriter: rw}
w.webdavhandler.ServeHTTP(wrw, r)
if wrw.isSuccessful() {
w.postprocess(r, remote)
}
}
// serveDir serves a directory index at dirRemote
@@ -356,7 +409,7 @@ func (w *WebDAV) OpenFile(ctx context.Context, name string, flags int, perm os.F
if err != nil {
return nil, err
}
return Handle{Handle: f, w: w}, nil
return Handle{Handle: f, w: w, ctx: ctx}, nil
}
// RemoveAll removes a file or a directory and its contents
@@ -404,7 +457,8 @@ func (w *WebDAV) Stat(ctx context.Context, name string) (fi os.FileInfo, err err
// Handle represents an open file
type Handle struct {
vfs.Handle
w *WebDAV
w *WebDAV
ctx context.Context
}
// Readdir reads directory entries from the handle
@@ -429,6 +483,65 @@ func (h Handle) Stat() (fi os.FileInfo, err error) {
return FileInfo{FileInfo: fi, w: h.w}, nil
}
// DeadProps returns extra properties about the handle
func (h Handle) DeadProps() (map[xml.Name]webdav.Property, error) {
var (
xmlName xml.Name
property webdav.Property
properties = make(map[xml.Name]webdav.Property)
)
if h.w.opt.HashType != hash.None {
entry := h.Handle.Node().DirEntry()
if o, ok := entry.(fs.Object); ok {
hash, err := o.Hash(h.ctx, h.w.opt.HashType)
if err == nil {
xmlName.Space = "http://owncloud.org/ns"
xmlName.Local = "checksums"
property.XMLName = xmlName
property.InnerXML = append(property.InnerXML, "<checksum xmlns=\"http://owncloud.org/ns\">"...)
property.InnerXML = append(property.InnerXML, strings.ToUpper(h.w.opt.HashType.String())...)
property.InnerXML = append(property.InnerXML, ':')
property.InnerXML = append(property.InnerXML, hash...)
property.InnerXML = append(property.InnerXML, "</checksum>"...)
properties[xmlName] = property
} else {
fs.Errorf(nil, "failed to calculate hash: %v", err)
}
}
}
xmlName.Space = "DAV:"
xmlName.Local = "lastmodified"
property.XMLName = xmlName
property.InnerXML = strconv.AppendInt(nil, h.Handle.Node().ModTime().Unix(), 10)
properties[xmlName] = property
return properties, nil
}
// Patch changes the modtime of the underlying resource; it returns OK for all properties, and the error (if any) comes from SetModTime
// FIXME does not check for invalid property and SetModTime error
func (h Handle) Patch(proppatches []webdav.Proppatch) ([]webdav.Propstat, error) {
var (
stat webdav.Propstat
err error
)
stat.Status = http.StatusOK
for _, patch := range proppatches {
for _, prop := range patch.Props {
stat.Props = append(stat.Props, webdav.Property{XMLName: prop.XMLName})
if prop.XMLName.Space == "DAV:" && prop.XMLName.Local == "lastmodified" {
var modtimeUnix int64
modtimeUnix, err = strconv.ParseInt(string(prop.InnerXML), 10, 64)
if err == nil {
err = h.Handle.Node().SetModTime(time.Unix(modtimeUnix, 0))
}
}
}
}
return []webdav.Propstat{stat}, err
}
// FileInfo represents info about a file satisfying os.FileInfo and
// also some additional interfaces for webdav for ETag and ContentType
type FileInfo struct {

View File

@@ -65,7 +65,7 @@ func TestWebDav(t *testing.T) {
// Config for the backend we'll use to connect to the server
config := configmap.Simple{
"type": "webdav",
"vendor": "other",
"vendor": "owncloud",
"url": w.Server.URLs()[0],
"user": testUser,
"pass": obscure.MustObscure(testPass),

View File

@@ -2,6 +2,7 @@
package version
import (
"context"
"errors"
"fmt"
"io"
@@ -13,6 +14,7 @@ import (
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/fshttp"
"github.com/spf13/cobra"
)
@@ -71,9 +73,10 @@ Or
"versionIntroduced": "v1.33",
},
Run: func(command *cobra.Command, args []string) {
ctx := context.Background()
cmd.CheckArgs(0, 0, command, args)
if check {
CheckVersion()
CheckVersion(ctx)
} else {
cmd.ShowVersion()
}
@@ -89,8 +92,8 @@ func stripV(s string) string {
}
// GetVersion gets the version available for download
func GetVersion(url string) (v *semver.Version, vs string, date time.Time, err error) {
resp, err := http.Get(url)
func GetVersion(ctx context.Context, url string) (v *semver.Version, vs string, date time.Time, err error) {
resp, err := fshttp.NewClient(ctx).Get(url)
if err != nil {
return v, vs, date, err
}
@@ -114,7 +117,7 @@ func GetVersion(url string) (v *semver.Version, vs string, date time.Time, err e
}
// CheckVersion checks the installed version against available downloads
func CheckVersion() {
func CheckVersion(ctx context.Context) {
vCurrent, err := semver.NewVersion(stripV(fs.Version))
if err != nil {
fs.Errorf(nil, "Failed to parse version: %v", err)
@@ -122,7 +125,7 @@ func CheckVersion() {
const timeFormat = "2006-01-02"
printVersion := func(what, url string) {
v, vs, t, err := GetVersion(url + "version.txt")
v, vs, t, err := GetVersion(ctx, url+"version.txt")
if err != nil {
fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err)
return

View File

@@ -113,7 +113,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.ir/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
{{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
@@ -146,12 +146,14 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
{{< provider name="Nextcloud" home="https://nextcloud.com/" config="/webdav/#nextcloud" >}}
{{< provider name="OVH" home="https://www.ovh.co.uk/public-cloud/storage/object-storage/" config="/swift/" >}}
{{< provider name="Blomp Cloud Storage" home="https://rclone.org/swift/" config="/swift/" >}}
{{< provider name="OpenDrive" home="https://www.opendrive.com/" config="/opendrive/" >}}
{{< provider name="OpenStack Swift" home="https://docs.openstack.org/swift/latest/" config="/swift/" >}}
{{< provider name="Oracle Cloud Storage Swift" home="https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html" config="/swift/" >}}
{{< provider name="Oracle Object Storage" home="https://www.oracle.com/cloud/storage/object-storage" config="/oracleobjectstorage/" >}}
{{< provider name="ownCloud" home="https://owncloud.org/" config="/webdav/#owncloud" >}}
{{< provider name="pCloud" home="https://www.pcloud.com/" config="/pcloud/" >}}
{{< provider name="Petabox" home="https://petabox.io/" config="/s3/#petabox" >}}
{{< provider name="PikPak" home="https://mypikpak.com/" config="/pikpak/" >}}
{{< provider name="premiumize.me" home="https://premiumize.me/" config="/premiumizeme/" >}}
{{< provider name="put.io" home="https://put.io/" config="/putio/" >}}

View File

@@ -587,7 +587,7 @@ put them back in again.` >}}
* Leroy van Logchem <lr.vanlogchem@gmail.com>
* Zsolt Ero <zsolt.ero@gmail.com>
* Lesmiscore <nao20010128@gmail.com>
* ehsantdy <ehsan.tadayon@arvancloud.com>
* ehsantdy <ehsan.tadayon@arvancloud.com> <ehsantadayon85@gmail.com>
* SwazRGB <65694696+swazrgb@users.noreply.github.com>
* Mateusz Puczyński <mati6095@gmail.com>
* Michael C Tiernan - MIT-Research Computing Project <mtiernan@mit.edu>
@@ -713,3 +713,28 @@ put them back in again.` >}}
* wiserain <mail275@gmail.com>
* Roel Arents <roel.arents@kadaster.nl>
* Shyim <github@shyim.de>
* Rintze Zelle <78232505+rzelle-lallemand@users.noreply.github.com>
* Damo <damoclark@users.noreply.github.com>
* WeidiDeng <weidi_deng@icloud.com>
* Brian Starkey <stark3y@gmail.com>
* jladbrook <jhladbrook@gmail.com>
* Loren Gordon <lorengordon@users.noreply.github.com>
* dlitster <davidlitster@gmail.com>
* Tobias Gion <tobias@gion.io>
* Jānis Bebrītis <janis.bebritis@wunder.io>
* Adam K <github.com@ak.tidy.email>
* Andrei Smirnov <smirnov.captain@gmail.com>
* Janne Hellsten <jjhellst@gmail.com>
* cc <12904584+shvc@users.noreply.github.com>
* Tareq Sharafy <tareq.sha@gmail.com>
* kapitainsky <dariuszb@me.com>
* douchen <playgoobug@gmail.com>
* Sam Lai <70988+slai@users.noreply.github.com>
* URenko <18209292+URenko@users.noreply.github.com>
* Stanislav Gromov <kullfar@gmail.com>
* Paulo Schreiner <paulo.schreiner@delivion.de>
* Mariusz Suchodolski <mariusz@suchodol.ski>
* danielkrajnik <dan94kra@gmail.com>
* Peter Fern <github@0xc0dedbad.com>
* zzq <i@zhangzqs.cn>
* mac-15 <usman.ilamdin@phpstudios.com>

View File

@@ -162,6 +162,12 @@ It reads configuration from these variables, in the following order:
- `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
- `AZURE_USERNAME`: a username (usually an email address)
- `AZURE_PASSWORD`: the user's password
4. Workload Identity
- `AZURE_TENANT_ID`: Tenant to authenticate in.
- `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
- `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
- `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
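Put together, a workload identity invocation might look like this (a sketch;
all values are placeholders, and the token path shown is the typical
projected-volume location on Kubernetes):

    export AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
    export AZURE_CLIENT_ID=11111111-1111-1111-1111-111111111111
    export AZURE_FEDERATED_TOKEN_FILE=/var/run/secrets/azure/tokens/azure-identity-token
    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER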
##### Env Auth: 2. Managed Service Identity Credentials
@@ -189,7 +195,7 @@ Then you could access rclone resources like this:
Or
rclone lsf --azureblob-env-auth --azureblob-acccount=ACCOUNT :azureblob:CONTAINER
rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
Which is analogous to using the `az` tool:
@@ -786,6 +792,24 @@ Properties:
- "container"
- Allow full public read access for container and blob data.
#### --azureblob-directory-markers
Upload an empty object with a trailing slash when a new directory is created.
Empty folders are unsupported for bucket-based remotes; this option
creates an empty object ending with "/" to persist the folder.
This object also has the metadata "hdi_isfolder = true" to conform to
the Microsoft standard.
Properties:
- Config: directory_markers
- Env Var: RCLONE_AZUREBLOB_DIRECTORY_MARKERS
- Type: bool
- Default: false
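For example (a sketch, assuming a remote configured as `azblob:`), the marker objects can be created when making an empty directory:

```
rclone mkdir --azureblob-directory-markers azblob:container/empty-dir
```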
#### --azureblob-no-check-container
If set, don't attempt to check the container exists or create it.

View File

@@ -5,6 +5,177 @@ description: "Rclone Changelog"
# Changelog
## v1.63.0 - 2023-06-30
[See commits](https://github.com/rclone/rclone/compare/v1.62.0...v1.63.0)
* New backends
* [Pikpak](/pikpak/) (wiserain)
* New S3 providers
* [petabox.io](/s3/#petabox) (Andrei Smirnov)
* [Google Cloud Storage](/s3/#google-cloud-storage) (Anthony Pessy)
* New WebDAV providers
* [Fastmail](/webdav/#fastmail-files) (Arnavion)
* Major changes
* Files will be copied to a temporary name ending in `.partial` when copying to `local`, `ftp` or `sftp`, then renamed at the end of the transfer (Janne Hellsten, Nick Craig-Wood)
* This helps with data integrity as we don't delete the existing file until the new one is complete.
* It can be disabled with the [--inplace](/docs/#inplace) flag.
* This behaviour will also happen if the backend is wrapped, for example `sftp` wrapped with `crypt`.
* The [s3](/s3/#s3-directory-markers), [azureblob](/azureblob/#azureblob-directory-markers) and [gcs](/googlecloudstorage/#gcs-directory-markers) backends now support directory markers so empty directories are supported (Jānis Bebrītis, Nick Craig-Wood)
* The [--default-time](/docs/#default-time-time) flag now controls the unknown modification time of files/dirs (Nick Craig-Wood)
* If a file or directory does not have a modification time rclone can read then rclone will display this fixed time instead.
* For the old behaviour use `--default-time 0s` which will set this time to the time rclone started up.
* New Features
* build
* Modernise linters in use and fixup all affected code (albertony)
* Push docker beta to GHCR (GitHub container registry) (Richard Tweed)
* cat: Add `--separator` option to cat command (Loren Gordon)
* config
* Do not remove/overwrite other files during config file save (albertony)
* Do not overwrite config file symbolic link (albertony)
* Stop `config create` making invalid config files (Nick Craig-Wood)
* doc updates (Adam K, Aditya Basu, albertony, asdffdsazqqq, Damo, danielkrajnik, Dimitri Papadopoulos, dlitster, Drew Parsons, jumbi77, kapitainsky, mac-15, Mariusz Suchodolski, Nick Craig-Wood, NickIAm, Rintze Zelle, Stanislav Gromov, Tareq Sharafy, URenko, yuudi, Zach Kipp)
* fs
* Add `size` to JSON logs when moving or copying an object (Nick Craig-Wood)
* Allow boolean features to be enabled with `--disable !Feature` (Nick Craig-Wood)
* genautocomplete: Rename to `completion` with alias to the old name (Nick Craig-Wood)
* librclone: Added example on using `librclone` with Go (alankrit)
* lsjson: Make `--stat` more efficient (Nick Craig-Wood)
* operations
* Implement `--multi-thread-write-buffer-size` for speed improvements on downloads (Paulo Schreiner)
* Reopen downloads on error when using `check --download` and `cat` (Nick Craig-Wood)
* rc: `config/listremotes` includes remotes defined with environment variables (kapitainsky)
* selfupdate: Obey `--no-check-certificate` flag (Nick Craig-Wood)
* serve restic: Trigger systemd notify (Shyim)
* serve webdav: Implement owncloud checksum and modtime extensions (WeidiDeng)
    * sync: `--suffix-keep-extension` preserves two-part extensions like .tar.gz (Nick Craig-Wood)
* Bug Fixes
* accounting
* Fix Prometheus metrics to be the same as `core/stats` (Nick Craig-Wood)
* Bwlimit signal handler should always start (Sam Lai)
* bisync: Fix `maxDelete` parameter being ignored via the rc (Nick Craig-Wood)
* cmd/ncdu: Fix screen corruption when logging (eNV25)
* filter: Fix deadlock with errors on `--files-from` (douchen)
* fs
* Fix interaction between `--progress` and `--interactive` (Nick Craig-Wood)
* Fix infinite recursive call in pacer ModifyCalculator (fixes issue reported by the staticcheck linter) (albertony)
* lib/atexit: Ensure OnError only calls cancel function once (Nick Craig-Wood)
* lib/rest: Fix problems re-using HTTP connections (Nick Craig-Wood)
* rc
* Fix `operations/stat` with trailing `/` (Nick Craig-Wood)
* Fix missing `--rc` flags (Nick Craig-Wood)
* Fix output of Time values in `options/get` (Nick Craig-Wood)
* serve dlna: Fix potential data race (Nick Craig-Wood)
* version: Fix reported os/kernel version for windows (albertony)
* Mount
* Add `--mount-case-insensitive` to force the mount to be case insensitive (Nick Craig-Wood)
* Removed unnecessary byte slice allocation for reads (Anagh Kumar Baranwal)
* Clarify rclone mount error when installed via homebrew (Nick Craig-Wood)
* Added _netdev to the example mount so it gets treated as a remote-fs rather than local-fs (Anagh Kumar Baranwal)
* Mount2
* Updated go-fuse version (Anagh Kumar Baranwal)
* Fixed statfs (Anagh Kumar Baranwal)
* Disable xattrs (Anagh Kumar Baranwal)
* VFS
* Add MkdirAll function to make a directory and all beneath (Nick Craig-Wood)
* Fix reload: failed to add virtual dir entry: file does not exist (Nick Craig-Wood)
* Fix writing to a read only directory creating spurious directory entries (WeidiDeng)
* Fix potential data race (Nick Craig-Wood)
* Fix backends being Shutdown too early when startup takes a long time (Nick Craig-Wood)
* Local
* Fix filtering of symlinks with `-l`/`--links` flag (Nick Craig-Wood)
* Fix /path/to/file.rclonelink when `-l`/`--links` is in use (Nick Craig-Wood)
* Fix crash with `--metadata` on Android (Nick Craig-Wood)
* Cache
* Fix backends shutting down when in use when used via the rc (Nick Craig-Wood)
* Crypt
* Add `--crypt-suffix` option to set a custom suffix for encrypted files (jladbrook)
* Add `--crypt-pass-bad-blocks` to allow corrupted file output (Nick Craig-Wood)
* Fix reading 0 length files (Nick Craig-Wood)
* Try not to return "unexpected EOF" error (Nick Craig-Wood)
* Reduce allocations (albertony)
* Recommend Dropbox for `base32768` encoding (Nick Craig-Wood)
* Azure Blob
* Empty directory markers (Nick Craig-Wood)
* Support azure workload identities (Tareq Sharafy)
* Fix azure blob uploads with multiple bits of metadata (Nick Craig-Wood)
* Fix azurite compatibility by sending nil tier if set to empty string (Roel Arents)
* Combine
* Implement missing methods (Nick Craig-Wood)
* Fix goroutine stack overflow on bad object (Nick Craig-Wood)
* Drive
* Add `--drive-env-auth` to get IAM credentials from runtime (Peter Brunner)
* Update drive service account guide (Juang, Yi-Lin)
* Fix change notify picking up files outside the root (Nick Craig-Wood)
    * Fix trailing slash mis-identification of folder as file (Nick Craig-Wood)
* Fix incorrect remote after Update on object (Nick Craig-Wood)
* Dropbox
* Implement `--dropbox-pacer-min-sleep` flag (Nick Craig-Wood)
* Fix the dropbox batcher stalling (Misty)
* Fichier
    * Add `--fichier-cdn` option to use the CDN for download (Nick Craig-Wood)
* FTP
* Lower log message priority when `SetModTime` is not supported to debug (Tobias Gion)
* Fix "unsupported LIST line" errors on startup (Nick Craig-Wood)
* Fix "501 Not a valid pathname." errors when creating directories (Nick Craig-Wood)
* Google Cloud Storage
* Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
* Added `--gcs-user-project` needed for requester pays (Christopher Merry)
* HTTP
* Add client certificate user auth middleware. This can auth `serve restic` from the username in the client cert. (Peter Fern)
* Jottacloud
* Fix vfs writeback stuck in a failed upload loop with file versioning disabled (albertony)
* Onedrive
* Add `--onedrive-av-override` flag to download files flagged as virus (Nick Craig-Wood)
* Fix quickxorhash on 32 bit architectures (Nick Craig-Wood)
* Report any list errors during `rclone cleanup` (albertony)
* Putio
    * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Fix modification times not being preserved for server side copy and move (Nick Craig-Wood)
* Fix server side copy failures (400 errors) (Nick Craig-Wood)
* S3
* Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
* Update Scaleway storage classes (Brian Starkey)
* Fix `--s3-versions` on individual objects (Nick Craig-Wood)
    * Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
* Fix missing "tier" metadata (Nick Craig-Wood)
* Fix V3sign: add missing subresource delete (cc)
    * Fix ArvanCloud domain and region changes and alphabetise the provider (Ehsan Tadayon)
    * Fix Qiniu KODO quirk: virtualHostStyle is false (zzq)
* SFTP
    * Add `--sftp-host-key-algorithms` to allow specifying SSH host key algorithms (Joel)
* Fix using `--sftp-key-use-agent` and `--sftp-key-file` together needing private key file (Arnav Singh)
* Fix move to allow overwriting existing files (Nick Craig-Wood)
* Don't stat directories before listing them (Nick Craig-Wood)
* Don't check remote points to a file if it ends with / (Nick Craig-Wood)
* Sharefile
* Disable streamed transfers as they no longer work (Nick Craig-Wood)
* Smb
* Code cleanup to avoid overwriting ctx before first use (fixes issue reported by the staticcheck linter) (albertony)
* Storj
* Fix "uplink: too many requests" errors when uploading to the same file (Nick Craig-Wood)
    * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Swift
* Ignore 404 error when deleting an object (Nick Craig-Wood)
* Union
* Implement missing methods (Nick Craig-Wood)
* Allow errors to be unwrapped for inspection (Nick Craig-Wood)
* Uptobox
* Add `--uptobox-private` flag to make all uploaded files private (Nick Craig-Wood)
* Fix improper regex (Aaron Gokaslan)
* Fix Update returning the wrong object (Nick Craig-Wood)
* Fix rmdir declaring that directories weren't empty (Nick Craig-Wood)
* WebDAV
* nextcloud: Add support for chunked uploads (Paul)
* Set modtime using propset for owncloud and nextcloud (WeidiDeng)
* Make pacer minSleep configurable with `--webdav-pacer-min-sleep` (ed)
* Fix server side copy/move not overwriting (WeidiDeng)
* Fix modtime on server side copy for owncloud and nextcloud (Nick Craig-Wood)
* Yandex
* Fix 400 Bad Request on transfer failure (Nick Craig-Wood)
* Zoho
* Fix downloads with `Range:` header returning the wrong data (Nick Craig-Wood)
## v1.62.2 - 2023-03-16
[See commits](https://github.com/rclone/rclone/compare/v1.62.1...v1.62.2)

View File

@@ -218,7 +218,7 @@ guarantee given hash for all files. If wrapped remote doesn't support it,
chunker will then add metadata to all files, even small ones. However, this can
double the number of small files in storage and incur additional service charges.
You can even use chunker to force md5/sha1 support in any other remote
at expense of sidecar meta objects by setting e.g. `chunk_type=sha1all`
at the expense of sidecar meta objects by setting e.g. `hash_type=sha1all`
to force hashsums and `chunk_size=1P` to effectively disable chunking.
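A minimal sketch of such a setup (the remote name `hashsum` and the wrapped `s3:bucket/path` are placeholders), created non-interactively:

```
rclone config create hashsum chunker remote=s3:bucket/path hash_type=sha1all chunk_size=1P
```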
Normally, when a file is copied to a chunker-controlled remote, chunker

View File

@@ -42,7 +42,7 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
@@ -52,11 +52,10 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the files in path.
* [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
* [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
* [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.
* [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
* [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
* [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing.

View File

@@ -32,6 +32,18 @@ the end and `--offset` and `--count` to print a section in the middle.
Note that if offset is negative it will count from the end, so
`--offset -1 --count 1` is equivalent to `--tail 1`.
Use the `--separator` flag to print a separator value between files. Be sure to
shell-escape special characters. For example, to print a newline between
files, use:
* bash:
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
* powershell:
rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
```
rclone cat remote:path [flags]
@@ -40,12 +52,13 @@ rclone cat remote:path [flags]
## Options
```
--count int Only print N characters (default -1)
--discard Discard the output instead of printing
--head int Only print the first N characters
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve)
--tail int Only print the last N characters
--count int Only print N characters (default -1)
--discard Discard the output instead of printing
--head int Only print the first N characters
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve)
--separator string Separator to use between objects when printing multiple files
--tail int Only print the last N characters
```
See the [global flags page](/flags/) for global options not listed here.

View File

@@ -52,8 +52,9 @@ you what happened to it. These are reminiscent of diff files.
- `* path` means path was present in source and destination but different.
- `! path` means there was an error reading or hashing the source or dest.
The default number of parallel checks is N=8. See the [--checkers=N](/docs/#checkers-n) option
for more information.
The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
option for more information.
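For example, to double the parallelism (a sketch; tune the value to your backend's rate limits):

```
rclone check source:path dest:path --checkers 16
```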
```
rclone check source:path dest:path [flags]

View File

@@ -44,6 +44,9 @@ you what happened to it. These are reminiscent of diff files.
- `* path` means path was present in source and destination but different.
- `! path` means there was an error reading or hashing the source or dest.
The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
option for more information.
```
rclone checksum <hash> sumfile src:path [flags]

View File

@@ -1,18 +1,20 @@
---
title: "rclone completion"
description: "Generate the autocompletion script for the specified shell"
description: "Output completion script for a given shell."
slug: rclone_completion
url: /commands/rclone_completion/
versionIntroduced: v1.33
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs"
---
# rclone completion
Generate the autocompletion script for the specified shell
Output completion script for a given shell.
## Synopsis
Generate the autocompletion script for rclone for the specified shell.
See each sub-command's help for details on how to use the generated script.
Generates a shell completion script for rclone.
Run with `--help` to list the supported shells.
## Options
@@ -26,8 +28,7 @@ See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone completion bash](/commands/rclone_completion_bash/) - Generate the autocompletion script for bash
* [rclone completion fish](/commands/rclone_completion_fish/) - Generate the autocompletion script for fish
* [rclone completion powershell](/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell
* [rclone completion zsh](/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh
* [rclone completion bash](/commands/rclone_completion_bash/) - Output bash completion script for rclone.
* [rclone completion fish](/commands/rclone_completion_fish/) - Output fish completion script for rclone.
* [rclone completion zsh](/commands/rclone_completion_zsh/) - Output zsh completion script for rclone.

View File

@@ -1,52 +1,48 @@
---
title: "rclone completion bash"
description: "Generate the autocompletion script for bash"
description: "Output bash completion script for rclone."
slug: rclone_completion_bash
url: /commands/rclone_completion_bash/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs"
---
# rclone completion bash
Generate the autocompletion script for bash
Output bash completion script for rclone.
## Synopsis
Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package manager.
Generates a bash shell autocompletion script for rclone.
To load completions in your current shell session:
This writes to /etc/bash_completion.d/rclone by default so will
probably need to be run with sudo or as root, e.g.
source <(rclone completion bash)
sudo rclone genautocomplete bash
To load completions for every new session, execute once:
Logout and login again to use the autocompletion scripts, or source
them directly
### Linux:
. /etc/bash_completion
rclone completion bash > /etc/bash_completion.d/rclone
If you supply a command line argument the script will be written
there.
### macOS:
rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
You will need to start a new shell for this setup to take effect.
If output_file is "-", then the output will be written to stdout.
```
rclone completion bash
rclone completion bash [output_file] [flags]
```
## Options
```
-h, --help help for bash
--no-descriptions disable completion descriptions
-h, --help help for bash
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.

View File

@@ -1,43 +1,48 @@
---
title: "rclone completion fish"
description: "Generate the autocompletion script for fish"
description: "Output fish completion script for rclone."
slug: rclone_completion_fish
url: /commands/rclone_completion_fish/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs"
---
# rclone completion fish
Generate the autocompletion script for fish
Output fish completion script for rclone.
## Synopsis
Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
Generates a fish autocompletion script for rclone.
rclone completion fish | source
This writes to /etc/fish/completions/rclone.fish by default so will
probably need to be run with sudo or as root, e.g.
To load completions for every new session, execute once:
sudo rclone genautocomplete fish
rclone completion fish > ~/.config/fish/completions/rclone.fish
Logout and login again to use the autocompletion scripts, or source
them directly
You will need to start a new shell for this setup to take effect.
. /etc/fish/completions/rclone.fish
If you supply a command line argument the script will be written
there.
If output_file is "-", then the output will be written to stdout.
```
rclone completion fish [flags]
rclone completion fish [output_file] [flags]
```
## Options
```
-h, --help help for fish
--no-descriptions disable completion descriptions
-h, --help help for fish
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.

View File

@@ -9,7 +9,7 @@ url: /commands/rclone_completion_powershell/
Generate the autocompletion script for powershell
## Synopsis
# Synopsis
Generate the autocompletion script for powershell.
@@ -25,7 +25,7 @@ to your powershell profile.
rclone completion powershell [flags]
```
## Options
# Options
```
-h, --help help for powershell
@@ -34,7 +34,7 @@ rclone completion powershell [flags]
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
# SEE ALSO
* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell

View File

@@ -1,54 +1,48 @@
---
title: "rclone completion zsh"
description: "Generate the autocompletion script for zsh"
description: "Output zsh completion script for rclone."
slug: rclone_completion_zsh
url: /commands/rclone_completion_zsh/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs"
---
# rclone completion zsh
Generate the autocompletion script for zsh
Output zsh completion script for rclone.
## Synopsis
Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
Generates a zsh autocompletion script for rclone.
echo "autoload -U compinit; compinit" >> ~/.zshrc
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
probably need to be run with sudo or as root, e.g.
To load completions in your current shell session:
sudo rclone genautocomplete zsh
source <(rclone completion zsh); compdef _rclone rclone
Logout and login again to use the autocompletion scripts, or source
them directly
To load completions for every new session, execute once:
autoload -U compinit && compinit
### Linux:
If you supply a command line argument the script will be written
there.
rclone completion zsh > "${fpath[1]}/_rclone"
### macOS:
rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
You will need to start a new shell for this setup to take effect.
If output_file is "-", then the output will be written to stdout.
```
rclone completion zsh [flags]
rclone completion zsh [output_file] [flags]
```
## Options
```
-h, --help help for zsh
--no-descriptions disable completion descriptions
-h, --help help for zsh
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.

View File

@@ -13,16 +13,16 @@ Cryptcheck checks the integrity of an encrypted remote.
## Synopsis
rclone cryptcheck checks a remote against an [encrypted](/crypt/) remote.
rclone cryptcheck checks a remote against a [crypted](/crypt/) remote.
This is the equivalent of running rclone [check](/commands/rclone_check/),
but able to check the checksums of the encrypted remote.
For it to work the underlying remote of the encryptedremote must support
For it to work the underlying remote of the cryptedremote must support
some kind of checksum.
It works by reading the nonce from each file on the encryptedremote: and
It works by reading the nonce from each file on the cryptedremote: and
using that to encrypt each file on the remote:. It then checks the
checksum of the underlying file on the ercryptedremote: against the
checksum of the underlying file on the cryptedremote: against the
checksum of the file it has just encrypted.
Use it like this
@@ -57,11 +57,12 @@ you what happened to it. These are reminiscent of diff files.
- `* path` means path was present in source and destination but different.
- `! path` means there was an error reading or hashing the source or dest.
The default number of parallel checks is N=8. See the [--checkers=N](/docs/#checkers-n) option
for more information.
The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
option for more information.
```
rclone cryptcheck remote:path encryptedremote:path [flags]
rclone cryptcheck remote:path cryptedremote:path [flags]
```
## Options

View File

@@ -10,14 +10,14 @@ versionIntroduced: v1.33
Output completion script for a given shell.
## Synopsis
# Synopsis
Generates a shell completion script for rclone.
Run with `--help` to list the supported shells.
## Options
# Options
```
-h, --help help for genautocomplete
@@ -25,7 +25,7 @@ Run with `--help` to list the supported shells.
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
# SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.

View File

@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_bash/
Output bash completion script for rclone.
## Synopsis
# Synopsis
Generates a bash shell autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
rclone genautocomplete bash [output_file] [flags]
```
## Options
# Options
```
-h, --help help for bash
@@ -42,7 +42,7 @@ rclone genautocomplete bash [output_file] [flags]
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
# SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

View File

@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_fish/
Output fish completion script for rclone.
## Synopsis
# Synopsis
Generates a fish autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
rclone genautocomplete fish [output_file] [flags]
```
## Options
# Options
```
-h, --help help for fish
@@ -42,7 +42,7 @@ rclone genautocomplete fish [output_file] [flags]
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
# SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

View File

@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_zsh/
Output zsh completion script for rclone.
## Synopsis
# Synopsis
Generates a zsh autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
rclone genautocomplete zsh [output_file] [flags]
```
## Options
# Options
```
-h, --help help for zsh
@@ -42,7 +42,7 @@ rclone genautocomplete zsh [output_file] [flags]
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
# SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

View File

@@ -1,6 +1,6 @@
---
title: "rclone listremotes"
description: "List all the remotes in the config file."
description: "List all the remotes in the config file and defined in environment variables."
slug: rclone_listremotes
url: /commands/rclone_listremotes/
versionIntroduced: v1.34
@@ -8,7 +8,7 @@ versionIntroduced: v1.34
---
# rclone listremotes
List all the remotes in the config file.
List all the remotes in the config file and defined in environment variables.
## Synopsis

View File

@@ -272,6 +272,17 @@ Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
the website, rclone will locate the macFUSE libraries without any further intervention.
If, however, macFUSE is installed using the [macports](https://www.macports.org/) package manager,
the following additional steps are required.
sudo mkdir /usr/local/lib
cd /usr/local/lib
sudo ln -s /opt/local/lib/libfuse.2.dylib
### FUSE-T Limitations, Caveats, and Notes
There are some limitations, caveats, and notes about how it works. These are current as
@@ -408,20 +419,19 @@ or create systemd mount units:
```
# /etc/systemd/system/mnt-data.mount
[Unit]
After=network-online.target
Description=Mount for /mnt/data
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
```
optionally accompanied by systemd automount unit
```
# /etc/systemd/system/mnt-data.automount
[Unit]
After=network-online.target
Before=remote-fs.target
Description=AutoMount for /mnt/data
[Automount]
Where=/mnt/data
TimeoutIdleSec=600
@@ -534,7 +544,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -557,7 +567,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
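As a hedged illustration (the remote name and mount point below are placeholders), the cache limits discussed above can be combined on a single mount:

```
rclone mount remote: /mnt/data --vfs-cache-mode full \
    --vfs-cache-max-size 10G --vfs-cache-max-age 12h \
    --vfs-cache-poll-interval 1m
```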
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -802,6 +823,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -813,7 +835,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -38,10 +38,11 @@ and actually stream it, even if remote backend doesn't support streaming.
size of the stream is different in length to the `--size` passed in
then the transfer will likely fail.
Note that the upload can also not be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
`rclone move` it to the destination.
Note that the upload cannot be retried because the data is not stored.
If the backend supports multipart uploading then individual chunks can
be retried. If you need to transfer a lot of data, you may be better
off caching it locally and then using `rclone move` to transfer it to
the destination, which can use retries.
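As a hedged example of that staging approach (paths and the remote name are placeholders), compare direct streaming with staging locally first:

```
# direct streaming - cannot be retried if the upload fails part way
tar czf - /path/to/data | rclone rcat remote:backups/backup.tar.gz
# staged alternative - the move can use retries
mkdir -p /tmp/staging
tar czf /tmp/staging/backup.tar.gz /path/to/data
rclone move /tmp/staging remote:backups
```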
```
rclone rcat remote:path [flags]

View File

@@ -25,54 +25,54 @@ See the [rc documentation](/rc/) for more info on the rc flags.
## Server options
Use `--addr` to specify which IP address and port the server should
listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
Use `--rc-addr` to specify which IP address and port the server should
listen on, eg `--rc-addr 1.2.3.4:8000` or `--rc-addr :8080` to listen to all
IPs. By default it only listens on localhost. You can use port
:0 to let the OS choose an available port.
If you set `--addr` to listen on a public or LAN accessible IP address
If you set `--rc-addr` to listen on a public or LAN accessible IP address
then using Authentication is advised - see the next section for info.
You can use a unix socket by setting the url to `unix:///path/to/socket`
or just by using an absolute path name. Note that unix sockets bypass the
authentication - this is expected to be done with file system permissions.
`--addr` may be repeated to listen on multiple IPs/ports/sockets.
`--rc-addr` may be repeated to listen on multiple IPs/ports/sockets.
`--server-read-timeout` and `--server-write-timeout` can be used to
`--rc-server-read-timeout` and `--rc-server-write-timeout` can be used to
control the timeouts on the server. Note that this is the total time
for a transfer.
`--max-header-bytes` controls the maximum number of bytes the server will
`--rc-max-header-bytes` controls the maximum number of bytes the server will
accept in the HTTP header.
`--baseurl` controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used `--baseurl "/rclone"` then
`--rc-baseurl` controls the URL prefix that rclone serves from. By default
rclone will serve from the root. If you used `--rc-baseurl "/rclone"` then
rclone would serve from a URL starting with "/rclone/". This is
useful if you wish to proxy rclone serve. Rclone automatically
inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
inserts leading and trailing "/" on `--rc-baseurl`, so `--rc-baseurl "rclone"`,
`--rc-baseurl "/rclone"` and `--rc-baseurl "/rclone/"` are all treated
identically.
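For instance (a sketch; the port, prefix and credentials are arbitrary), the remote control server could be started with:

```
rclone rcd --rc-addr :8080 --rc-baseurl /rclone --rc-user admin --rc-pass secret
```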
### TLS (SSL)
By default this will serve over http. If you want you can serve over
https. You will need to supply the `--cert` and `--key` flags.
https. You will need to supply the `--rc-cert` and `--rc-key` flags.
If you wish to do client side certificate validation then you will need to
supply `--client-ca` also.
supply `--rc-client-ca` also.
`--cert` should be a either a PEM encoded certificate or a concatenation
of that with the CA certificate. `--key` should be the PEM encoded
private key and `--client-ca` should be the PEM encoded client
`--rc-cert` should be a either a PEM encoded certificate or a concatenation
of that with the CA certificate. `--rc-key` should be the PEM encoded
private key and `--rc-client-ca` should be the PEM encoded client
certificate authority certificate.
--min-tls-version is minimum TLS version that is acceptable. Valid
--rc-min-tls-version is the minimum TLS version that is acceptable. Valid
values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
"tls1.0").
### Template
`--template` allows a user to specify a custom markup template for HTTP
`--rc-template` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to serve pages:
@@ -100,9 +100,13 @@ to be used within the template to server pages:
By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
set a single username and password with the `--rc-user` and `--rc-pass` flags.
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
If no static users are configured by either of the above methods, and client
certificates are required by the `--rc-client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
Use `--rc-htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@@ -114,9 +118,9 @@ To create an htpasswd file:
The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--rc-realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
Use `--rc-salt` to change the password hashing salt from the default.
```

View File

@@ -112,7 +112,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -135,7 +135,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -379,7 +390,7 @@ rclone serve dlna remote:path [flags]
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -128,7 +128,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -151,7 +151,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -398,6 +409,7 @@ rclone serve docker [flags]
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
-h, --help help for docker
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
--mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
--network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
@@ -412,7 +424,7 @@ rclone serve docker [flags]
--socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -109,7 +109,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -132,7 +132,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -460,7 +471,7 @@ rclone serve ftp remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication (default "anonymous")
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -103,6 +103,10 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@@ -195,7 +199,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -218,7 +222,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -554,7 +569,7 @@ rclone serve http remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -148,6 +148,10 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.

View File

@@ -141,7 +141,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -164,7 +164,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -492,7 +503,7 @@ rclone serve sftp remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -132,6 +132,10 @@ By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
If no static users are configured by either of the above methods, and client
certificates are required by the `--client-ca` flag passed to the server, the
client certificate common name will be considered as the username.
Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
in standard apache format and supports MD5, SHA1 and BCrypt for basic
authentication. Bcrypt is recommended.
@@ -224,7 +228,7 @@ find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -247,7 +251,18 @@ flags.
If using `--vfs-cache-max-size` note that the cache may exceed this size
for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
evicted from the cache. When `--vfs-cache-max-size`
is exceeded, rclone will attempt to evict the least accessed files
from the cache first, starting with the files that have not been
accessed for the longest. This cache flushing strategy is efficient,
so the more relevant files are likely to remain cached.
The `--vfs-cache-max-age` flag evicts files from the cache
once the set time since last access has passed. With the default value
of 1 hour, files that have not been accessed for 1 hour are evicted.
Each time a cached file is accessed the timer resets to 0 and the file
gets another hour before eviction. Specify the time with standard
notation: s, m, h, d, w.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -585,7 +600,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
--vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)

View File

@@ -26,7 +26,7 @@ recursion.
Some backends do not always provide file sizes, see for example
[Google Photos](/googlephotos/#size) and
[Google Drive](/drive/#limitations-of-google-docs).
[Google Docs](/drive/#limitations-of-google-docs).
Rclone will then show a notice in the log indicating how many such
files were encountered, and count them in as empty files in the output
of the size command.
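For example, the file count and byte total (including such zero-sized entries) can be emitted in machine-readable form:

```
rclone size remote:path --json
```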

Some files were not shown because too many files have changed in this diff.