mirror of https://github.com/rclone/rclone.git synced 2026-01-21 20:03:22 +00:00

Compare commits


121 Commits

Author SHA1 Message Date
Nick Craig-Wood
b0c54538b0 drive: make backend config -o config add a combined AllDrives remote
This adjusts

    rclone backend drives -o config drive:

so that it also emits a config section called `AllDrives` which uses
the combine backend to make a backend which combines all the shared
drives into one.

It also makes sure that all the shared drive names are valid rclone
config names, deduplicating if necessary.

Fixes #4506
2022-04-21 09:58:59 +01:00
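
For illustration, the emitted config might end up looking roughly like the
following (the shared drive names and IDs here are made up, and the exact
output of the command may differ):

    [Shared Drive 1]
    type = drive
    team_drive = 0ABCDEFxxxxxxxxUk9PVA

    [Shared Drive 2]
    type = drive
    team_drive = 0AGHIJKxxxxxxxxUk9PVA

    [AllDrives]
    type = combine
    upstreams = "Shared Drive 1=Shared Drive 1:" "Shared Drive 2=Shared Drive 2:"
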
Nick Craig-Wood
e62b9d017d Add combine backend to combine multiple remotes in one directory tree - FIXME WIP
Needs
- docs
- integration tests

Fixes #5600
2022-04-21 09:12:50 +01:00
Nick Craig-Wood
632f626f08 fstests: check for wrapped errors in ListR test 2022-04-20 17:56:15 +01:00
Nick Craig-Wood
bab91e4402 putio: ignore URL encoded files as these fail in the integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
fde40319ef koofr: remove digistorage from integration tests as no account 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
94e330d4fa onedrive: remove onedrive China from integration tests as we no longer have an account 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
087543d723 sftp: ignore failing entries in rsync.net integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
6a759d936a storj: fix bucket creation on Move picked up by integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
7c31240bb8 Add Nick Gooding to contributors 2022-04-15 17:57:15 +01:00
Nick Gooding
25146b4306 googlecloudstorage: add --gcs-no-check-bucket to minimise transactions and perms
Adds a configuration option to the GCS backend to allow skipping the
check if a bucket exists before copying an object to it, much like
f406dbb added for S3.
2022-04-14 11:18:36 +01:00
Nick Craig-Wood
240561850b test makefiles: add --chargen flag to make ascii chargen files 2022-04-13 23:07:56 +01:00
Nil Alexandrov
39a1e37441 netstorage: add support contacts to netstorage doc 2022-04-13 23:07:21 +01:00
Nick Craig-Wood
4c02f50ef5 build: update github.com/billziss-gh to github.com/winfsp 2022-04-13 10:18:26 +01:00
Nick Craig-Wood
f583b86334 test makefiles: fix crash if --min-file-size <= --max-file-size 2022-04-12 13:45:20 +01:00
Nick Craig-Wood
118e8e1470 test makefiles: add --sparse, --zero, --pattern and --ascii flags 2022-04-12 13:45:20 +01:00
Nick Craig-Wood
afcea9c72b test makefile: implement new test command to write a single file 2022-04-12 12:57:16 +01:00
Nick Craig-Wood
27176cc6bb config: use os.UserCacheDir from go stdlib to find cache dir #6095
When this code was originally implemented os.UserCacheDir wasn't
public so this used a copy of the code. This commit replaces that now
out of date copy with a call to the now public stdlib function.
2022-04-11 11:44:15 +01:00
Nick Craig-Wood
f1e4b7da7b Add Adrien Rey-Jarthon to contributors 2022-04-11 11:44:15 +01:00
albertony
f065a267f6 docs: fix some links to command pages 2022-04-07 15:50:41 +02:00
Adrien Rey-Jarthon
17f8014909 docs: Note that Scaleway C14 is deprecating SFTP in favor of S3
This updates the documentation to reflect that the new C14 Cold Storage API
works with S3 and no longer with SFTP.

See: https://github.com/rclone/rclone/issues/1080#issuecomment-1082088870
2022-04-05 11:11:52 +01:00
Nick Craig-Wood
8ba04562c3 build: update android go build to 1.18.x and NDK to 23.1.7779620 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
285747b1d1 build: update to go1.18 and make go1.16 the minimum required version 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
7bb8b8f4ba cache: fix bug after golang.org/x/time/rate update
Before this change the cache backend was passing -1 into
rate.NewLimiter to mean unlimited transactions per second.

In a recent update this immediately returns a rate limit error as
might be expected.

This patch uses rate.Inf as indicated by the docs to signal no limits
are required.
2022-04-04 20:35:17 +01:00
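
A minimal sketch of the idea (not the backend's actual code): with
golang.org/x/time/rate, "no limit" is expressed with rate.Inf rather than a
negative limit.

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    // newLimiter treats rps <= 0 as "unlimited" by using rate.Inf,
    // mirroring the fix described above.
    func newLimiter(rps int) *rate.Limiter {
        limit := rate.Inf
        if rps > 0 {
            limit = rate.Limit(rps)
        }
        return rate.NewLimiter(limit, 1)
    }

    func main() {
        l := newLimiter(0)
        _ = l.Wait(context.Background()) // returns immediately when the limit is rate.Inf
        fmt.Println(l.Limit())
    }
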
Nick Craig-Wood
59c242bbf6 build: update dependencies
Also:

- dropbox: fix compile after API change in upstream library
2022-04-04 20:35:17 +01:00
Nick Craig-Wood
a2bacd7d3f Add rafma0 to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
9babcc4811 Add GH to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
a0f665ec3c Add KARBOWSKI Piotr to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
ecdf42c17f Add Tobias Klauser to contributors 2022-04-04 20:35:17 +01:00
rafma0
be9ee1d138 putio: fix multithread download and other ranged requests
Before this change the 206 responses from putio Range requests were being
returned as errors.

This change checks for 200 and 206 in the GET response now.
2022-04-04 11:15:55 +01:00
GH
9e9ead2ac4 onedrive: note that sharepoint also changes web files (.html, .aspx) 2022-04-03 12:43:23 +01:00
KARBOWSKI Piotr
4f78226f8b sftp: Fix OpenSSH 8.8+ RSA keys incompatibility (#6076)
Updates golang.org/x/crypto to v0.0.0-20220331220935-ae2d96664a29.

Fixes the issues with connecting to OpenSSH 8.8+ remotes when the
client uses an RSA key pair, due to OpenSSH dropping support for the
SHA-1 based ssh-rsa signature.

Bug: https://github.com/rclone/rclone/issues/6076
Bug: https://github.com/golang/go/issues/37278
Signed-off-by: KARBOWSKI Piotr <piotr.karbowski@gmail.com>
2022-04-01 12:49:39 +01:00
Tobias Klauser
54c9c3156c fs/config, lib/terminal: use golang.org/x/term
golang.org/x/crypto/ssh/terminal is deprecated in favor of
golang.org/x/term, see https://pkg.go.dev/golang.org/x/crypto/ssh/terminal

The latter also supports ReadPassword on solaris, so enable the
respective functionality in fs/config for solaris as well.
2022-04-01 12:48:18 +01:00
Nick Craig-Wood
6ecbbf796e netstorage: make levels of headings consistent 2022-03-31 18:11:37 +01:00
Nick Craig-Wood
603e51c43f s3: sync providers in config description with providers 2022-03-31 17:55:54 +01:00
Nick Craig-Wood
ca4671126e Add Berkan Teber to contributors 2022-03-31 17:55:54 +01:00
Berkan Teber
6ea26b508a putio: handle rate limit errors
For rate limit errors, "x-ratelimit-reset" header is now respected.
2022-03-30 12:25:53 +01:00
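
A hedged sketch of the general technique (not put.io's actual client code),
assuming the header carries a Unix timestamp as many rate limited APIs do:

    package main

    import (
        "net/http"
        "strconv"
        "time"
    )

    // waitForRateLimitReset sleeps until the time given in the
    // "x-ratelimit-reset" header, if it is present and in the future.
    func waitForRateLimitReset(resp *http.Response) {
        reset := resp.Header.Get("x-ratelimit-reset")
        if reset == "" {
            return
        }
        secs, err := strconv.ParseInt(reset, 10, 64)
        if err != nil {
            return
        }
        if d := time.Until(time.Unix(secs, 0)); d > 0 {
            time.Sleep(d)
        }
    }

    func main() {
        resp := &http.Response{Header: http.Header{}}
        resp.Header.Set("x-ratelimit-reset", strconv.FormatInt(time.Now().Unix(), 10))
        waitForRateLimitReset(resp) // returns almost immediately in this demo
    }
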
Nick Craig-Wood
887cccb2c1 filter: fix timezone of --min-age/--max-age from UTC to local as documented
Before this change if the timezone was omitted in a
--min-age/--max-age time specifier then rclone defaulted to a UTC
timezone.

This is documented as using the local timezone if the time zone
specifier is omitted which is a much more useful default and this
patch corrects the implementation to agree with the documentation.

See: https://forum.rclone.org/t/problem-utc-windows-europe-1-summer-problem/29917
2022-03-28 11:47:27 +01:00
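
Usage note (the remote name is hypothetical): a time given without an
explicit zone, in the date or date-time forms rclone already accepts, is now
interpreted in the local timezone, e.g.

    rclone lsf remote:logs --max-age "2022-03-01 00:00:00"
    rclone delete remote:logs --min-age 2022-01-01 --dry-run
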
Nick Craig-Wood
d975196cfa dropbox: fix retries of multipart uploads with incorrect_offset error
Before this fix, rclone retried chunks of multipart uploads. However,
if they had been partially received, dropbox would reply with an
incorrect_offset error which rclone was ignoring.

This patch parses the new offset from the error response and uses it
to adjust the data that rclone sends so it is the same as what dropbox
is expecting.

See: https://forum.rclone.org/t/dropbox-rate-limiting-for-upload/29779
2022-03-25 15:39:01 +00:00
Nick Craig-Wood
1f39b28f49 googlecloudstorage: use the s3 pacer to speed up transactions
This commit switches Google Cloud Storage from the drive pacer to the
s3 pacer. The main difference between them is that the s3 pacer does
not limit transactions in the non-error case. This is appropriate for
a cloud storage backend where you pay for each transaction.
2022-03-25 15:28:59 +00:00
Nick Craig-Wood
2738db22fb pacer: default the Google pacer to a burst of 100 to fix gcs pacing
Before this change the pacer defaulted to a burst of 1 which meant that
it kept being activated unnecessarily.

This affected Google Cloud Storage and Google Photos.

See: https://forum.rclone.org/t/no-traverse-too-slow-with-lot-of-files/29886/12
2022-03-25 15:28:59 +00:00
Nick Craig-Wood
1978ddde73 Add GuoXingbin to contributors 2022-03-25 15:28:59 +00:00
GuoXingbin
c2bfda22ab s3: Add ChinaMobile EOS to provider list
China Mobile Ecloud Elastic Object Storage (EOS) is a cloud object storage service, and is fully compatible with S3.

Fixes #6054
2022-03-24 11:57:00 +00:00
Nick Craig-Wood
d4da9b98d6 vfs: add --vfs-fast-fingerprint for less accurate but faster fingerprints 2022-03-22 16:33:24 +00:00
Nick Craig-Wood
e4f5912294 azureblob: fix lint error with golangci-lint 1.45.0 2022-03-22 16:33:24 +00:00
Nick Craig-Wood
750fffdf71 netstorage: fix unescaped HTML in documentation 2022-03-18 14:40:12 +00:00
Nick Craig-Wood
388e74af52 Start v1.59.0-DEV development 2022-03-18 14:04:22 +00:00
Nick Craig-Wood
f9354fff2f Version v1.58.0 2022-03-18 12:29:54 +00:00
Nick Craig-Wood
ff1f173fc2 build: add bisync.md to docs builder and fix missing tardigrade.md stub 2022-03-18 11:22:23 +00:00
Nick Craig-Wood
f8073a7b63 build: ensure the Go version used for the build is always up to date #6020 2022-03-17 17:14:50 +00:00
Nick Craig-Wood
807f1cedaa hasher: fix crash on object not found
Before this fix `NewObject` could return a wrapped `fs.Object(nil)`
which caused a crash. This was caused by `wrapObject` returning a
`nil` `*Object` which was cast into an `fs.Object`.

This changes the interface of `wrapObject` so it returns an
`fs.Object` instead of a `*Object` and an error which must be checked.
This forces the callers to return a `nil` object rather than an
`fs.Object(nil)`.

See: https://forum.rclone.org/t/panic-in-hasher-when-mounting-with-vfs-cache-and-not-synced-data-in-the-cache/29697/11
2022-03-16 11:30:26 +00:00
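
The underlying Go pitfall, shown with standalone hypothetical types rather
than the hasher's real ones: an interface holding a typed nil pointer is
itself non-nil, so nil checks on the interface pass and a later method call
can panic.

    package main

    import "fmt"

    type Object struct{ name string }

    // wrapObject mimics the buggy pattern: it returns a typed nil pointer.
    func wrapObject() *Object { return nil }

    func (o *Object) String() string { return o.name } // panics if o is nil

    func main() {
        var obj fmt.Stringer = wrapObject()
        fmt.Println(obj == nil) // false: the interface holds (*Object)(nil), not nil
        // Calling obj.String() here would panic. Returning a plain nil
        // interface (plus an error to check) avoids the trap.
    }
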
Nick Craig-Wood
bf9c68c88a storj: implement server side Move 2022-03-14 15:44:56 +00:00
Nick Craig-Wood
189cba0fbe s3: add other regions for Lyve and correct Provider name 2022-03-14 15:43:35 +00:00
Nick Craig-Wood
69f726f16c Add Nil Alexandrov to contributors 2022-03-14 15:43:35 +00:00
Nil Alexandrov
65652f7a75 Add Akamai Netstorage as a new backend. 2022-03-09 12:42:22 +00:00
Nil Alexandrov
47f9ab2f56 lib/rest: add support for setting trailers 2022-03-09 12:42:22 +00:00
Nick Craig-Wood
5dd51e6149 union: fix deadlock when one part of a multi-upload fails
Before this fix, rclone would deadlock when uploading two files at
once, if one errored. This caused the other file to block in the multi
reader and never complete.

This fix drains the input buffer on error which allows the other
upload to complete.

See: https://forum.rclone.org/t/union-with-create-policy-all-copy-stuck-when-first-union-fails/29601
2022-03-09 11:30:55 +00:00
Nick Craig-Wood
6a6d254a9f s3: add support for Seagate Lyve Cloud storage 2022-03-09 11:30:55 +00:00
jaKa
fd453f2c7b koofr: renamed digistorage to exclude the romania part. 2022-03-08 22:39:23 +00:00
jaKa
5d06a82c5d koofr: add digistorage service as a koofr provider. 2022-03-08 10:36:18 +00:00
Nick Craig-Wood
847868b4ba ftp: hard fork github.com/jlaffaye/ftp to fix go get
Having a replace directive in go.mod causes "go get
github.com/rclone/rclone" to fail as is discussed in this Go issue:
https://github.com/golang/go/issues/44840

This is apparently how the Go team want go.mod to work, so this commit
hard forks github.com/jlaffaye/ftp into github.com/rclone/ftp so we
can remove the `replace` directive from the go.mod file.

Fixes #5810
2022-03-07 09:55:49 +00:00
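
For context, this is the kind of go.mod directive that had to go (the
upstream module path is real; the replacement target and version here are
placeholders):

    // go.mod, before the hard fork
    replace github.com/jlaffaye/ftp => example.com/someone/ftp v1.2.3
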
Ivan Andreev
38ca178cf3 mailru: fix int32 overflow on arm32 - fixes #6003 2022-03-06 13:33:57 +00:00
Nick Craig-Wood
9427d22f99 Add ctrl-q to contributors 2022-03-06 13:33:26 +00:00
ctrl-q
7b1428a498 onedrive: Do not retry on 400 pathIsTooLong 2022-03-06 13:05:05 +00:00
Nick Craig-Wood
ec72432cec vfs: fix failed to _ensure cache internal error: downloaders is nil error
This error was caused by renaming an open file.

When the file was renamed in the cache, the downloaders were cleared.
However, the downloaders were not re-opened when needed again;
instead this error was generated.

This fix re-opens the downloaders if they have been closed by renaming
the file.

Fixes #5984
2022-03-03 17:43:29 +00:00
Nick Craig-Wood
2339172df2 pcloud: fix pre-1970 time stamps - fixes #5917
Before this change rclone sent pre-1970 timestamps as negative
numbers. pCloud ignores these and sets them to today's date.

This change sends the timestamps as unsigned 64 bit integers (which is
how the binary protocol sends them) and pCloud accepts the (actually
negative) timestamp like this.
2022-03-03 17:18:40 +00:00
Nick Craig-Wood
268b808bf8 filter: add {{ regexp }} syntax to pattern matches - fixes #4074
There has been a desire from more advanced rclone users to have regexp
filtering as well as the glob filtering.

This patch adds regexp filtering using the syntax `{{ regexp }}`,
which was previously a syntax error, so the change is backwards compatible.

This means regexps can be used everywhere globs can be used, and that
they also can be mixed with globs in the same pattern, eg `*.{{jpe?g}}`
2022-03-03 17:16:28 +00:00
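
Illustrative filter usage (the remote name is hypothetical; the second
pattern is the mixed glob/regexp example from the commit message):

    rclone lsf remote: --include '{{^\d{4}-\d{2}-\d{2}\.log$}}'
    rclone copy remote:photos /tmp/photos --include '*.{{jpe?g}}'
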
Nick Craig-Wood
74898bac3b build: add windows/arm64 build - NB this does not support mount yet #5828 2022-03-03 17:13:32 +00:00
Nick Craig-Wood
e0fbca02d4 compress: fix memory leak - fixes #6013
Before this change we forgot to close the compressor when checking to
see if an object was compressible.
2022-03-03 17:10:21 +00:00
Nick Craig-Wood
21355b4208 sync: Fix --max-duration so it doesn't retry when the duration is exceeded
Before this change, if the --max-duration limit was reached then
rclone would retry the sync as a fatal error wasn't raised.

This checks the deadline and raises a fatal error if necessary at the
end of the sync.

Fixes #6002
2022-03-03 17:08:16 +00:00
Nick Craig-Wood
251b84ff2c sftp: fix unnecessary seeking when uploading and downloading files
This stops the SFTP library issuing out of order writes which fixes
the problems uploading to `serve sftp` from the `sftp` backend.

This was fixed upstream in this pull request: https://github.com/pkg/sftp/pull/482

Fixes #5806
2022-03-03 17:02:35 +00:00
Nick Craig-Wood
537b62917f s3: add --s3-use-multipart-etag provider quirk #5993
Before this change the new multipart upload ETag checking code was
failing in the integration tests with Alibaba OSS.

Apparently Alibaba calculate the ETag in a different way to AWS.

This introduces a new provider quirk with a flag to disable the
checking of the ETag for multipart uploads.

Multipart ETag checking has been enabled for all providers that we can
test and that work, and left disabled for the others.
2022-03-01 16:36:39 +00:00
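
For background, AWS computes a multipart ETag as the MD5 of the concatenated
per-part MD5 digests followed by "-<part count>"; providers that deviate
from this are the reason the quirk flag exists. A small sketch of that
calculation (not rclone's actual checking code):

    package main

    import (
        "crypto/md5"
        "fmt"
    )

    // awsMultipartETag returns the AWS style multipart ETag for the given parts.
    func awsMultipartETag(parts [][]byte) string {
        var all []byte
        for _, p := range parts {
            sum := md5.Sum(p)
            all = append(all, sum[:]...)
        }
        final := md5.Sum(all)
        return fmt.Sprintf("%x-%d", final, len(parts))
    }

    func main() {
        parts := [][]byte{[]byte("part one"), []byte("part two")}
        fmt.Println(awsMultipartETag(parts))
    }
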
Nick Craig-Wood
71a784cfa2 compress: fix crash if metadata upload failed - fixes #5994
Before this change the backend attempted to delete a nil object if
the metadata upload failed.
2022-02-28 19:47:52 +00:00
Nick Craig-Wood
8ee0fe9863 serve docker: disable linux tests in CI as they are locking up regularly 2022-02-28 18:01:47 +00:00
Nick Craig-Wood
8f164e4df5 s3: Use the ETag on multipart transfers to verify the transfer was OK
Before this, rclone ignored the ETag on multipart uploads, which missed
an opportunity for a whole-file integrity check.

This adds that check which means that we now check even harder that
multipart uploads have arrived properly.

See #5993
2022-02-25 16:19:03 +00:00
Nick Craig-Wood
06ecc6511b drive: when using a link type --drive-export-formats show all doc types
Before this change we always hid unexportable document types (eg
Google maps).

After this change, if using --drive-export-formats
url/desktop/link.html/webloc we will show links for all documents,
whether or not they are exportable, as the links to them work either way.

See: https://forum.rclone.org/t/rclone-mount-for-google-drive-does-not-show-as-web-links-the-google-documents-of-the-google-my-map-gmap-type/29415
2022-02-25 16:08:11 +00:00
Nick Craig-Wood
3529bdec9b sftp: update docs on how to create known_hosts file
This also removes the note on the limitation that only one entry per
host is allowed in the file as it works with many entries provided
they have different key types.

See: https://forum.rclone.org/t/rclone-fails-ssh-handshakes-with-rsync-nets-sftp-when-a-known-hosts-file-is-specified/29206/
2022-02-25 16:08:11 +00:00
partev
486b43f8c7 doc: fix a typo
"and this it may require you to unblock it temporarily" -> "and it may require you to unblock it temporarily"
2022-02-22 21:05:05 +00:00
Nick Craig-Wood
89f0e4df80 swift: fix about so it shows info about the current container only
Before this change `rclone about swift:container` would show aggregate
info about all the containers, not just the one in use.

This causes a problem if container listing is disabled (for example in
the Blomp service).

This fix makes `rclone about swift:container` show only the info about
the given `container`. If aggregate info about all the containers is
required then use `rclone about swift:`.

See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/18
2022-02-22 12:55:57 +00:00
Nick Craig-Wood
399fb5b7fb Add Vincent Murphy to contributors 2022-02-22 12:55:57 +00:00
Vincent Murphy
19f1ed949c docs: Fix broken test_proxy.py link 2022-02-22 12:26:17 +00:00
Nick Craig-Wood
d3a1001094 drive: add --drive-skip-dangling-shortcuts flag - fixes #5949
This flag enables dangling shortcuts to be skipped without an error.
2022-02-22 12:22:21 +00:00
Nick Craig-Wood
dc7e3ea1e3 drive,gcs,googlephotos: disable OAuth OOB flow (copy a token) due to google deprecation
Before this change, rclone supported authorizing for remote systems by
going to a URL and cutting and pasting a token from Google. This is
known as the OAuth out-of-band (oob) flow.

This, while very convenient for users, has been shown to be insecure
and has been deprecated by Google.

https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob

> OAuth out-of-band (OOB) is a legacy flow developed to support native
> clients which do not have a redirect URI like web apps to accept the
> credentials after a user approves an OAuth consent request. The OOB
> flow poses a remote phishing risk and clients must migrate to an
> alternative method to protect against this vulnerability. New
> clients will be unable to use this flow starting on Feb 28, 2022.

This change disables that flow, and forces the user to use the
redirect URL flow. (This is the flow used already for local configs.)

In practice this will mean that instead of cutting and pasting a token
for remote config, it will be necessary to run "rclone authorize"
instead. This is how all the other OAuth backends work so it is a well
tested code path.

Fixes #6000
2022-02-18 12:46:30 +00:00
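
In practice the remote-config workflow becomes (the remote type "drive" here
is just an example):

    # on any machine with a web browser
    rclone authorize "drive"
    # then paste the token it prints back into the prompt that
    # "rclone config" shows on the headless machine
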
Nick Craig-Wood
f22b703a51 storj: rename tardigrade backend to storj backend #5616
This adds an alias for backwards compatibility and leaves a stub
documentation page to redirect people to the new documentation.
2022-02-11 11:04:15 +00:00
Nick Craig-Wood
c40129d610 fs: allow backends to have aliases #5616
This allows a backend to have multiple aliases. These aliases are
hidden from `rclone config` and the command line flags are hidden from
the user. However the flags, environment variables and config for the
alias will work just fine.
2022-02-11 11:04:15 +00:00
Nick Craig-Wood
8dc93f1792 Add Márton Elek to contributors 2022-02-11 11:04:03 +00:00
Nick Craig-Wood
f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display
Before this change, the device name was always the remote:path rclone
was configured with. However this can contain sensitive information
and it appears in the `mount` output, so `--devname` allows the user
to configure it.

See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/11
2022-02-09 11:56:43 +00:00
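
Example usage (the remote and mount point are hypothetical):

    rclone mount secret-remote:projects /mnt/projects --devname projects
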
Nick Craig-Wood
9cc50a614b s3: add note about Storj provider bug and workaround
See: https://github.com/storj/gateway-mt/issues/39
2022-02-08 11:40:29 +00:00
Elek, Márton
bcb07a67f6 tardigrade: update docs to explain differences between s3 and this backend
Co-authored-by: Caleb Case <calebcase@gmail.com>
2022-02-08 11:40:29 +00:00
Márton Elek
25ea04f1db s3: add specific provider for Storj Shared gateways
- unsupported features (Copy) are turned off for Storj
- enable urlEncodedListing for Storj provider
- set chunksize to 64Mb
2022-02-08 11:40:29 +00:00
Nick Craig-Wood
06ffd4882d onedrive: add --onedrive-root-folder-id flag #5948
This allows navigating to difficult-to-find folders in onedrive.
2022-02-07 12:29:36 +00:00
Nick Craig-Wood
19a5e1d63b docs: document --disable-http2 #5253 2022-02-07 12:29:36 +00:00
Nick Craig-Wood
ec88b66dad Add Abhiraj to contributors 2022-02-07 12:29:36 +00:00
Abhiraj
aa2d7f00c2 drive: added --drive-copy-shortcut-content - fixes #4604 2022-02-04 11:37:58 +00:00
Nick Craig-Wood
3e125443aa build: fix ARM architecture version in .deb packages after nfpm change
Fixes #5973
2022-02-03 11:24:06 +00:00
Nick Craig-Wood
3c271b8b1e Add Eng Zer Jun to contributors 2022-02-03 11:24:06 +00:00
Nick Craig-Wood
6d92ba2c6c Add viveknathani to contributors 2022-02-03 11:24:06 +00:00
albertony
c26dc69e1b docs/jottacloud: add note that mime types are not available with --fast-list 2022-02-02 13:12:50 +01:00
albertony
b0de0b4609 docs: include all commands in online help top menu drop-down 2022-02-01 20:40:50 +01:00
albertony
f54641511a librclone: add support for mount commands
Fixes #5661
2022-02-01 19:29:36 +01:00
Eng Zer Jun
8cf76f5e11 test: use T.TempDir to create temporary test directory
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.

Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-02-01 11:47:04 +00:00
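
A minimal standalone example of the pattern (not taken from the rclone test
suite):

    package example

    import (
        "os"
        "path/filepath"
        "testing"
    )

    func TestWriteFile(t *testing.T) {
        dir := t.TempDir() // removed automatically when the test and its subtests finish
        path := filepath.Join(dir, "hello.txt")
        if err := os.WriteFile(path, []byte("hello"), 0o600); err != nil {
            t.Fatal(err)
        }
    }
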
viveknathani
18c24014da docs/content: describe mandatory fields for drive
Making a client-id for Google Drive requires you to add two more fields
besides the already documented "Application name" field. This commit
documents what should be written for those two fields.

Fixes #5967
2022-02-01 11:42:12 +00:00
Nick Craig-Wood
0ae39bda8d docs: fix and reword --update docs
After discussion on the forum with @bandwidth, this rewords the
--update docs to be correct and easier to understand.

See: https://forum.rclone.org/t/help-understanding-update/28937
2022-02-01 11:07:51 +00:00
Nick Craig-Wood
051685baa1 s3: fix multipart upload with --no-head flag - Fixes #5956
Before this change a multipart upload with the --no-head flag returned
the MD5SUM as a base64 string rather than a hex string as the rest of
rclone was expecting.
2022-01-29 12:48:51 +00:00
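
To illustrate the difference, here is the same 16-byte MD5 digest (of
"hello world") in both encodings; rclone's hashing code expects the hex
form:

    package main

    import (
        "encoding/base64"
        "encoding/hex"
        "fmt"
    )

    func main() {
        b64 := "XrY7u+Ae7tCTyyK7j1rNww==" // base64 of the raw digest, as Content-MD5 style headers carry it
        raw, err := base64.StdEncoding.DecodeString(b64)
        if err != nil {
            panic(err)
        }
        fmt.Println(hex.EncodeToString(raw)) // 5eb63bbbe01eeed093cb22bb8f5acdc3
    }
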
albertony
07f53aebdc touch: fix issue where directory is created instead of file
Detected on ftp, sftp and Dropbox backends.

Fixes #5952
2022-01-28 20:29:12 +01:00
albertony
bd6d36b3f6 docs: improve standard list of properties for options 2022-01-28 19:43:51 +01:00
Nick Craig-Wood
b168479429 gcs: add missing regions - fixes #5955 2022-01-28 12:34:13 +00:00
Nick Craig-Wood
b447b0cd78 build: upgrade actions runner macos-11 to fix macOS build problems #5951 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
4bd2386632 build: don't specify macos SDK any more as default is good enough #5951
This fixes the build, in particular the error:

    Failed to run ["xcrun" "--sdk" "macosx11.1" "--show-sdk-path"]: exit status 1
2022-01-27 17:33:04 +00:00
Nick Craig-Wood
83b6b62c1b build: disable cmount tests under macOS and the CI since they are locking up
This fixes #5951 and allows the macOS builds to run again

See #5960 for more info.
2022-01-27 17:33:04 +00:00
Nick Craig-Wood
5826cc9d9e Add Paulo Martins to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
252432ae54 Add Gourav T to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
8821629333 Add Isaac Levy to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
a2092a8faf Add Vanessasaurus to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
2b6f4241b4 Add Alain Nussbaumer to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
e3dd16d490 Add Charlie Jiang to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
9e1fd923f6 Add Yunhai Luo to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
3684789858 Add Koopa to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
1ac1dd428a Add Niels van de Weem to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
65dbd29c22 Add Kim to contributors 2022-01-27 17:33:04 +00:00
albertony
164774d7e1 Add Shmz Ozggrn to contributors 2022-01-27 09:43:42 +01:00
Shmz Ozggrn
507020f408 docs: Use Adaptive Logo in README 2022-01-27 09:35:36 +01:00
179 changed files with 35609 additions and 14655 deletions

View File

@@ -25,12 +25,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.15', 'go1.16']
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.16', 'go1.17']
include:
- job_name: linux
os: ubuntu-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -40,8 +40,8 @@ jobs:
deploy: true
- job_name: mac_amd64
os: macOS-latest
go: '1.17.x'
os: macos-11
go: '1.18.x'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -49,15 +49,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macOS-latest
go: '1.17.x'
os: macos-11
go: '1.18.x'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -macos-sdk macosx11.1 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows_amd64
os: windows-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
build_args: '-buildmode exe'
@@ -67,7 +67,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
goarch: '386'
cgo: '1'
@@ -78,23 +78,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.17.x'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
go: '1.18.x'
build_flags: '-exclude "^(windows/(386|amd64)|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.15
os: ubuntu-latest
go: '1.15.x'
quicktest: true
racequicktest: true
- job_name: go1.16
os: ubuntu-latest
go: '1.16.x'
quicktest: true
racequicktest: true
- job_name: go1.17
os: ubuntu-latest
go: '1.17.x'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -110,6 +110,7 @@ jobs:
with:
stable: 'false'
go-version: ${{ matrix.go }}
check-latest: true
- name: Set environment variables
shell: bash
@@ -134,7 +135,7 @@ jobs:
run: |
brew update
brew install --cask macfuse
if: matrix.os == 'macOS-latest'
if: matrix.os == 'macos-11'
- name: Install Libraries on Windows
shell: powershell
@@ -245,14 +246,14 @@ jobs:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go 1.16
- name: Set up Go
uses: actions/setup-go@v1
with:
go-version: 1.16
go-version: 1.18.x
# Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
- name: Force NDK version
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;22.1.7171670" | grep -v = || true
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;23.1.7779620" | grep -v = || true
- name: Go module cache
uses: actions/cache@v2
@@ -273,8 +274,8 @@ jobs:
- name: install gomobile
run: |
go get golang.org/x/mobile/cmd/gobind
go get golang.org/x/mobile/cmd/gomobile
go install golang.org/x/mobile/cmd/gobind@latest
go install golang.org/x/mobile/cmd/gomobile@latest
env PATH=$PATH:~/go/bin gomobile init
- name: arm-v7a gomobile build
@@ -283,7 +284,7 @@ jobs:
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
@@ -296,7 +297,7 @@ jobs:
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
@@ -309,7 +310,7 @@ jobs:
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
@@ -322,7 +323,7 @@ jobs:
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV

View File

@@ -15,7 +15,7 @@ Current active maintainers of rclone are:
| Ivan Andreev | @ivandeex | chunker & mailru backends |
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | tardigrade backend |
| Caleb Case | @calebcase | storj backend |
**This is a work in progress Draft**

5267 MANUAL.html generated

File diff suppressed because it is too large

5734 MANUAL.md generated

File diff suppressed because it is too large

7550 MANUAL.txt generated

File diff suppressed because it is too large

View File

@@ -1,4 +1,5 @@
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
@@ -20,14 +21,17 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
@@ -65,8 +69,8 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)

View File

@@ -1 +1 @@
v1.58.0
v1.59.0

View File

@@ -9,6 +9,7 @@ import (
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive"
@@ -28,6 +29,7 @@ import (
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/memory"
_ "github.com/rclone/rclone/backend/netstorage"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
@@ -39,9 +41,9 @@ import (
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sia"
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

View File

@@ -612,7 +612,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
serviceURL = azblob.NewServiceURL(*u, pipeline)
case opt.UseMSI:
var token adal.Token
var userMSI *userMSI = &userMSI{}
var userMSI = &userMSI{}
if len(opt.MSIClientID) > 0 || len(opt.MSIObjectID) > 0 || len(opt.MSIResourceID) > 0 {
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)

View File

@@ -394,7 +394,11 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
notifiedRemotes: make(map[string]bool),
}
cache.PinUntilFinalized(f.Fs, f)
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)
rps := rate.Inf
if opt.Rps > 0 {
rps = rate.Limit(float64(opt.Rps))
}
f.rateLimiter = rate.NewLimiter(rps, opt.TotalWorkers)
f.plexConnector = &plexConnector{}
if opt.PlexURL != "" {

877 backend/combine/combine.go Normal file
View File

@@ -0,0 +1,877 @@
// Package combine implements a backend to combine multiple remotes in a directory tree
package combine
/*
Have API to add/remove branches in the combine
*/
import (
"context"
"errors"
"fmt"
"io"
"path"
"strings"
"sync"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"golang.org/x/sync/errgroup"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "combine",
Description: "Combine several remotes into one",
NewFs: NewFs,
Options: []fs.Option{{
Name: "upstreams",
Help: `Upstreams for combining
These should be in the form
dir=remote:path dir2=remote2:path
Where before the = is specified the root directory and after is the remote to
put there.
Embedded spaces can be added using quotes
"dir=remote:path with space" "dir2=remote2:path with space"
`,
Required: true,
Default: fs.SpaceSepList(nil),
}},
}
fs.Register(fsi)
}
// Options defines the configuration for this backend
type Options struct {
Upstreams fs.SpaceSepList `config:"upstreams"`
}
// Fs represents a combine of upstreams
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
hashSet hash.Set // common hashes
when time.Time // directory times
upstreams map[string]*upstream // map of upstreams
}
// adjustment stores the info to add a prefix to a path or chop characters off it
type adjustment struct {
prefix string
chop int
}
// do makes the adjustment on s
func (a *adjustment) do(s string) string {
if a.prefix != "" {
return join(a.prefix, s)
}
return s[a.chop:]
}
// upstream represents an upstream Fs
type upstream struct {
f fs.Fs
parent *Fs
dir string // directory the upstream is mounted
pathAdjustment adjustment // how to fiddle with the path
}
// Create an upstream from the directory it is mounted on and the remote
func (f *Fs) newUpstream(ctx context.Context, dir, remote string) (*upstream, error) {
uFs, err := cache.Get(ctx, remote)
if err == fs.ErrorIsFile {
return nil, fmt.Errorf("can't combine files yet, only directories %q: %w", remote, err)
}
if err != nil {
return nil, fmt.Errorf("failed to create upstream %q: %w", remote, err)
}
u := &upstream{
f: uFs,
parent: f,
dir: dir,
}
if len(f.root) < len(dir) {
u.pathAdjustment.prefix = dir[:len(dir)-len(f.root)]
} else {
u.pathAdjustment.chop = len(f.root) - len(dir)
}
return u, nil
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs.Fs, err error) {
// defer log.Trace(nil, "name=%q, root=%q, m=%v", name, root, m)("f=%+v, err=%v", &outFs, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Check that the upstreams have been configured
if len(opt.Upstreams) == 0 {
return nil, errors.New("combine can't point to an empty upstream - check the value of the upstreams setting")
}
for _, u := range opt.Upstreams {
if strings.HasPrefix(u, name+":") {
return nil, errors.New("can't point combine remote at itself - check the value of the upstreams setting")
}
}
f := &Fs{
name: name,
root: root,
opt: *opt,
upstreams: make(map[string]*upstream, len(opt.Upstreams)),
when: time.Now(),
}
g, ctx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
return fmt.Errorf("no \"=\" in upstream definition %q", upstream)
}
dir, remote := upstream[:equal], upstream[equal+1:]
if dir == "" {
return fmt.Errorf("empty dir in upstream definition %q", upstream)
}
if remote == "" {
return fmt.Errorf("empty remote in upstream definition %q", upstream)
}
u, err := f.newUpstream(ctx, dir, remote)
if err != nil {
return err
}
mu.Lock()
f.upstreams[dir] = u
mu.Unlock()
return nil
})
}
err = g.Wait()
if err != nil {
return nil, err
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
features = features.Mask(ctx, u.f) // Mask all upstream fs
if !operations.CanServerSideMove(u.f) {
canMove = false
}
}
// We can move if all remotes support Move or Copy
if canMove {
features.Move = f.Move
}
// Enable ListR when upstreams either support ListR or are local
// But not when all upstreams are local
if features.ListR == nil {
for _, u := range f.upstreams {
if u.f.Features().ListR != nil {
features.ListR = f.ListR
} else if !u.f.Features().IsLocal {
features.ListR = nil
break
}
}
}
// Enable Purge when any upstreams support it
if features.Purge == nil {
for _, u := range f.upstreams {
if u.f.Features().Purge != nil {
features.Purge = f.Purge
break
}
}
}
// Enable Shutdown when any upstreams support it
if features.Shutdown == nil {
for _, u := range f.upstreams {
if u.f.Features().Shutdown != nil {
features.Shutdown = f.Shutdown
break
}
}
}
// Enable DirCacheFlush when any upstreams support it
if features.DirCacheFlush == nil {
for _, u := range f.upstreams {
if u.f.Features().DirCacheFlush != nil {
features.DirCacheFlush = f.DirCacheFlush
break
}
}
}
f.features = features
// Get common intersection of hashes
var hashSet hash.Set
var first = true
for _, u := range f.upstreams {
if first {
hashSet = u.f.Hashes()
first = false
} else {
hashSet = hashSet.Overlap(u.f.Hashes())
}
}
f.hashSet = hashSet
// Check to see if the root is actually a file
if f.root != "" {
_, err := f.NewObject(ctx, "")
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile || err == fs.ErrorIsDir {
// File doesn't exist or is a directory so return old f
return f, nil
}
return nil, err
}
// The root is an existing file, so point the Fs at its parent directory
oldRoot := f.root
newRoot, leaf := path.Split(oldRoot)
f.root = newRoot
// Adjust path adjustment to remove leaf
for _, u := range f.upstreams {
u.pathAdjustment.chop -= len(leaf) + 1
}
return f, fs.ErrorIsFile
}
return f, nil
}
// Run a function over all the upstreams in parallel
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
}
return g.Wait()
}
// join the elements together, but unlike path.Join return an empty string instead of "."
func join(elem ...string) string {
result := path.Join(elem...)
if result == "." {
return ""
}
return result
}
// find the upstream for the remote passed in, returning the upstream and the adjusted path
func (f *Fs) findUpstream(remote string) (u *upstream, uRemote string, err error) {
// defer log.Trace(remote, "")("f=%v, uRemote=%q, err=%v", &u, &uRemote, &err)
absolute := join(f.root, remote)
for dir, u := range f.upstreams {
dirSlash := dir + "/"
foundStart := -1
foundEnd := -1
if absolute == dir {
foundEnd = len(dir)
foundStart = foundEnd
} else if strings.HasPrefix(absolute, dirSlash) {
foundEnd = len(dirSlash)
foundStart = foundEnd - 1
}
if foundStart > 0 {
uRemote = absolute[foundEnd:]
return u, uRemote, nil
}
}
return nil, "", fmt.Errorf("combine for remote %q: %w", remote, fs.ErrorDirNotFound)
}
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("combine root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// The root always exists
if f.root == "" && dir == "" {
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.f.Rmdir(ctx, uRemote)
}
// Hashes returns the hash types supported by all the upstreams
func (f *Fs) Hashes() hash.Set {
return f.hashSet
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// The root always exists
if f.root == "" && dir == "" {
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.f.Mkdir(ctx, uRemote)
}
// purge the upstream or fallback to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
err = do(ctx, dir)
} else {
err = operations.Purge(ctx, u.f, dir)
}
return err
}
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
if f.root == "" && dir == "" {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
return u.purge(ctx, "")
})
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.purge(ctx, uRemote)
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
dstU, dstRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := dstU.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
o, err := do(ctx, srcObj.Object, dstRemote)
if err != nil {
return nil, err
}
return dstU.newObject(o), nil
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
dstU, dstRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := dstU.f.Features().Move
useCopy := false
if do == nil {
do = dstU.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantMove
}
useCopy = true
}
o, err := do(ctx, srcObj.Object, dstRemote)
if err != nil {
return nil, err
}
// If did Copy then remove the source object
if useCopy {
err = srcObj.Remove(ctx)
if err != nil {
return nil, err
}
}
return dstU.newObject(o), nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
// defer log.Trace(f, "src=%v, srcRemote=%q, dstRemote=%q", src, srcRemote, dstRemote)("err=%v", &err)
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(src, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
dstU, dstURemote, err := f.findUpstream(dstRemote)
if err != nil {
return err
}
srcU, srcURemote, err := srcFs.findUpstream(srcRemote)
if err != nil {
return err
}
do := dstU.f.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
fs.Logf(dstU.f, "srcU.f=%v, srcURemote=%q, dstURemote=%q", srcU.f, srcURemote, dstURemote)
return do(ctx, srcU.f, srcURemote, dstURemote)
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, fn func(string, fs.EntryType), ch <-chan time.Duration) {
var uChans []chan time.Duration
for _, u := range f.upstreams {
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)
do(ctx, fn, ch)
}
}
go func() {
for i := range ch {
for _, c := range uChans {
c <- i
}
}
for _, c := range uChans {
close(c)
}
}()
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
ctx := context.Background()
_ = f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().DirCacheFlush; do != nil {
do()
}
return nil
})
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) {
srcPath := src.Remote()
u, uRemote, err := f.findUpstream(srcPath)
if err != nil {
return nil, err
}
uSrc := operations.NewOverrideRemote(src, uRemote)
var o fs.Object
if stream {
o, err = u.f.Features().PutStream(ctx, in, uSrc, options...)
} else {
o, err = u.f.Put(ctx, in, uSrc, options...)
}
if err != nil {
return nil, err
}
return u.newObject(o), nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, false, options...)
default:
return nil, err
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, true, options...)
default:
return nil, err
}
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
usage := &fs.Usage{
Total: new(int64),
Used: new(int64),
Trashed: new(int64),
Other: new(int64),
Free: new(int64),
Objects: new(int64),
}
for _, u := range f.upstreams {
doAbout := u.f.Features().About
if doAbout == nil {
continue
}
usg, err := doAbout(ctx)
if errors.Is(err, fs.ErrorDirNotFound) {
continue
}
if err != nil {
return nil, err
}
if usg.Total != nil && usage.Total != nil {
*usage.Total += *usg.Total
} else {
usage.Total = nil
}
if usg.Used != nil && usage.Used != nil {
*usage.Used += *usg.Used
} else {
usage.Used = nil
}
if usg.Trashed != nil && usage.Trashed != nil {
*usage.Trashed += *usg.Trashed
} else {
usage.Trashed = nil
}
if usg.Other != nil && usage.Other != nil {
*usage.Other += *usg.Other
} else {
usage.Other = nil
}
if usg.Free != nil && usage.Free != nil {
*usage.Free += *usg.Free
} else {
usage.Free = nil
}
if usg.Objects != nil && usage.Objects != nil {
*usage.Objects += *usg.Objects
} else {
usage.Objects = nil
}
}
return usage, nil
}
// Wraps entries for this upstream
func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.DirEntries, error) {
for i, entry := range entries {
switch x := entry.(type) {
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newDir := fs.NewDirCopy(ctx, x)
newDir.SetRemote(u.pathAdjustment.do(newDir.Remote()))
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
}
}
return entries, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewDir(combineDir, f.when)
entries = append(entries, d)
}
return entries, nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
entries, err = u.f.List(ctx, uRemote)
if err != nil {
return nil, err
}
return u.wrapEntries(ctx, entries)
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
// defer log.Trace(f, "dir=%q, callback=%v", dir, callback)("err=%v", &err)
if f.root == "" && dir == "" {
rootEntries, err := f.List(ctx, "")
if err != nil {
return err
}
err = callback(rootEntries)
if err != nil {
return err
}
var mu sync.Mutex
syncCallback := func(entries fs.DirEntries) error {
mu.Lock()
defer mu.Unlock()
return callback(entries)
}
err = f.multithread(ctx, func(ctx context.Context, u *upstream) error {
return f.ListR(ctx, u.dir, syncCallback)
})
if err != nil {
return err
}
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
wrapCallback := func(entries fs.DirEntries) error {
entries, err := u.wrapEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
if do := u.f.Features().ListR; do != nil {
err = do(ctx, uRemote, wrapCallback)
} else {
err = walk.ListR(ctx, u.f, uRemote, true, -1, walk.ListAll, wrapCallback)
}
if err == fs.ErrorDirNotFound {
err = nil
}
return err
}
// NewObject creates a new remote combine file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
o, err := u.f.NewObject(ctx, uRemote)
if err != nil {
return nil, err
}
return u.newObject(o), nil
}
// Precision is the greatest Precision of all upstreams
func (f *Fs) Precision() time.Duration {
var greatestPrecision time.Duration
for _, u := range f.upstreams {
uPrecision := u.f.Precision()
if uPrecision > greatestPrecision {
greatestPrecision = uPrecision
}
}
return greatestPrecision
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().Shutdown; do != nil {
return do(ctx)
}
return nil
})
}
// Object describes a wrapped Object
//
// This is a wrapped Object which knows its path prefix
type Object struct {
fs.Object
u *upstream
}
func (u *upstream) newObject(o fs.Object) *Object {
return &Object{
Object: o,
u: u,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.u.parent
}
// String returns the remote path
func (o *Object) String() string {
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.u.pathAdjustment.do(o.Object.String())
}
// MimeType returns the content type of the Object if known
func (o *Object) MimeType(ctx context.Context) (mimeType string) {
if do, ok := o.Object.(fs.MimeTyper); ok {
mimeType = do.MimeType(ctx)
}
return mimeType
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
)

View File

@@ -0,0 +1,79 @@
// Test Combine filesystem interface
package combine_test
import (
"testing"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/memory"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestLocal(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs := MakeTestDirs(t, 3)
upstreams := "dir1=" + dirs[0] + " dir2=" + dirs[1] + " dir3=" + dirs[2]
name := "TestCombineLocal"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
})
}
func TestMemory(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
upstreams := "dir1=:memory:dir1 dir2=:memory:dir2 dir3=:memory:dir3"
name := "TestCombineMemory"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
})
}
func TestMixed(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs := MakeTestDirs(t, 2)
upstreams := "dir1=" + dirs[0] + " dir2=" + dirs[1] + " dir3=:memory:dir3"
name := "TestCombineMixed"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
})
}
// MakeTestDirs makes directories in /tmp for testing
func MakeTestDirs(t *testing.T, n int) (dirs []string) {
for i := 1; i <= n; i++ {
dir := t.TempDir()
dirs = append(dirs, dir)
}
return dirs
}


@@ -401,6 +401,10 @@ func isCompressible(r io.Reader) (bool, error) {
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
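
Only the tail of isCompressible is visible in this hunk. A self-contained sketch of the same heuristic, assuming a gzip compressor writing into a buffer and an illustrative threshold (the real minCompressionRatio constant is not shown here):

    package main

    import (
        "bytes"
        "compress/gzip"
        "fmt"
        "io"
        "strings"
    )

    // isCompressibleSketch reports whether compressing the input saves enough
    // space to be worth it: it compresses r into a buffer and compares the
    // number of input bytes against the compressed size.
    func isCompressibleSketch(r io.Reader, minRatio float64) (bool, error) {
        var b bytes.Buffer
        w := gzip.NewWriter(&b)
        n, err := io.Copy(w, r)
        if err != nil {
            return false, err
        }
        if err := w.Close(); err != nil {
            return false, err
        }
        ratio := float64(n) / float64(b.Len())
        return ratio > minRatio, nil
    }

    func main() {
        ok, _ := isCompressibleSketch(strings.NewReader(strings.Repeat("abcd", 4096)), 1.1)
        fmt.Println(ok) // true: highly repetitive data compresses well
    }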
@@ -626,9 +630,11 @@ func (f *Fs) putMetadata(ctx context.Context, meta *ObjectMetadata, src fs.Objec
// Put the data
mo, err = put(ctx, metaReader, f.wrapInfo(src, makeMetadataName(src.Remote()), int64(len(data))), options...)
if err != nil {
removeErr := mo.Remove(ctx)
if removeErr != nil {
fs.Errorf(mo, "Failed to remove partially transferred object: %v", err)
if mo != nil {
removeErr := mo.Remove(ctx)
if removeErr != nil {
fs.Errorf(mo, "Failed to remove partially transferred object: %v", err)
}
}
return nil, err
}


@@ -18,6 +18,7 @@ import (
"mime"
"net/http"
"path"
"regexp"
"sort"
"strconv"
"strings"
@@ -84,7 +85,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
_mimeTypeToExtensionDuplicates = map[string]string{
"application/x-vnd.oasis.opendocument.presentation": ".odp",
@@ -299,6 +300,17 @@ a non root folder as its starting point.
Default: true,
Help: "Send files to the trash instead of deleting permanently.\n\nDefaults to true, namely sending files to the trash.\nUse `--drive-use-trash=false` to delete files permanently instead.",
Advanced: true,
}, {
Name: "copy_shortcut_content",
Default: false,
Help: `Server side copy contents of shortcuts instead of the shortcut.
When doing server side copies, normally rclone will copy shortcuts as
shortcuts.
If this flag is used then rclone will copy the contents of shortcuts
rather than shortcuts themselves when doing server side copies.`,
Advanced: true,
}, {
Name: "skip_gdocs",
Default: false,
@@ -542,6 +554,14 @@ Google don't document so it may break in the future.
Normally rclone dereferences shortcut files making them appear as if
they are the original file (see [the shortcuts section](#shortcuts)).
If this flag is set then rclone will ignore shortcut files completely.
`,
Advanced: true,
Default: false,
}, {
Name: "skip_dangling_shortcuts",
Help: `If set, skip dangling shortcut files.
If this is set then rclone will not show any dangling shortcuts in listings.
`,
Advanced: true,
Default: false,
@@ -578,6 +598,7 @@ type Options struct {
TeamDriveID string `config:"team_drive"`
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
CopyShortcutContent bool `config:"copy_shortcut_content"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
@@ -604,6 +625,7 @@ type Options struct {
StopOnUploadLimit bool `config:"stop_on_upload_limit"`
StopOnDownloadLimit bool `config:"stop_on_download_limit"`
SkipShortcuts bool `config:"skip_shortcuts"`
SkipDanglingShortcuts bool `config:"skip_dangling_shortcuts"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -906,6 +928,11 @@ OUTER:
if err != nil {
return false, fmt.Errorf("list: %w", err)
}
// leave the dangling shortcut out of the listings
// we've already logged about the dangling shortcut in resolveShortcut
if f.opt.SkipDanglingShortcuts && item.MimeType == shortcutMimeTypeDangling {
continue
}
}
// Check the case of items is correct since
// the `=` operator is case insensitive.
@@ -1571,6 +1598,15 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
}
}
// If using a link type export and a more specific export
// hasn't been found, all docs should be exported
for _, _extension := range f.exportExtensions {
_mimeType := mime.TypeByExtension(_extension)
if isLinkMimeType(_mimeType) {
return _extension, _mimeType, true
}
}
// else return empty
return "", "", isDocument
}
@@ -1581,6 +1617,14 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", "", false)
func (f *Fs) findExportFormat(ctx context.Context, item *drive.File) (extension, filename, mimeType string, isDocument bool) {
// If item has MD5 sum it is a file stored on drive
if item.Md5Checksum != "" {
return
}
// Folders can't be documents
if item.MimeType == driveFolderType {
return
}
extension, mimeType, isDocument = f.findExportFormatByMimeType(ctx, item.MimeType)
if extension != "" {
filename = item.Name + extension
@@ -2374,9 +2418,16 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
createInfo.Description = ""
}
// get the ID of the thing to copy - this is the shortcut if available
// get the ID of the thing to copy
// copy the contents if CopyShortcutContent
// else copy the shortcut only
id := shortcutID(srcObj.id)
if f.opt.CopyShortcutContent {
id = actualID(srcObj.id)
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Copy(id, createInfo).
@@ -3185,7 +3236,7 @@ This will return a JSON list of objects like this
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
drives found.
drives found and a combined drive.
[My Drive]
type = alias
@@ -3195,10 +3246,15 @@ drives found.
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. This may require manual editing
of the names.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
drives combined into one directory tree.
`,
}, {
Name: "untrash",
@@ -3314,14 +3370,30 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
if err != nil {
return nil, err
}
re := regexp.MustCompile(`[^\w_. -]+`)
if _, ok := opt["config"]; ok {
lines := []string{}
for _, drive := range drives {
upstreams := []string{}
names := make(map[string]struct{}, len(drives))
for i, drive := range drives {
name := re.ReplaceAllString(drive.Name, "_")
for {
if _, found := names[name]; !found {
break
}
name += fmt.Sprintf("-%d", i)
}
names[name] = struct{}{}
lines = append(lines, "")
lines = append(lines, fmt.Sprintf("[%s]", drive.Name))
lines = append(lines, fmt.Sprintf("[%s]", name))
lines = append(lines, fmt.Sprintf("type = alias"))
lines = append(lines, fmt.Sprintf("remote = %s,team_drive=%s,root_folder_id=:", f.name, drive.Id))
upstreams = append(upstreams, fmt.Sprintf(`"%s=%s:"`, name, name))
}
lines = append(lines, "")
lines = append(lines, fmt.Sprintf("[AllDrives]"))
lines = append(lines, fmt.Sprintf("type = combine"))
lines = append(lines, fmt.Sprintf("upstreams = %s", strings.Join(upstreams, " ")))
return lines, nil
}
return drives, nil
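
A standalone sketch of the sanitisation and de-duplication performed in the config branch above, using only what the diff shows: names are reduced to `[\w_. -]` by the regexp and clashes gain an index suffix. The helper and example names below are illustrative:

    package main

    import (
        "fmt"
        "regexp"
    )

    var configNameRe = regexp.MustCompile(`[^\w_. -]+`)

    // dedupeNames mirrors the loop above: replace illegal characters with "_"
    // and append "-<index>" until the name no longer collides.
    func dedupeNames(driveNames []string) []string {
        seen := make(map[string]struct{}, len(driveNames))
        out := make([]string, 0, len(driveNames))
        for i, driveName := range driveNames {
            name := configNameRe.ReplaceAllString(driveName, "_")
            for {
                if _, found := seen[name]; !found {
                    break
                }
                name += fmt.Sprintf("-%d", i)
            }
            seen[name] = struct{}{}
            out = append(out, name)
        }
        return out
    }

    func main() {
        fmt.Println(dedupeNames([]string{"My Drive", "Team: Shared", "My Drive"}))
        // [My Drive Team_ Shared My Drive-2]
    }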


@@ -422,11 +422,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
require.NoError(t, err)
o := obj.(*Object)
dir, err := ioutil.TempDir("", "rclone-drive-copyid-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(dir)
}()
dir := t.TempDir()
checkFile := func(name string) {
filePath := filepath.Join(dir, name)
@@ -491,19 +487,11 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
subFs, isDriveFs := subFsResult.(*Fs)
require.True(t, isDriveFs)
tempDir1, err := ioutil.TempDir("", "rclone-drive-agequery1-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(tempDir1)
}()
tempDir1 := t.TempDir()
tempFs1, err := fs.NewFs(defCtx, tempDir1)
require.NoError(t, err)
tempDir2, err := ioutil.TempDir("", "rclone-drive-agequery2-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(tempDir2)
}()
tempDir2 := t.TempDir()
tempFs2, err := fs.NewFs(defCtx, tempDir2)
require.NoError(t, err)


@@ -1650,13 +1650,37 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
}
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
skip := int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
if _, err = chunk.Seek(skip, io.SeekStart); err != nil {
return false, err
}
err = o.fs.srv.UploadSessionAppendV2(&appendArg, chunk)
// after session is started, we retry everything
if err != nil {
// Check for incorrect offset error and retry with new offset
if uErr, ok := err.(files.UploadSessionAppendV2APIError); ok {
if uErr.EndpointError != nil && uErr.EndpointError.IncorrectOffset != nil {
correctOffset := uErr.EndpointError.IncorrectOffset.CorrectOffset
delta := int64(correctOffset) - int64(cursor.Offset)
skip += delta
what := fmt.Sprintf("incorrect offset error received: sent %d, need %d, skip %d", cursor.Offset, correctOffset, skip)
if skip < 0 {
return false, fmt.Errorf("can't seek backwards to correct offset: %s", what)
} else if skip == chunkSize {
fs.Debugf(o, "%s: chunk received OK - continuing", what)
return false, nil
} else if skip > chunkSize {
// This error should never happen
return false, fmt.Errorf("can't seek forwards by more than a chunk to correct offset: %s", what)
}
// Skip the sent data on next retry
cursor.Offset = uint64(int64(cursor.Offset) + delta)
fs.Debugf(o, "%s: skipping bytes on retry to fix offset", what)
}
}
}
return err != nil, err
})
if err != nil {
@@ -1760,7 +1784,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
entry, err = o.uploadChunked(ctx, in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
entry, err = o.fs.srv.Upload(commitInfo, in)
entry, err = o.fs.srv.Upload(&files.UploadArg{CommitInfo: *commitInfo}, in)
return shouldRetry(ctx, err)
})
}
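
A small worked example of the incorrect-offset recovery above, using the same arithmetic as the retry branch: the server reports the offset it actually holds, and the difference says how many bytes of the current chunk to skip on the next attempt. The helper name and sample numbers are illustrative:

    package main

    import "fmt"

    // correctSkip mirrors the incorrect-offset recovery: given the offset we
    // thought we were at and the offset the server reports, return the new
    // cursor offset and how far into the chunk to seek on retry.
    func correctSkip(cursorOffset, correctOffset uint64, skip int64) (newCursor uint64, newSkip int64) {
        delta := int64(correctOffset) - int64(cursorOffset)
        newSkip = skip + delta
        newCursor = uint64(int64(cursorOffset) + delta)
        return newCursor, newSkip
    }

    func main() {
        // We sent a chunk starting at offset 100, but the server says it
        // already holds 164 bytes, so 64 bytes of this chunk arrived.
        cursor, skip := correctSkip(100, 164, 0)
        fmt.Println(cursor, skip) // 164 64: seek 64 bytes into the chunk and resend the rest
    }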


@@ -15,7 +15,7 @@ import (
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/rclone/ftp"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"


@@ -65,7 +65,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
)
@@ -182,15 +182,30 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "asia-northeast1",
Help: "Tokyo",
}, {
Value: "asia-northeast2",
Help: "Osaka",
}, {
Value: "asia-northeast3",
Help: "Seoul",
}, {
Value: "asia-south1",
Help: "Mumbai",
}, {
Value: "asia-south2",
Help: "Delhi",
}, {
Value: "asia-southeast1",
Help: "Singapore",
}, {
Value: "asia-southeast2",
Help: "Jakarta",
}, {
Value: "australia-southeast1",
Help: "Sydney",
}, {
Value: "australia-southeast2",
Help: "Melbourne",
}, {
Value: "europe-north1",
Help: "Finland",
@@ -206,6 +221,12 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "europe-west4",
Help: "Netherlands",
}, {
Value: "europe-west6",
Help: "Zürich",
}, {
Value: "europe-central2",
Help: "Warsaw",
}, {
Value: "us-central1",
Help: "Iowa",
@@ -221,6 +242,33 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "us-west2",
Help: "California",
}, {
Value: "us-west3",
Help: "Salt Lake City",
}, {
Value: "us-west4",
Help: "Las Vegas",
}, {
Value: "northamerica-northeast1",
Help: "Montréal",
}, {
Value: "northamerica-northeast2",
Help: "Toronto",
}, {
Value: "southamerica-east1",
Help: "São Paulo",
}, {
Value: "southamerica-west1",
Help: "Santiago",
}, {
Value: "asia1",
Help: "Dual region: asia-northeast1 and asia-northeast2.",
}, {
Value: "eur4",
Help: "Dual region: europe-north1 and europe-west4.",
}, {
Value: "nam4",
Help: "Dual region: us-central1 and us-east1.",
}},
}, {
Name: "storage_class",
@@ -247,6 +295,15 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Value: "DURABLE_REDUCED_AVAILABILITY",
Help: "Durable reduced availability storage class",
}},
}, {
Name: "no_check_bucket",
Help: `If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -269,6 +326,7 @@ type Options struct {
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
NoCheckBucket bool `config:"no_check_bucket"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -434,7 +492,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)
@@ -792,6 +850,14 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
}, nil)
}
// checkBucket creates the bucket if it doesn't exist unless NoCheckBucket is true
func (f *Fs) checkBucket(ctx context.Context, bucket string) error {
if f.opt.NoCheckBucket {
return nil
}
return f.makeBucket(ctx, bucket)
}
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty: Error 409: The bucket you tried
@@ -825,7 +891,7 @@ func (f *Fs) Precision() time.Duration {
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
err := f.checkBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -1075,7 +1141,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
err := o.fs.checkBucket(ctx, bucket)
if err != nil {
return err
}


@@ -69,7 +69,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
)


@@ -202,7 +202,11 @@ func (f *Fs) wrapEntries(baseEntries fs.DirEntries) (hashEntries fs.DirEntries,
for _, entry := range baseEntries {
switch x := entry.(type) {
case fs.Object:
hashEntries = append(hashEntries, f.wrapObject(x, nil))
obj, err := f.wrapObject(x, nil)
if err != nil {
return nil, err
}
hashEntries = append(hashEntries, obj)
default:
hashEntries = append(hashEntries, entry) // trash in - trash out
}
@@ -251,7 +255,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
if do := f.Fs.Features().PutStream; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutStream not supported")
}
@@ -261,7 +265,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
if do := f.Fs.Features().PutUnchecked; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutUnchecked not supported")
}
@@ -348,7 +352,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantCopy
}
oResult, err := do(ctx, o.Object, remote)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
// Move src to this remote using server-side move operations.
@@ -371,7 +375,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dir: false,
fs: f,
})
return f.wrapObject(oResult, nil), nil
return f.wrapObject(oResult, nil)
}
// DirMove moves src, srcRemote to this remote at dstRemote using server-side move operations.
@@ -410,7 +414,7 @@ func (f *Fs) Shutdown(ctx context.Context) (err error) {
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(ctx, remote)
return f.wrapObject(o, err), err
return f.wrapObject(o, err)
}
//
@@ -424,11 +428,15 @@ type Object struct {
}
// Wrap base object into hasher object
func (f *Fs) wrapObject(o fs.Object, err error) *Object {
if err != nil || o == nil {
return nil
func (f *Fs) wrapObject(o fs.Object, err error) (obj fs.Object, outErr error) {
// log.Trace(o, "err=%v", err)("obj=%#v, outErr=%v", &obj, &outErr)
if err != nil {
return nil, err
}
return &Object{Object: o, f: f}
if o == nil {
return nil, fs.ErrorObjectNotFound
}
return &Object{Object: o, f: f}, nil
}
// Fs returns read only access to the Fs that this object is part of


@@ -184,7 +184,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (r io.ReadC
// Put data into the remote path with given modTime and size
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
var (
o *Object
o fs.Object
common hash.Set
rehash bool
hashes hashMap
@@ -210,8 +210,8 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
_ = f.pruneHash(src.Remote())
oResult, err := f.Fs.Put(ctx, wrapIn, src, options...)
o = f.wrapObject(oResult, err)
if o == nil {
o, err = f.wrapObject(oResult, err)
if err != nil {
return nil, err
}
@@ -224,7 +224,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}
if len(hashes) > 0 {
err := o.putHashes(ctx, hashes)
err := o.(*Object).putHashes(ctx, hashes)
fs.Debugf(o, "Applied %d source hashes, err: %v", len(hashes), err)
}
return o, err


@@ -28,13 +28,28 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
Description: "Koofr, Digi Storage and other Koofr-compatible storage providers",
NewFs: NewFs,
Options: []fs.Option{{
Name: fs.ConfigProvider,
Help: "Choose your storage provider.",
// NOTE if you add a new provider here, then add it in the
// setProviderDefaults() function and update options accordingly
Examples: []fs.OptionExample{{
Value: "koofr",
Help: "Koofr, https://app.koofr.net/",
}, {
Value: "digistorage",
Help: "Digi Storage, https://storage.rcs-rds.ro/",
}, {
Value: "other",
Help: "Any other Koofr API compatible storage service",
}},
}, {
Name: "endpoint",
Help: "The Koofr API endpoint to use.",
Default: "https://app.koofr.net",
Advanced: true,
Provider: "other",
Required: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use.\n\nIf omitted, the primary mount is used.",
@@ -46,11 +61,24 @@ func init() {
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name.",
Help: "Your user name.",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Help: "Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Provider: "koofr",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).",
Provider: "digistorage",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at your service's settings page).",
Provider: "other",
IsPassword: true,
Required: true,
}, {
@@ -67,6 +95,7 @@ func init() {
// Options represent the configuration of the Koofr backend
type Options struct {
Provider string `config:"provider"`
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
@@ -251,13 +280,38 @@ func (f *Fs) fullPath(part string) string {
return f.opt.Enc.FromStandardPath(path.Join("/", f.root, part))
}
// NewFs constructs a new filesystem given a root path and configuration options
func setProviderDefaults(opt *Options) {
// handle old, provider-less configs
if opt.Provider == "" {
if opt.Endpoint == "" || strings.HasPrefix(opt.Endpoint, "https://app.koofr.net") {
opt.Provider = "koofr"
} else if strings.HasPrefix(opt.Endpoint, "https://storage.rcs-rds.ro") {
opt.Provider = "digistorage"
} else {
opt.Provider = "other"
}
}
// now assign an endpoint
if opt.Provider == "koofr" {
opt.Endpoint = "https://app.koofr.net"
} else if opt.Provider == "digistorage" {
opt.Endpoint = "https://storage.rcs-rds.ro"
}
}
// NewFs constructs a new filesystem given a root path and rclone configuration options
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
setProviderDefaults(opt)
return NewFsFromOptions(ctx, name, root, opt)
}
// NewFsFromOptions constructs a new filesystem given a root path and internal configuration options
func NewFsFromOptions(ctx context.Context, name, root string, opt *Options) (ff fs.Fs, err error) {
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, err


@@ -58,7 +58,7 @@ type UserInfoResponse struct {
AutoProlong bool `json:"auto_prolong"`
Basequota int64 `json:"basequota"`
Enabled bool `json:"enabled"`
Expires int `json:"expires"`
Expires int64 `json:"expires"`
Prolong bool `json:"prolong"`
Promocodes struct {
} `json:"promocodes"`
@@ -80,7 +80,7 @@ type UserInfoResponse struct {
FileSizeLimit int64 `json:"file_size_limit"`
Space struct {
BytesTotal int64 `json:"bytes_total"`
BytesUsed int `json:"bytes_used"`
BytesUsed int64 `json:"bytes_used"`
Overquota bool `json:"overquota"`
} `json:"space"`
} `json:"cloud"`


@@ -1572,7 +1572,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
total := info.Body.Cloud.Space.BytesTotal
used := int64(info.Body.Cloud.Space.BytesUsed)
used := info.Body.Cloud.Space.BytesUsed
usage := &fs.Usage{
Total: fs.NewUsageValue(total),

backend/netstorage/netstorage.go (executable file, 1277 lines; diff suppressed because it is too large)


@@ -0,0 +1,16 @@
package netstorage_test
import (
"testing"
"github.com/rclone/rclone/backend/netstorage"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestnStorage:",
NilObject: (*netstorage.Object)(nil),
})
}


@@ -140,6 +140,15 @@ Note that the chunks will be buffered into memory.`,
Help: "The type of the drive (" + driveTypePersonal + " | " + driveTypeBusiness + " | " + driveTypeSharepoint + ").",
Default: "",
Advanced: true,
}, {
Name: "root_folder_id",
Help: `ID of the root folder.
This isn't normally needed, but in special circumstances you might
know the folder ID that you wish to access but not be able to get
there through a path traversal.
`,
Advanced: true,
}, {
Name: "disable_site_permission",
Help: `Disable the request for Sites.Read.All permission.
@@ -547,6 +556,7 @@ type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DriveID string `config:"drive_id"`
DriveType string `config:"drive_type"`
RootFolderID string `config:"root_folder_id"`
DisableSitePermission bool `config:"disable_site_permission"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
@@ -639,6 +649,12 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
retry := false
if resp != nil {
switch resp.StatusCode {
case 400:
if apiErr, ok := err.(*api.Error); ok {
if apiErr.ErrorInfo.InnerError.Code == "pathIsTooLong" {
return false, fserrors.NoRetryError(err)
}
}
case 401:
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
@@ -852,15 +868,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
// Get rootID
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to get root: %w", err)
var rootID = opt.RootFolderID
if rootID == "" {
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to get root: %w", err)
}
rootID = rootInfo.GetID()
}
if rootInfo.GetID() == "" {
if rootID == "" {
return nil, errors.New("failed to get root: ID was empty")
}
f.dirCache = dircache.New(root, rootInfo.GetID(), f)
f.dirCache = dircache.New(root, rootID, f)
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
@@ -868,7 +888,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootInfo.ID, &tempF)
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)


@@ -690,7 +690,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix()))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", uint64(srcObj.modTime.Unix())))
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
@@ -1171,7 +1171,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
opts.Parameters.Set("filename", leaf)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("nopartial", "1")
opts.Parameters.Set("mtime", fmt.Sprintf("%d", modTime.Unix()))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", uint64(modTime.Unix())))
// Special treatment for a 0 length upload. This doesn't work
// with PUT even with Content-Length set (by setting


@@ -4,16 +4,21 @@ import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
)
func checkStatusCode(resp *http.Response, expected int) error {
if resp.StatusCode != expected {
return &statusCodeError{response: resp}
func checkStatusCode(resp *http.Response, expected ...int) error {
for _, code := range expected {
if resp.StatusCode == code {
return nil
}
}
return nil
return &statusCodeError{response: resp}
}
type statusCodeError struct {
@@ -24,8 +29,10 @@ func (e *statusCodeError) Error() string {
return fmt.Sprintf("unexpected status code (%d) response while doing %s to %s", e.response.StatusCode, e.response.Request.Method, e.response.Request.URL.String())
}
// This method is called from fserrors.ShouldRetry() to determine if an error should be retried.
// Some errors (e.g. 429 Too Many Requests) are handled before this step, so they are not included here.
func (e *statusCodeError) Temporary() bool {
return e.response.StatusCode == 429 || e.response.StatusCode >= 500
return e.response.StatusCode >= 500
}
// shouldRetry returns a boolean as to whether this err deserves to be
@@ -40,6 +47,16 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
if perr, ok := err.(*putio.ErrorResponse); ok {
err = &statusCodeError{response: perr.Response}
}
if scerr, ok := err.(*statusCodeError); ok && scerr.response.StatusCode == 429 {
delay := defaultRateLimitSleep
header := scerr.response.Header.Get("x-ratelimit-reset")
if header != "" {
if resetTime, cerr := strconv.ParseInt(header, 10, 64); cerr == nil {
delay = time.Until(time.Unix(resetTime+1, 0))
}
}
return true, pacer.RetryAfterError(scerr, delay)
}
if fserrors.ShouldRetry(err) {
return true, err
}
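
A minimal sketch of the 429 handling above, assuming x-ratelimit-reset carries a Unix timestamp; the header name and the one-second margin come from the diff, the helper itself is illustrative:

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    // rateLimitDelay returns how long to wait before retrying a 429 response,
    // falling back to a default when the reset header is missing or malformed.
    func rateLimitDelay(h http.Header, fallback time.Duration) time.Duration {
        header := h.Get("x-ratelimit-reset")
        if header == "" {
            return fallback
        }
        resetTime, err := strconv.ParseInt(header, 10, 64)
        if err != nil {
            return fallback
        }
        // Wait until one second past the advertised reset time.
        return time.Until(time.Unix(resetTime+1, 0))
    }

    func main() {
        h := http.Header{}
        h.Set("x-ratelimit-reset", strconv.FormatInt(time.Now().Add(30*time.Second).Unix(), 10))
        fmt.Println(rateLimitDelay(h, 60*time.Second).Round(time.Second)) // roughly 30s
    }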


@@ -302,8 +302,8 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
if err != nil {
return false, err
}
if resp.StatusCode != 201 {
return false, fmt.Errorf("unexpected status code from upload create: %d", resp.StatusCode)
if err := checkStatusCode(resp, 201); err != nil {
return shouldRetry(ctx, err)
}
location = resp.Header.Get("location")
if location == "" {


@@ -241,7 +241,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = o.fs.httpClient.Do(req)
return shouldRetry(ctx, err)
if err != nil {
return shouldRetry(ctx, err)
}
if err := checkStatusCode(resp, 200, 206); err != nil {
return shouldRetry(ctx, err)
}
return false, nil
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()


@@ -33,8 +33,9 @@ const (
rcloneObscuredClientSecret = "cMwrjWVmrHZp3gf1ZpCrlyGAmPpB-YY5BbVnO1fj-G9evcd8"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
decayConstant = 1 // bigger for slower decay, exponential
defaultChunkSize = 48 * fs.Mebi
defaultRateLimitSleep = 60 * time.Second
)
var (


@@ -58,7 +58,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, RackCorp, SeaweedFS, and Tencent COS",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
@@ -75,6 +75,9 @@ func init() {
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
}, {
Value: "ChinaMobile",
Help: "China Mobile Ecloud Elastic Object Storage (EOS)",
}, {
Value: "DigitalOcean",
Help: "Digital Ocean Spaces",
@@ -84,6 +87,9 @@ func init() {
}, {
Value: "IBMCOS",
Help: "IBM COS S3",
}, {
Value: "LyveCloud",
Help: "Seagate Lyve Cloud",
}, {
Value: "Minio",
Help: "Minio Object Storage",
@@ -102,6 +108,9 @@ func init() {
}, {
Value: "StackPath",
Help: "StackPath Object Storage",
}, {
Value: "Storj",
Help: "Storj (S3 Compatible Gateway)",
}, {
Value: "TencentCOS",
Help: "Tencent Cloud Object Storage (COS)",
@@ -288,7 +297,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,RackCorp,Scaleway,TencentCOS",
Provider: "!AWS,Alibaba,ChinaMobile,RackCorp,Scaleway,Storj,TencentCOS",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -300,6 +309,102 @@ func init() {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nLeave blank if using AWS to use the default endpoint for the region.",
Provider: "AWS",
}, {
// ChinaMobile endpoints: https://ecloud.10086.cn/op-help-center/doc/article/24534
Name: "endpoint",
Help: "Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "eos-wuxi-1.cmecloud.cn",
Help: "The default endpoint - a good choice if you are unsure.\nEast China (Suzhou)",
}, {
Value: "eos-jinan-1.cmecloud.cn",
Help: "East China (Jinan)",
}, {
Value: "eos-ningbo-1.cmecloud.cn",
Help: "East China (Hangzhou)",
}, {
Value: "eos-shanghai-1.cmecloud.cn",
Help: "East China (Shanghai-1)",
}, {
Value: "eos-zhengzhou-1.cmecloud.cn",
Help: "Central China (Zhengzhou)",
}, {
Value: "eos-hunan-1.cmecloud.cn",
Help: "Central China (Changsha-1)",
}, {
Value: "eos-zhuzhou-1.cmecloud.cn",
Help: "Central China (Changsha-2)",
}, {
Value: "eos-guangzhou-1.cmecloud.cn",
Help: "South China (Guangzhou-2)",
}, {
Value: "eos-dongguan-1.cmecloud.cn",
Help: "South China (Guangzhou-3)",
}, {
Value: "eos-beijing-1.cmecloud.cn",
Help: "North China (Beijing-1)",
}, {
Value: "eos-beijing-2.cmecloud.cn",
Help: "North China (Beijing-2)",
}, {
Value: "eos-beijing-4.cmecloud.cn",
Help: "North China (Beijing-3)",
}, {
Value: "eos-huhehaote-1.cmecloud.cn",
Help: "North China (Huhehaote)",
}, {
Value: "eos-chengdu-1.cmecloud.cn",
Help: "Southwest China (Chengdu)",
}, {
Value: "eos-chongqing-1.cmecloud.cn",
Help: "Southwest China (Chongqing)",
}, {
Value: "eos-guiyang-1.cmecloud.cn",
Help: "Southwest China (Guiyang)",
}, {
Value: "eos-xian-1.cmecloud.cn",
Help: "Nouthwest China (Xian)",
}, {
Value: "eos-yunnan.cmecloud.cn",
Help: "Yunnan China (Kunming)",
}, {
Value: "eos-yunnan-2.cmecloud.cn",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "eos-tianjin-1.cmecloud.cn",
Help: "Tianjin China (Tianjin)",
}, {
Value: "eos-jilin-1.cmecloud.cn",
Help: "Jilin China (Changchun)",
}, {
Value: "eos-hubei-1.cmecloud.cn",
Help: "Hubei China (Xiangyan)",
}, {
Value: "eos-jiangxi-1.cmecloud.cn",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "eos-gansu-1.cmecloud.cn",
Help: "Gansu China (Lanzhou)",
}, {
Value: "eos-shanxi-1.cmecloud.cn",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "eos-liaoning-1.cmecloud.cn",
Help: "Liaoning China (Shenyang)",
}, {
Value: "eos-hebei-1.cmecloud.cn",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "eos-fujian-1.cmecloud.cn",
Help: "Fujian China (Xiamen)",
}, {
Value: "eos-guangxi-1.cmecloud.cn",
Help: "Guangxi China (Nanning)",
}, {
Value: "eos-anhui-1.cmecloud.cn",
Help: "Anhui China (Huainan)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for IBM COS S3 API.\n\nSpecify if using an IBM COS On Premise.",
@@ -597,6 +702,20 @@ func init() {
Value: "s3.eu-central-1.stackpathstorage.com",
Help: "EU Endpoint",
}},
}, {
Name: "endpoint",
Help: "Endpoint of the Shared Gateway.",
Provider: "Storj",
Examples: []fs.OptionExample{{
Value: "gateway.eu1.storjshare.io",
Help: "EU1 Shared Gateway",
}, {
Value: "gateway.us1.storjshare.io",
Help: "US1 Shared Gateway",
}, {
Value: "gateway.ap1.storjshare.io",
Help: "Asia-Pacific Shared Gateway",
}},
}, {
// cos endpoints: https://intl.cloud.tencent.com/document/product/436/6224
Name: "endpoint",
@@ -726,7 +845,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,RackCorp",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,ChinaMobile,Scaleway,StackPath,Storj,RackCorp",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -747,6 +866,18 @@ func init() {
Value: "localhost:8333",
Help: "SeaweedFS S3 localhost",
Provider: "SeaweedFS",
}, {
Value: "s3.us-east-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud US East 1 (Virginia)",
Provider: "LyveCloud",
}, {
Value: "s3.us-west-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud US West 1 (California)",
Provider: "LyveCloud",
}, {
Value: "s3.ap-southeast-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud AP Southeast 1 (Singapore)",
Provider: "LyveCloud",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi US East endpoint",
@@ -848,6 +979,101 @@ func init() {
Value: "us-gov-west-1",
Help: "AWS GovCloud (US) Region",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint.\n\nUsed when creating buckets only.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "wuxi1",
Help: "East China (Suzhou)",
}, {
Value: "jinan1",
Help: "East China (Jinan)",
}, {
Value: "ningbo1",
Help: "East China (Hangzhou)",
}, {
Value: "shanghai1",
Help: "East China (Shanghai-1)",
}, {
Value: "zhengzhou1",
Help: "Central China (Zhengzhou)",
}, {
Value: "hunan1",
Help: "Central China (Changsha-1)",
}, {
Value: "zhuzhou1",
Help: "Central China (Changsha-2)",
}, {
Value: "guangzhou1",
Help: "South China (Guangzhou-2)",
}, {
Value: "dongguan1",
Help: "South China (Guangzhou-3)",
}, {
Value: "beijing1",
Help: "North China (Beijing-1)",
}, {
Value: "beijing2",
Help: "North China (Beijing-2)",
}, {
Value: "beijing4",
Help: "North China (Beijing-3)",
}, {
Value: "huhehaote1",
Help: "North China (Huhehaote)",
}, {
Value: "chengdu1",
Help: "Southwest China (Chengdu)",
}, {
Value: "chongqing1",
Help: "Southwest China (Chongqing)",
}, {
Value: "guiyang1",
Help: "Southwest China (Guiyang)",
}, {
Value: "xian1",
Help: "Nouthwest China (Xian)",
}, {
Value: "yunnan",
Help: "Yunnan China (Kunming)",
}, {
Value: "yunnan2",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "tianjin1",
Help: "Tianjin China (Tianjin)",
}, {
Value: "jilin1",
Help: "Jilin China (Changchun)",
}, {
Value: "hubei1",
Help: "Hubei China (Xiangyan)",
}, {
Value: "jiangxi1",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "gansu1",
Help: "Gansu China (Lanzhou)",
}, {
Value: "shanxi1",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "liaoning1",
Help: "Liaoning China (Shenyang)",
}, {
Value: "hebei1",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "fujian1",
Help: "Fujian China (Xiamen)",
}, {
Value: "guangxi1",
Help: "Guangxi China (Nanning)",
}, {
Value: "anhui1",
Help: "Anhui China (Huainan)",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint when using IBM Cloud Public.\n\nFor on-prem COS, do not make a selection from this list, hit enter.",
@@ -1014,7 +1240,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,TencentCOS",
Provider: "!AWS,IBMCOS,Alibaba,ChinaMobile,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -1025,6 +1251,7 @@ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.`,
Provider: "!Storj",
Examples: []fs.OptionExample{{
Value: "default",
Help: "Owner gets Full_CONTROL.\nNo one else has access rights (default).",
@@ -1048,11 +1275,11 @@ doesn't copy the ACL from the source but rather writes a fresh one.`,
}, {
Value: "bucket-owner-read",
Help: "Object owner gets FULL_CONTROL.\nBucket owner gets READ access.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "bucket-owner-full-control",
Help: "Both the object owner and the bucket owner get FULL_CONTROL over the object.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "private",
Help: "Owner gets FULL_CONTROL.\nNo one else has access rights (default).\nThis acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.",
@@ -1101,7 +1328,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "server_side_encryption",
Help: "The server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
@@ -1109,13 +1336,14 @@ isn't set then "acl" is used instead.`,
Value: "AES256",
Help: "AES256",
}, {
Value: "aws:kms",
Help: "aws:kms",
Value: "aws:kms",
Help: "aws:kms",
Provider: "!ChinaMobile",
}},
}, {
Name: "sse_customer_algorithm",
Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1138,7 +1366,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "sse_customer_key",
Help: "If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1150,7 +1378,7 @@ isn't set then "acl" is used instead.`,
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
`,
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1206,6 +1434,24 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://ecloud.10086.cn/op-help-center/doc/article/24495
Name: "storage_class",
Help: "The storage class to use when storing new objects in ChinaMobile.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}, {
Value: "GLACIER",
Help: "Archive storage mode",
}, {
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://intl.cloud.tencent.com/document/product/436/30925
Name: "storage_class",
@@ -1530,6 +1776,14 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.`,
Advanced: true,
}, {
Name: "use_multipart_etag",
Help: `Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
`,
Default: fs.Tristate{},
Advanced: true,
},
}})
}
@@ -1594,6 +1848,7 @@ type Options struct {
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
DisableHTTP2 bool `config:"disable_http2"`
DownloadURL string `config:"download_url"`
UseMultipartEtag fs.Tristate `config:"use_multipart_etag"`
}
// Fs represents a remote s3 server
@@ -1897,16 +2152,21 @@ func setQuirks(opt *Options) {
listObjectsV2 = true
virtualHostStyle = true
urlEncodeListings = true
useMultipartEtag = true
)
switch opt.Provider {
case "AWS":
// No quirks
case "Alibaba":
// No quirks
useMultipartEtag = false // Alibaba seems to calculate multipart Etags differently from AWS
case "Ceph":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "ChinaMobile":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "DigitalOcean":
urlEncodeListings = false
case "Dreamhost":
@@ -1915,13 +2175,18 @@ func setQuirks(opt *Options) {
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
case "LyveCloud":
useMultipartEtag = false // LyveCloud seems to calculate multipart Etags differently from AWS
case "Minio":
virtualHostStyle = false
case "Netease":
listObjectsV2 = false // untested
urlEncodeListings = false
useMultipartEtag = false // untested
case "RackCorp":
// No quirks
useMultipartEtag = false // untested
case "Scaleway":
// Scaleway can only have 1000 parts in an upload
if opt.MaxUploadParts > 1000 {
@@ -1932,23 +2197,32 @@ func setQuirks(opt *Options) {
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
case "StackPath":
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
case "Storj":
// Force chunk size to >= 64 MiB
if opt.ChunkSize < 64*fs.Mebi {
opt.ChunkSize = 64 * fs.Mebi
}
case "TencentCOS":
listObjectsV2 = false // untested
listObjectsV2 = false // untested
useMultipartEtag = false // untested
case "Wasabi":
// No quirks
case "Other":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
default:
fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
}
// Path Style vs Virtual Host style
@@ -1970,6 +2244,12 @@ func setQuirks(opt *Options) {
opt.ListVersion = 1
}
}
// Set the correct use multipart Etag for error checking if not manually set
if !opt.UseMultipartEtag.Valid {
opt.UseMultipartEtag.Valid = true
opt.UseMultipartEtag.Value = useMultipartEtag
}
}
// setRoot changes the root of the Fs
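
A minimal illustration of the tristate default pattern used for use_multipart_etag, based only on the Valid/Value fields visible in the diff: an option the user left unset keeps Valid == false until setQuirks fills in the provider default. The local type below stands in for fs.Tristate:

    package main

    import "fmt"

    // tristate mirrors the Valid/Value pair used above: Valid == false means
    // "not set by the user", so the provider default applies.
    type tristate struct {
        Valid bool
        Value bool
    }

    // applyDefault fills in the provider default only when the option is unset.
    func applyDefault(opt *tristate, providerDefault bool) {
        if !opt.Valid {
            opt.Valid = true
            opt.Value = providerDefault
        }
    }

    func main() {
        unset := tristate{}                // user did not set the flag
        userFalse := tristate{Valid: true} // user explicitly set false
        applyDefault(&unset, true)
        applyDefault(&userFalse, true)
        fmt.Println(unset.Value, userFalse.Value) // true false
    }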
@@ -2070,6 +2350,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
if opt.Provider == "Storj" {
f.features.Copy = nil
f.features.SetTier = false
f.features.GetTier = false
}
// f.listMultipartUploads()
return f, nil
}
@@ -3216,9 +3501,6 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if err != nil {
return err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
}
o.setMetaData(resp.ETag, resp.ContentLength, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
return nil
}
@@ -3248,6 +3530,7 @@ func (o *Object) setMetaData(etag *string, contentLength *int64, lastModified *t
o.storageClass = aws.StringValue(storageClass)
if lastModified == nil {
o.lastModified = time.Now()
fs.Logf(o, "Failed to read last modified")
} else {
o.lastModified = *lastModified
}
@@ -3419,9 +3702,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified: %v", err)
}
// read size from ContentLength or ContentRange
size := resp.ContentLength
if resp.ContentRange != nil {
@@ -3444,7 +3725,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var warnStreamUpload sync.Once
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (err error) {
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (etag string, err error) {
f := o.fs
// make concurrency machinery
@@ -3491,7 +3772,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("multipart upload failed to initialise: %w", err)
return etag, fmt.Errorf("multipart upload failed to initialise: %w", err)
}
uid := cout.UploadId
@@ -3520,8 +3801,21 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
partsMu sync.Mutex // to protect parts
parts []*s3.CompletedPart
off int64
md5sMu sync.Mutex
md5s []byte
)
addMd5 := func(md5binary *[md5.Size]byte, partNum int64) {
md5sMu.Lock()
defer md5sMu.Unlock()
start := partNum * md5.Size
end := start + md5.Size
if extend := end - int64(len(md5s)); extend > 0 {
md5s = append(md5s, make([]byte, extend)...)
}
copy(md5s[start:end], (*md5binary)[:])
}
for partNum := int64(1); !finished; partNum++ {
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
@@ -3551,7 +3845,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
finished = true
} else if err != nil {
free()
return fmt.Errorf("multipart upload failed to read source: %w", err)
return etag, fmt.Errorf("multipart upload failed to read source: %w", err)
}
buf = buf[:n]
@@ -3564,6 +3858,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
// create checksum of buffer for integrity checking
md5sumBinary := md5.Sum(buf)
addMd5(&md5sumBinary, partNum-1)
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
err = f.pacer.Call(func() (bool, error) {
@@ -3605,7 +3900,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
}
err = g.Wait()
if err != nil {
return err
return etag, err
}
// sort the completed parts by part number
@@ -3626,9 +3921,11 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("multipart upload failed to finalise: %w", err)
return etag, fmt.Errorf("multipart upload failed to finalise: %w", err)
}
return nil
hashOfHashes := md5.Sum(md5s)
etag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
return etag, nil
}
// Update the Object from in with modTime and size
@@ -3654,19 +3951,20 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// - so we can add the md5sum in the metadata as metaMD5Hash if using SSE/SSE-C
// - for multipart provided checksums aren't disabled
// - so we can add the md5sum in the metadata as metaMD5Hash
var md5sum string
var md5sumBase64 string
var md5sumHex string
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)
md5sumHex, err = src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(md5sumHex) {
hashBytes, err := hex.DecodeString(md5sumHex)
if err == nil {
md5sum = base64.StdEncoding.EncodeToString(hashBytes)
md5sumBase64 = base64.StdEncoding.EncodeToString(hashBytes)
if (multipart || o.fs.etagIsNotMD5) && !o.fs.opt.DisableChecksum {
// Set the md5sum as metadata on the object if
// - a multipart upload
// - the Etag is not an MD5, eg when using SSE/SSE-C
// provided checksums aren't disabled
metadata[metaMD5Hash] = &md5sum
metadata[metaMD5Hash] = &md5sumBase64
}
}
}
@@ -3681,8 +3979,8 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
if md5sumBase64 != "" {
req.ContentMD5 = &md5sumBase64
}
if o.fs.opt.RequesterPays {
req.RequestPayer = aws.String(s3.RequestPayerRequester)
@@ -3736,8 +4034,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
var resp *http.Response // response from PUT
var wantETag string // Multipart upload Etag to check
if multipart {
err = o.uploadMultipart(ctx, &req, size, in)
wantETag, err = o.uploadMultipart(ctx, &req, size, in)
if err != nil {
return err
}
@@ -3799,7 +4098,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// so make up the object as best we can assuming it got
// uploaded properly. If size < 0 then we need to do the HEAD.
if o.fs.opt.NoHead && size >= 0 {
o.md5 = md5sum
o.md5 = md5sumHex
o.bytes = size
o.lastModified = time.Now()
o.meta = req.Metadata
@@ -3817,7 +4116,18 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Read the metadata from the newly created object
o.meta = nil // wipe old metadata
err = o.readMetaData(ctx)
head, err := o.headObject(ctx)
if err != nil {
return err
}
o.setMetaData(head.ETag, head.ContentLength, head.LastModified, head.Metadata, head.ContentType, head.StorageClass)
if o.fs.opt.UseMultipartEtag.Value && !o.fs.etagIsNotMD5 && wantETag != "" && head.ETag != nil && *head.ETag != "" {
gotETag := strings.Trim(strings.ToLower(*head.ETag), `"`)
if wantETag != gotETag {
return fmt.Errorf("multipart upload corrupted: Etag differ: expecting %s but got %s", wantETag, gotETag)
}
fs.Debugf(o, "Multipart upload Etag: %s OK", wantETag)
}
return err
}
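
A self-contained sketch of the ETag the new verification code expects: an AWS-style multipart ETag is the MD5 of the concatenated per-part MD5s, followed by "-" and the part count, which is what addMd5 and hashOfHashes accumulate above. The chunking helper below is illustrative:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
    )

    // multipartETag computes the AWS-style ETag for data uploaded in chunks of
    // chunkSize bytes: md5(md5(part1) || md5(part2) || ...) + "-" + numParts.
    func multipartETag(data []byte, chunkSize int) string {
        var md5s []byte
        parts := 0
        for start := 0; start < len(data); start += chunkSize {
            end := start + chunkSize
            if end > len(data) {
                end = len(data)
            }
            partSum := md5.Sum(data[start:end])
            md5s = append(md5s, partSum[:]...)
            parts++
        }
        hashOfHashes := md5.Sum(md5s)
        return fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), parts)
    }

    func main() {
        data := make([]byte, 10*1024*1024) // 10 MiB of zeroes, split into two parts
        fmt.Println(multipartETag(data, 5*1024*1024))
    }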


@@ -1,8 +1,8 @@
//go:build !plan9
// +build !plan9
// Package tardigrade provides an interface to Tardigrade decentralized object storage.
package tardigrade
// Package storj provides an interface to Storj decentralized object storage.
package storj
import (
"context"
@@ -31,16 +31,17 @@ const (
)
var satMap = map[string]string{
"us-central-1.tardigrade.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
"europe-west-1.tardigrade.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777",
"asia-east-1.tardigrade.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777",
"us-central-1.storj.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
"europe-west-1.storj.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777",
"asia-east-1.storj.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777",
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "tardigrade",
Description: "Tardigrade Decentralized Cloud Storage",
Name: "storj",
Description: "Storj Decentralized Cloud Storage",
Aliases: []string{"tardigrade"},
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, configIn fs.ConfigIn) (*fs.ConfigOut, error) {
provider, _ := m.Get(fs.ConfigProvider)
@@ -104,15 +105,15 @@ func init() {
Name: "satellite_address",
Help: "Satellite address.\n\nCustom satellite address should match the format: `<nodeid>@<address>:<port>`.",
Provider: newProvider,
Default: "us-central-1.tardigrade.io",
Default: "us-central-1.storj.io",
Examples: []fs.OptionExample{{
Value: "us-central-1.tardigrade.io",
Value: "us-central-1.storj.io",
Help: "US Central 1",
}, {
Value: "europe-west-1.tardigrade.io",
Value: "europe-west-1.storj.io",
Help: "Europe West 1",
}, {
Value: "asia-east-1.tardigrade.io",
Value: "asia-east-1.storj.io",
Help: "Asia East 1",
},
},
@@ -140,7 +141,7 @@ type Options struct {
Passphrase string `config:"passphrase"`
}
// Fs represents a remote to Tardigrade
// Fs represents a remote to Storj
type Fs struct {
name string // the name of the remote
root string // root of the filesystem
@@ -158,11 +159,12 @@ var (
_ fs.Fs = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
)
// NewFs creates a filesystem backed by Tardigrade.
// NewFs creates a filesystem backed by Storj.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs, err error) {
// Setup filesystem and connection to Tardigrade
// Setup filesystem and connection to Storj
root = norm.NFC.String(root)
root = strings.Trim(root, "/")
@@ -183,24 +185,24 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
if f.opts.Access != "" {
access, err = uplink.ParseAccess(f.opts.Access)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
}
if access == nil && f.opts.SatelliteAddress != "" && f.opts.APIKey != "" && f.opts.Passphrase != "" {
access, err = uplink.RequestAccessWithPassphrase(ctx, f.opts.SatelliteAddress, f.opts.APIKey, f.opts.Passphrase)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
serializedAccess, err := access.Serialize()
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
err = config.SetValueAndSave(f.name, "access_grant", serializedAccess)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
}
@@ -232,7 +234,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
if bucketName != "" && bucketPath != "" {
_, err = project.StatBucket(ctx, bucketName)
if err != nil {
return f, fmt.Errorf("tardigrade: bucket: %w", err)
return f, fmt.Errorf("storj: bucket: %w", err)
}
object, err := project.StatObject(ctx, bucketName, bucketPath)
@@ -258,7 +260,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
return f, nil
}
// connect opens a connection to Tardigrade.
// connect opens a connection to Storj.
func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
fs.Debugf(f, "connecting...")
defer fs.Debugf(f, "connected: %+v", err)
@@ -269,7 +271,7 @@ func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
project, err = cfg.OpenProject(ctx, f.access)
if err != nil {
return nil, fmt.Errorf("tardigrade: project: %w", err)
return nil, fmt.Errorf("storj: project: %w", err)
}
return
@@ -678,3 +680,43 @@ func newPrefix(prefix string) string {
return prefix + "/"
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Move parameters
srcBucket, srcKey := bucket.Split(srcObj.absolute)
dstBucket, dstKey := f.absolute(remote)
options := uplink.MoveObjectOptions{}
// Do the move
err := f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
// Make sure destination bucket exists
_, err := f.project.EnsureBucket(ctx, dstBucket)
if err != nil {
return nil, fmt.Errorf("rename object failed to create destination bucket: %w", err)
}
// And try again
err = f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
return nil, fmt.Errorf("rename object failed: %w", err)
}
}
// Read the new object
return f.NewObject(ctx, remote)
}
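A rough illustration of the helpers used above (an assumption, for illustration only): for a filesystem rooted at "bucket1/dir",

	srcObj.absolute               // "bucket1/dir/old.txt"
	bucket.Split(srcObj.absolute) // -> ("bucket1", "dir/old.txt")
	f.absolute("new.txt")         // -> ("bucket1", "dir/new.txt")

so MoveObject receives plain (bucket, key) pairs for both ends, and EnsureBucket is only needed when the destination bucket has not been created yet.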


@@ -1,7 +1,7 @@
//go:build !plan9
// +build !plan9
package tardigrade
package storj
import (
"context"
@@ -18,7 +18,7 @@ import (
"storj.io/uplink"
)
// Object describes a Tardigrade object
// Object describes a Storj object
type Object struct {
fs *Fs
@@ -32,7 +32,7 @@ type Object struct {
// Check the interfaces are satisfied.
var _ fs.Object = &Object{}
// newObjectFromUplink creates a new object from a Tardigrade uplink object.
// newObjectFromUplink creates a new object from a Storj uplink object.
func newObjectFromUplink(f *Fs, relative string, object *uplink.Object) *Object {
// Attempt to use the modified time from the metadata. Otherwise
// fallback to the server time.


@@ -1,20 +1,20 @@
//go:build !plan9
// +build !plan9
// Test Tardigrade filesystem interface
package tardigrade_test
// Test Storj filesystem interface
package storj_test
import (
"testing"
"github.com/rclone/rclone/backend/tardigrade"
"github.com/rclone/rclone/backend/storj"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestTardigrade:",
NilObject: (*tardigrade.Object)(nil),
RemoteName: "TestStorj:",
NilObject: (*storj.Object)(nil),
})
}


@@ -1,4 +1,4 @@
//go:build plan9
// +build plan9
package tardigrade
package storj


@@ -754,22 +754,34 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var containers []swift.Container
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container listing failed: %w", err)
}
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var total, objects int64
for _, c := range containers {
total += c.Bytes
objects += c.Count
if f.rootContainer != "" {
var container swift.Container
err = f.pacer.Call(func() (bool, error) {
container, _, err = f.c.Container(ctx, f.rootContainer)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container info failed: %w", err)
}
total = container.Bytes
objects = container.Count
} else {
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container listing failed: %w", err)
}
for _, c := range containers {
total += c.Bytes
objects += c.Count
}
}
usage := &fs.Usage{
usage = &fs.Usage{
Used: fs.NewUsageValue(total), // bytes in use
Objects: fs.NewUsageValue(objects), // objects in use
}


@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"sync"
"time"
@@ -84,6 +85,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err := o.Update(ctx, readers[i], src, options...)
if err != nil {
errs[i] = fmt.Errorf("%s: %w", o.UpstreamFs().Name(), err)
if len(entries) > 1 {
// Drain the input buffer to allow other uploads to continue
_, _ = io.Copy(ioutil.Discard, readers[i])
}
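// Why drain? (An assumption about the surrounding plumbing, not stated in
// this change.) The per-upstream readers are branches of a single source,
// typically an io.Pipe fed via io.MultiWriter, so a branch that stops
// reading after a failed upload would block the shared writer and stall the
// remaining uploads. A minimal sketch of that blocking behaviour:
//
//	pr, pw := io.Pipe()
//	go func() { _, _ = pw.Write(make([]byte, 1<<20)); _ = pw.Close() }()
//	_, _ = io.Copy(ioutil.Discard, pr) // without this read, the Write above never returns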
}
} else {
errs[i] = fs.ErrorNotAFile


@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"path"
"path/filepath"
"strings"
@@ -486,6 +487,10 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bo
}
if err != nil {
errs[i] = fmt.Errorf("%s: %w", u.Name(), err)
if len(upstreams) > 1 {
// Drain the input buffer to allow other uploads to continue
_, _ = io.Copy(ioutil.Discard, readers[i])
}
return
}
objs[i] = u.WrapObject(o)


@@ -4,8 +4,6 @@ import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"testing"
"time"
@@ -20,19 +18,12 @@ import (
)
// MakeTestDirs makes directories in /tmp for testing
func MakeTestDirs(t *testing.T, n int) (dirs []string, clean func()) {
func MakeTestDirs(t *testing.T, n int) (dirs []string) {
for i := 1; i <= n; i++ {
dir, err := ioutil.TempDir("", fmt.Sprintf("rclone-union-test-%d", n))
require.NoError(t, err)
dir := t.TempDir()
dirs = append(dirs, dir)
}
clean = func() {
for _, dir := range dirs {
err := os.RemoveAll(dir)
assert.NoError(t, err)
}
}
return dirs, clean
return dirs
}
func (f *Fs) TestInternalReadOnly(t *testing.T) {
@@ -95,8 +86,7 @@ func TestMoveCopy(t *testing.T) {
t.Skip("Skipping as -remote set")
}
ctx := context.Background()
dirs, clean := MakeTestDirs(t, 1)
defer clean()
dirs := MakeTestDirs(t, 1)
fsString := fmt.Sprintf(":union,upstreams='%s :memory:bucket':", dirs[0])
f, err := fs.NewFs(ctx, fsString)
require.NoError(t, err)


@@ -27,8 +27,7 @@ func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnion"
fstests.Run(t, &fstests.Opt{
@@ -49,8 +48,7 @@ func TestRO(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + ":ro " + dirs[2] + ":ro"
name := "TestUnionRO"
fstests.Run(t, &fstests.Opt{
@@ -71,8 +69,7 @@ func TestNC(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + ":nc " + dirs[2] + ":nc"
name := "TestUnionNC"
fstests.Run(t, &fstests.Opt{
@@ -93,8 +90,7 @@ func TestPolicy1(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy1"
fstests.Run(t, &fstests.Opt{
@@ -115,8 +111,7 @@ func TestPolicy2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy2"
fstests.Run(t, &fstests.Opt{
@@ -137,8 +132,7 @@ func TestPolicy3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy3"
fstests.Run(t, &fstests.Opt{


@@ -1,5 +1,5 @@
#!/bin/bash
set -e
docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git
docker build -t rclone/xgo-cgofuse https://github.com/winfsp/cgofuse.git
docker images
docker push rclone/xgo-cgofuse


@@ -52,6 +52,7 @@ var (
var osarches = []string{
"windows/386",
"windows/amd64",
"windows/arm64",
"darwin/amd64",
"darwin/arm64",
"linux/386",
@@ -85,6 +86,13 @@ var archFlags = map[string][]string{
"arm-v7": {"GOARM=7"},
}
// Map Go architectures to NFPM architectures
// Any missing are passed straight through
var goarchToNfpm = map[string]string{
"arm": "arm6",
"arm-v7": "arm7",
}
// runEnv - run a shell command with env
func runEnv(args, env []string) error {
if *debug {
@@ -167,11 +175,15 @@ func buildDebAndRpm(dir, version, goarch string) []string {
pkgVersion := version[1:]
pkgVersion = strings.Replace(pkgVersion, "β", "-beta", -1)
pkgVersion = strings.Replace(pkgVersion, "-", ".", -1)
nfpmArch, ok := goarchToNfpm[goarch]
if !ok {
nfpmArch = goarch
}
// Make nfpm.yaml from the template
substitute("../bin/nfpm.yaml", path.Join(dir, "nfpm.yaml"), map[string]string{
"Version": pkgVersion,
"Arch": goarch,
"Arch": nfpmArch,
})
// build them
@@ -377,7 +389,7 @@ func compileArch(version, goos, goarch, dir string) bool {
artifacts := []string{buildZip(dir)}
// build a .deb and .rpm if appropriate
if goos == "linux" {
artifacts = append(artifacts, buildDebAndRpm(dir, version, stripVersion(goarch))...)
artifacts = append(artifacts, buildDebAndRpm(dir, version, goarch)...)
}
if *copyAs != "" {
for _, artifact := range artifacts {


@@ -24,6 +24,7 @@ docs = [
"overview.md",
"flags.md",
"docker.md",
"bisync.md",
# Keep these alphabetical by full name
"fichier.md",
@@ -52,6 +53,7 @@ docs = [
"mailru.md",
"mega.md",
"memory.md",
"netstorage.md",
"azureblob.md",
"onedrive.md",
"opendrive.md",
@@ -63,8 +65,9 @@ docs = [
"putio.md",
"seafile.md",
"sftp.md",
"storj.md",
"sugarsync.md",
"tardigrade.md",
"tardigrade.md", # stub only to redirect to storj.md
"uptobox.md",
"union.md",
"webdav.md",


@@ -13,7 +13,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"


@@ -18,7 +18,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
@@ -168,7 +168,7 @@ func mount(VFS *vfs.VFS, mountPath string, opt *mountlib.Options) (<-chan error,
host.SetCapCaseInsensitive(f.Features().CaseInsensitive)
// Create options
options := mountOptions(VFS, f.Name()+":"+f.Root(), mountpoint, opt)
options := mountOptions(VFS, opt.DeviceName, mountpoint, opt)
fs.Debugf(f, "Mounting with options: %q", options)
// Serve the mount point in the background returning error to errChan


@@ -10,11 +10,17 @@
package cmount
import (
"runtime"
"testing"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
// Disable tests under macOS and the CI since they are locking up
if runtime.GOOS == "darwin" {
testy.SkipUnreliable(t)
}
vfstest.RunTests(t, false, mount)
}


@@ -329,12 +329,29 @@ func showBackend(name string) {
if opt.IsPassword {
fmt.Printf("**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).\n\n")
}
fmt.Printf("Properties:\n\n")
fmt.Printf("- Config: %s\n", opt.Name)
fmt.Printf("- Env Var: %s\n", opt.EnvVarName(backend.Prefix))
if opt.Provider != "" {
fmt.Printf("- Provider: %s\n", opt.Provider)
}
fmt.Printf("- Type: %s\n", opt.Type())
fmt.Printf("- Default: %s\n", quoteString(opt.GetValue()))
defaultValue := opt.GetValue()
// Default value and Required are related: Required means option must
// have a value, but if there is a default then a value does not have
// to be explicitly set and then Required makes no difference.
if defaultValue != "" {
fmt.Printf("- Default: %s\n", quoteString(defaultValue))
} else {
fmt.Printf("- Required: %v\n", opt.Required)
}
// List examples / possible choices
if len(opt.Examples) > 0 {
fmt.Printf("- Examples:\n")
if opt.Exclusive {
fmt.Printf("- Choices:\n")
} else {
fmt.Printf("- Examples:\n")
}
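// For reference (taken from the regenerated backend docs further down in
// this set of changes), a string option without a default now renders as:
//
//	Properties:
//
//	- Config: client_id
//	- Env Var: RCLONE_ACD_CLIENT_ID
//	- Type: string
//	- Required: false
//
// and Exclusive options list their values under "Choices:" instead of
// "Examples:".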
for _, ex := range opt.Examples {
fmt.Printf(" - %s\n", quoteString(ex.Value))
for _, line := range strings.Split(ex.Help, "\n") {


@@ -86,7 +86,7 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, f.Name()+":"+f.Root(), opt)...)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, opt.DeviceName, opt)...)
if err != nil {
return nil, nil, err
}


@@ -25,11 +25,10 @@ func init() {
// mountOptions configures the options from the command line flags
//
// man mount.fuse for more info and note the -o flag for other options
func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
device := f.Name() + ":" + f.Root()
func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.MountOptions) {
mountOpts = &fuse.MountOptions{
AllowOther: fsys.opt.AllowOther,
FsName: device,
FsName: opt.DeviceName,
Name: "rclone",
DisableXAttrs: true,
Debug: fsys.opt.DebugFUSE,
@@ -120,7 +119,7 @@ func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
if runtime.GOOS == "darwin" {
opts = append(opts,
// VolumeName sets the volume name shown in Finder.
fmt.Sprintf("volname=%s", device),
fmt.Sprintf("volname=%s", opt.VolumeName),
// NoAppleXattr makes OSXFUSE disallow extended attributes with the
// prefix "com.apple.". This disables persistent Finder state and
@@ -167,7 +166,7 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
//mOpts.Debug = mountlib.DebugFUSE
//conn := fusefs.NewFileSystemConnector(nodeFs.Root(), mOpts)
mountOpts := mountOptions(fsys, f)
mountOpts := mountOptions(fsys, f, opt)
// FIXME fill out
opts := fusefs.Options{


@@ -65,10 +65,10 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone @ on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
uses in combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone @ for Windows.
@@ -218,7 +218,7 @@ from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.


@@ -40,6 +40,7 @@ type Options struct {
ExtraOptions []string
ExtraFlags []string
AttrTimeout time.Duration // how long the kernel caches attribute for
DeviceName string
VolumeName string
NoAppleDouble bool
NoAppleXattr bool
@@ -125,6 +126,7 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads (not supported on Windows)")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)")
flags.StringVarP(flagSet, &Opt.DeviceName, "devname", "", Opt.DeviceName, "Set the device name - default is remote:path")
// Windows and OSX
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (supported on Windows and OSX only)")
// OSX only
@@ -235,6 +237,7 @@ func (m *MountPoint) Mount() (daemon *os.Process, err error) {
return nil, err
}
m.SetVolumeName(m.MountOpt.VolumeName)
m.SetDeviceName(m.MountOpt.DeviceName)
// Start background task if --daemon is specified
if m.MountOpt.Daemon {


@@ -16,11 +16,16 @@ import (
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRc(t *testing.T) {
// Disable tests under macOS and the CI since they are locking up
if runtime.GOOS == "darwin" {
testy.SkipUnreliable(t)
}
ctx := context.Background()
configfile.Install()
mount := rc.Calls.Get("mount/mount")
@@ -30,19 +35,14 @@ func TestRc(t *testing.T) {
getMountTypes := rc.Calls.Get("mount/types")
assert.NotNil(t, getMountTypes)
localDir, err := ioutil.TempDir("", "rclone-mountlib-localDir")
require.NoError(t, err)
defer func() { _ = os.RemoveAll(localDir) }()
err = ioutil.WriteFile(filepath.Join(localDir, "file.txt"), []byte("hello"), 0666)
localDir := t.TempDir()
err := ioutil.WriteFile(filepath.Join(localDir, "file.txt"), []byte("hello"), 0666)
require.NoError(t, err)
mountPoint, err := ioutil.TempDir("", "rclone-mountlib-mountPoint")
require.NoError(t, err)
mountPoint := t.TempDir()
if runtime.GOOS == "windows" {
// Windows requires the mount point not to exist
require.NoError(t, os.RemoveAll(mountPoint))
} else {
defer func() { _ = os.RemoveAll(mountPoint) }()
}
out, err := getMountTypes.Fn(ctx, nil)


@@ -87,7 +87,7 @@ func (m *MountPoint) CheckAllowings() error {
// SetVolumeName with sensible default
func (m *MountPoint) SetVolumeName(vol string) {
if vol == "" {
vol = m.Fs.Name() + ":" + m.Fs.Root()
vol = fs.ConfigString(m.Fs)
}
m.MountOpt.SetVolumeName(vol)
}
@@ -102,3 +102,11 @@ func (o *Options) SetVolumeName(vol string) {
}
o.VolumeName = vol
}
// SetDeviceName with sensible default
func (m *MountPoint) SetDeviceName(dev string) {
if dev == "" {
dev = fs.ConfigString(m.Fs)
}
m.MountOpt.DeviceName = dev
}
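A sketch of the resulting default (an assumption that fs.ConfigString renders the remote in the usual "remote:path" form): mounting "gdrive:path" without --devname gives

	dev := fs.ConfigString(m.Fs)  // "gdrive:path"
	m.MountOpt.DeviceName = dev   // passed to the mount layer as the device name

and --devname simply overrides that value.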


@@ -23,6 +23,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/file"
"github.com/stretchr/testify/assert"
@@ -303,6 +304,10 @@ func (a *APIClient) request(path string, in, out interface{}, wantErr bool) {
}
func testMountAPI(t *testing.T, sockAddr string) {
// Disable tests under macOS and linux in the CI since they are locking up
if runtime.GOOS == "darwin" || runtime.GOOS == "linux" {
testy.SkipUnreliable(t)
}
if _, mountFn := mountlib.ResolveMountMethod(""); mountFn == nil {
t.Skip("Test requires working mount command")
}


@@ -16,6 +16,7 @@ TestFichier:
TestFTP:
TestGoogleCloudStorage:
TestHubic:
TestNetStorage:
TestOneDrive:
TestPcloud:
TestQingStor:


@@ -7,9 +7,7 @@ import (
"crypto/rand"
"encoding/hex"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
"testing"
@@ -113,14 +111,7 @@ func TestResticHandler(t *testing.T) {
}
// setup rclone with a local backend in a temporary directory
tempdir, err := ioutil.TempDir("", "rclone-restic-test-")
require.NoError(t, err)
// make sure the tempdir is properly removed
defer func() {
err := os.RemoveAll(tempdir)
require.NoError(t, err)
}()
tempdir := t.TempDir()
// globally set append-only mode
prev := appendOnly


@@ -7,9 +7,7 @@ import (
"context"
"crypto/rand"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
"testing"
@@ -35,14 +33,7 @@ func TestResticPrivateRepositories(t *testing.T) {
require.NoError(t, err)
// setup rclone with a local backend in a temporary directory
tempdir, err := ioutil.TempDir("", "rclone-restic-test-")
require.NoError(t, err)
// make sure the tempdir is properly removed
defer func() {
err := os.RemoveAll(tempdir)
require.NoError(t, err)
}()
tempdir := t.TempDir()
// globally set private-repos mode & test user
prev := privateRepos


@@ -8,6 +8,7 @@ exec rclone --check-normalization=true --check-control=true --check-length=true
TestDrive:testInfo \
TestDropbox:testInfo \
TestGoogleCloudStorage:rclone-testinfo \
TestNetStorage:testInfo \
TestOneDrive:testInfo \
TestS3:rclone-testinfo \
TestSftp:testInfo \


@@ -5,6 +5,7 @@ package makefiles
import (
"io"
"log"
"math"
"math/rand"
"os"
"path/filepath"
@@ -16,7 +17,9 @@ import (
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
var (
@@ -29,37 +32,51 @@ var (
minFileNameLength = 4
maxFileNameLength = 12
seed = int64(1)
zero = false
sparse = false
ascii = false
pattern = false
chargen = false
// Globals
randSource *rand.Rand
source io.Reader
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
)
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.IntVarP(cmdFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(cmdFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(cmdFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(cmdFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
flags.Int64VarP(cmdFlags, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
test.Command.AddCommand(makefilesCmd)
makefilesFlags := makefilesCmd.Flags()
flags.IntVarP(makefilesFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(makefilesFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(makefilesFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(makefilesFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(makefilesFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(makefilesFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(makefilesFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
test.Command.AddCommand(makefileCmd)
makefileFlags := makefileCmd.Flags()
// Common flags to makefiles and makefile
for _, f := range []*pflag.FlagSet{makefilesFlags, makefileFlags} {
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with an ASCII chargen pattern")
}
}
var commandDefinition = &cobra.Command{
var makefilesCmd = &cobra.Command{
Use: "makefiles <dir>",
Short: `Make a random file hierarchy in a directory`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
commonInit()
outputDirectory := args[0]
directoriesToCreate = numberOfFiles / averageFilesPerDirectory
averageSize := (minFileSize + maxFileSize) / 2
@@ -73,13 +90,130 @@ var commandDefinition = &cobra.Command{
totalBytes := int64(0)
for i := 0; i < numberOfFiles; i++ {
dir := dirs[randSource.Intn(len(dirs))]
totalBytes += writeFile(dir, fileName())
size := int64(minFileSize)
if maxFileSize > minFileSize {
size += randSource.Int63n(int64(maxFileSize - minFileSize))
}
writeFile(dir, fileName(), size)
totalBytes += size
}
dt := time.Since(start)
fs.Logf(nil, "Written %viB in %v at %viB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
var makefileCmd = &cobra.Command{
Use: "makefile <size> [<file>]+ [flags]",
Short: `Make files with random contents of the size given`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1e6, command, args)
commonInit()
var size fs.SizeSuffix
err := size.Set(args[0])
if err != nil {
log.Fatalf("Failed to parse size %q: %v", args[0], err)
}
start := time.Now()
fs.Logf(nil, "Creating %d files of size %v.", len(args[1:]), size)
totalBytes := int64(0)
for _, filePath := range args[1:] {
dir := filepath.Dir(filePath)
name := filepath.Base(filePath)
writeFile(dir, name, int64(size))
totalBytes += int64(size)
}
dt := time.Since(start)
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
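// Example invocations (assumptions based on the Use strings and the flags
// registered above, shown for illustration only):
//
//	rclone test makefile 10G --sparse big.iso
//	rclone test makefiles --files 1000 --ascii /tmp/testdata
//
// Exactly one of --zero, --sparse, --ascii, --pattern or --chargen may be
// given; with none of them the files are filled from the random source.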
func bool2int(b bool) int {
if b {
return 1
}
return 0
}
// common initialisation for makefiles and makefile
func commonInit() {
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
if bool2int(zero)+bool2int(sparse)+bool2int(ascii)+bool2int(pattern)+bool2int(chargen) > 1 {
log.Fatal("Can only supply one of --zero, --sparse, --ascii, --pattern or --chargen")
}
switch {
case zero, sparse:
source = zeroReader{}
case ascii:
source = asciiReader{}
case pattern:
source = readers.NewPatternReader(math.MaxInt64)
case chargen:
source = &chargenReader{}
default:
source = randSource
}
if minFileSize > maxFileSize {
maxFileSize = minFileSize
}
}
type zeroReader struct{}
// Read a chunk of zeroes
func (zeroReader) Read(p []byte) (n int, err error) {
for i := range p {
p[i] = 0
}
return len(p), nil
}
type asciiReader struct{}
// Read a chunk of printable ASCII characters
func (asciiReader) Read(p []byte) (n int, err error) {
n, err = randSource.Read(p)
for i := range p[:n] {
p[i] = (p[i] % (0x7F - 0x20)) + 0x20
}
return n, err
}
type chargenReader struct {
start byte // offset from startChar to start line with
written byte // chars in line so far
}
// Read a chunk of printable ASCII characters in chargen format
func (r *chargenReader) Read(p []byte) (n int, err error) {
const (
startChar = 0x20 // ' '
endChar = 0x7E // '~' inclusive
charsPerLine = 72
)
for i := range p {
if r.written >= charsPerLine {
r.start++
if r.start > endChar-startChar {
r.start = 0
}
p[i] = '\n'
r.written = 0
} else {
c := r.start + r.written + startChar
if c > endChar {
c -= endChar - startChar + 1
}
p[i] = c
r.written++
}
}
return len(p), err
}
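// Illustrative only (not part of this change): reading from chargenReader in
// this package yields the classic rotating pattern, e.g.
//
//	var r chargenReader
//	buf := make([]byte, 2*(72+1))
//	_, _ = io.ReadFull(&r, buf)
//	// line 1: 72 printable chars starting at ' ' (0x20)
//	// line 2: the same run shifted by one, starting at '!'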
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
@@ -134,7 +268,7 @@ func (d *dir) list(path string, output []string) []string {
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) int64 {
func writeFile(dir, name string, size int64) {
err := file.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
@@ -144,8 +278,11 @@ func writeFile(dir, name string) int64 {
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := randSource.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, randSource, size)
if sparse {
err = fd.Truncate(size)
} else {
_, err = io.CopyN(fd, source, size)
}
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
@@ -154,5 +291,4 @@ func writeFile(dir, name string) int64 {
log.Fatalf("Failed to close file %q: %v", path, err)
}
fs.Infof(path, "Written file size %v", fs.SizeSuffix(size))
return size
}


@@ -5,11 +5,13 @@ import (
"context"
"errors"
"fmt"
"log"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
@@ -63,13 +65,32 @@ then add the ` + "`--localtime`" + ` flag.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f, fileName := cmd.NewFsFile(args[0])
f, remote := newFsDst(args)
cmd.Run(true, false, command, func() error {
return Touch(context.Background(), f, fileName)
return Touch(context.Background(), f, remote)
})
},
}
// newFsDst creates a new dst fs from the arguments.
//
// The returned fs will never point to a file. It will point to the
// parent directory of the specified path, and is returned together with
// the basename of the file or directory, except when the argument is only a
// remote name. Similar to cmd.NewFsDstFile, but without raising a fatal
// error when the name of the file or directory is empty (e.g. "remote:" or "remote:path/").
func newFsDst(args []string) (f fs.Fs, remote string) {
root, remote, err := fspath.Split(args[0])
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[0], err)
}
if root == "" {
root = "."
}
f = cmd.NewFsDir([]string{root})
return f, remote
}
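// Rough examples of the split performed above (assumptions about
// fspath.Split, for illustration only):
//
//	fspath.Split("remote:path/file.txt") // -> ("remote:path/", "file.txt", nil)
//	fspath.Split("remote:")              // -> ("remote:", "", nil)
//	fspath.Split("file.txt")             // -> ("", "file.txt", nil)
//
// An empty parent falls back to "." and an empty remote name is handled by
// Touch as "nothing to create".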
// parseTimeArgument parses a timestamp string according to specific layouts
func parseTimeArgument(timeString string) (time.Time, error) {
layout := defaultLayout
@@ -107,47 +128,51 @@ func createEmptyObject(ctx context.Context, remote string, modTime time.Time, f
}
// Touch create new file or change file modification time.
func Touch(ctx context.Context, f fs.Fs, fileName string) error {
func Touch(ctx context.Context, f fs.Fs, remote string) error {
t, err := timeOfTouch()
if err != nil {
return err
}
fs.Debugf(nil, "Touch time %v", t)
file, err := f.NewObject(ctx, fileName)
file, err := f.NewObject(ctx, remote)
if err != nil {
if errors.Is(err, fs.ErrorObjectNotFound) {
// Touch single non-existent file
// Touching non-existent path, possibly creating it as a new file
if remote == "" {
fs.Logf(f, "Not touching empty directory")
return nil
}
if notCreateNewFile {
fs.Logf(f, "Not touching non-existent file due to --no-create")
return nil
}
if recursive {
// For consistency, --recursive never creates new files.
fs.Logf(f, "Not touching non-existent file due to --recursive")
return nil
}
if operations.SkipDestructive(ctx, f, "touch (create)") {
return nil
}
fs.Debugf(f, "Touching (creating)")
if err = createEmptyObject(ctx, fileName, t, f); err != nil {
fs.Debugf(f, "Touching (creating) %q", remote)
if err = createEmptyObject(ctx, remote, t, f); err != nil {
return fmt.Errorf("failed to touch (create): %w", err)
}
}
if errors.Is(err, fs.ErrorIsDir) {
// Touching existing directory
if recursive {
// Touch existing directory, recursive
fs.Debugf(nil, "Touching files in directory recursively")
return operations.TouchDir(ctx, f, t, true)
fs.Debugf(f, "Touching recursively files in directory %q", remote)
return operations.TouchDir(ctx, f, remote, t, true)
}
// Touch existing directory without recursing
fs.Debugf(nil, "Touching files in directory non-recursively")
return operations.TouchDir(ctx, f, t, false)
fs.Debugf(f, "Touching non-recursively files in directory %q", remote)
return operations.TouchDir(ctx, f, remote, t, false)
}
return err
}
// Touch single existing file
if !operations.SkipDestructive(ctx, fileName, "touch") {
fs.Debugf(f, "Touching %q", fileName)
if !operations.SkipDestructive(ctx, remote, "touch") {
fs.Debugf(f, "Touching %q", remote)
err = file.SetModTime(ctx, t)
if err != nil {
return fmt.Errorf("failed to touch: %w", err)


@@ -113,6 +113,15 @@ func TestTouchCreateMultipleDirAndFile(t *testing.T) {
fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1}, []string{"a", "a/b"}, fs.ModTimeNotSupported)
}
func TestTouchEmptyName(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
err := Touch(context.Background(), r.Fremote, "")
require.NoError(t, err)
fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.ModTimeNotSupported)
}
func TestTouchEmptyDir(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()


@@ -102,8 +102,7 @@ var envInitial []string
// sets testConfig to testFolder/rclone.config.
func createTestEnvironment(t *testing.T) {
//Set temporary folder for config and test data
tempFolder, err := ioutil.TempDir("", "rclone_cmdtest_")
require.NoError(t, err)
tempFolder := t.TempDir()
testFolder = filepath.ToSlash(tempFolder)
// Set path to temporary config file


@@ -105,15 +105,18 @@ WebDAV or S3, that work out of the box.)
{{< provider_list >}}
{{< provider name="1Fichier" home="https://1fichier.com/" config="/fichier/" start="true">}}
{{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" note="#status">}}
{{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
{{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/s3/#scaleway" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
@@ -148,12 +151,13 @@ WebDAV or S3, that work out of the box.)
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="Seafile" home="https://www.seafile.com/" config="/seafile/" >}}
{{< provider name="Seagate Lyve Cloud" home="https://www.seagate.com/gb/en/services/cloud/storage/" config="/s3/#lyve" >}}
{{< provider name="SeaweedFS" home="https://github.com/chrislusf/seaweedfs/" config="/s3/#seaweedfs" >}}
{{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}}
{{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Storj" home="https://storj.io/" config="/storj/" >}}
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}


@@ -99,9 +99,11 @@ Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
Properties:
- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
- Default: ""
- Required: true
{{< rem autogenerated options stop >}}


@@ -168,10 +168,12 @@ OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --acd-client-secret
@@ -179,10 +181,12 @@ OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""
- Required: false
### Advanced options
@@ -192,10 +196,12 @@ Here are the advanced options specific to amazon cloud drive (Amazon Drive).
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_ACD_TOKEN
- Type: string
- Default: ""
- Required: false
#### --acd-auth-url
@@ -203,10 +209,12 @@ Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""
- Required: false
#### --acd-token-url
@@ -214,19 +222,23 @@ Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""
- Required: false
#### --acd-checkpoint
Checkpoint for internal polling (debug).
Properties:
- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""
- Required: false
#### --acd-upload-wait-per-gb
@@ -252,6 +264,8 @@ of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing
in this situation.
Properties:
- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
@@ -270,6 +284,8 @@ To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.
Properties:
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
@@ -277,10 +293,12 @@ underlying S3 storage.
#### --acd-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder


@@ -557,3 +557,29 @@ put them back in again.` >}}
* Logeshwaran Murugesan <logeshwaran@testpress.in>
* Lu Wang <coolwanglu@gmail.com>
* Bumsu Hyeon <ksitht@gmail.com>
* Shmz Ozggrn <98463324+ShmzOzggrn@users.noreply.github.com>
* Kim <kim@jotta.no>
* Niels van de Weem <n.van.de.weem@smile.nl>
* Koopa <codingkoopa@gmail.com>
* Yunhai Luo <yunhai-luo@hotmail.com>
* Charlie Jiang <w@chariri.moe>
* Alain Nussbaumer <alain.nussbaumer@alleluia.ch>
* Vanessasaurus <814322+vsoch@users.noreply.github.com>
* Isaac Levy <isaac.r.levy@gmail.com>
* Gourav T <workflowautomation@protonmail.com>
* Paulo Martins <paulo.pontes.m@gmail.com>
* viveknathani <viveknathani2402@gmail.com>
* Eng Zer Jun <engzerjun@gmail.com>
* Abhiraj <abhiraj.official15@gmail.com>
* Márton Elek <elek@apache.org> <elek@users.noreply.github.com>
* Vincent Murphy <vdm@vdm.ie>
* ctrl-q <34975747+ctrl-q@users.noreply.github.com>
* Nil Alexandrov <nalexand@akamai.com>
* GuoXingbin <101376330+guoxingbin@users.noreply.github.com>
* Berkan Teber <berkan@berkanteber.com>
* Tobias Klauser <tklauser@distanz.ch>
* KARBOWSKI Piotr <piotr.karbowski@gmail.com>
* GH <geeklihui@foxmail.com>
* rafma0 <int.main@gmail.com>
* Adrien Rey-Jarthon <jobs@adrienjarthon.com>
* Nick Gooding <73336146+nickgooding@users.noreply.github.com>


@@ -166,10 +166,12 @@ Storage Account Name.
Leave blank to use SAS URL or Emulator.
Properties:
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""
- Required: false
#### --azureblob-service-principal-file
@@ -185,10 +187,12 @@ Leave blank normally. Needed only if you want to use a service principal instead
See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
Properties:
- Config: service_principal_file
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
- Type: string
- Default: ""
- Required: false
#### --azureblob-key
@@ -196,10 +200,12 @@ Storage Account Key.
Leave blank to use SAS URL or Emulator.
Properties:
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""
- Required: false
#### --azureblob-sas-url
@@ -207,10 +213,12 @@ SAS URL for container level access only.
Leave blank if using account/key or Emulator.
Properties:
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""
- Required: false
#### --azureblob-use-msi
@@ -225,6 +233,8 @@ the user-assigned identity will be used by default. If the resource has multiple
identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
msi_client_id, or msi_mi_res_id parameters.
Properties:
- Config: use_msi
- Env Var: RCLONE_AZUREBLOB_USE_MSI
- Type: bool
@@ -236,6 +246,8 @@ Uses local storage emulator if provided as 'true'.
Leave blank if using real azure storage endpoint.
Properties:
- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
@@ -251,10 +263,12 @@ Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_mi_res_id specified.
Properties:
- Config: msi_object_id
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-msi-client-id
@@ -262,10 +276,12 @@ Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_object_id or msi_mi_res_id specified.
Properties:
- Config: msi_client_id
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-msi-mi-res-id
@@ -273,10 +289,12 @@ Azure resource ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_object_id specified.
Properties:
- Config: msi_mi_res_id
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-endpoint
@@ -284,32 +302,65 @@ Endpoint for the service.
Leave blank normally.
Properties:
- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""
- Required: false
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: string
- Default: ""
- Required: false
#### --azureblob-chunk-size
Upload chunk size (<= 100 MiB).
Upload chunk size.
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.
Properties:
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4Mi
#### --azureblob-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
If you are uploading small numbers of large files over high-speed
links and these uploads do not fully utilize your bandwidth, then
increasing this may help to speed up the transfers.
In tests, upload speed increases almost linearly with upload
concurrency. For example to fill a gigabit pipe it may be necessary to
raise this to 64. Note that this will use more memory.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
#### --azureblob-list-chunk
Size of blob list.
@@ -322,6 +373,8 @@ minutes per megabyte on average, it will time out (
). This can be used to limit the number of blobs items to return, to
avoid the time out.
Properties:
- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
@@ -342,10 +395,12 @@ If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".
Properties:
- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""
- Required: false
#### --azureblob-archive-tier-delete
@@ -364,6 +419,8 @@ replacement. This has the potential for data loss if the upload fails
archive tier blobs early may be chargable.
Properties:
- Config: archive_tier_delete
- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
- Type: bool
@@ -378,6 +435,8 @@ uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Properties:
- Config: disable_checksum
- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
- Type: bool
@@ -390,6 +449,8 @@ How often internal memory buffer pools will be flushed.
Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
- Type: Duration
@@ -399,6 +460,8 @@ This option controls how often unused buffers will be removed from the pool.
Whether to use mmap buffers in internal memory pool.
Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
- Type: bool
@@ -406,10 +469,12 @@ Whether to use mmap buffers in internal memory pool.
#### --azureblob-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
- Type: MultiEncoder
@@ -419,10 +484,12 @@ See the [encoding section in the overview](/overview/#encoding) for more info.
Public access level of a container: blob or container.
Properties:
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
- Default: ""
- Required: false
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request.
@@ -436,6 +503,8 @@ Public access level of a container: blob or container.
If set, do not do HEAD before GET when getting objects.
Properties:
- Config: no_head_object
- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
- Type: bool


@@ -329,24 +329,30 @@ Here are the standard options specific to b2 (Backblaze B2).
Account ID or Application Key ID.
Properties:
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
- Default: ""
- Required: true
#### --b2-key
Application Key.
Properties:
- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
- Default: ""
- Required: true
#### --b2-hard-delete
Permanently delete files on remote removal, otherwise hide files.
Properties:
- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
- Type: bool
@@ -362,10 +368,12 @@ Endpoint for the service.
Leave blank normally.
Properties:
- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
- Default: ""
- Required: false
#### --b2-test-mode
@@ -381,10 +389,12 @@ below will cause b2 to return specific errors:
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
Properties:
- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
- Default: ""
- Required: false
#### --b2-versions
@@ -393,6 +403,8 @@ Include old versions in directory listings.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
Properties:
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
@@ -406,6 +418,8 @@ Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657 GiB (== 5 GB).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
@@ -420,6 +434,8 @@ copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GiB.
Properties:
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
@@ -436,6 +452,8 @@ might a maximum of "--transfers" chunks in progress at once.
5,000,000 Bytes is the minimum size.
Properties:
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
@@ -450,6 +468,8 @@ uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Properties:
- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
@@ -466,10 +486,20 @@ If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.
The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".
Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")
Properties:
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""
- Required: false
#### --b2-download-auth-duration
@@ -478,6 +508,8 @@ Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.
Properties:
- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
@@ -489,6 +521,8 @@ How often internal memory buffer pools will be flushed.
Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
- Type: Duration
@@ -498,6 +532,8 @@ This option controls how often unused buffers will be removed from the pool.
Whether to use mmap buffers in internal memory pool.
Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
- Type: bool
@@ -505,10 +541,12 @@ Whether to use mmap buffers in internal memory pool.
#### --b2-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder


@@ -275,10 +275,12 @@ OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --box-client-secret
@@ -286,10 +288,12 @@ OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
- Default: ""
- Required: false
#### --box-box-config-file
@@ -299,10 +303,12 @@ Leave blank normally.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Properties:
- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
- Default: ""
- Required: false
#### --box-access-token
@@ -310,15 +316,19 @@ Box App Primary Access Token
Leave blank normally.
Properties:
- Config: access_token
- Env Var: RCLONE_BOX_ACCESS_TOKEN
- Type: string
- Default: ""
- Required: false
#### --box-box-sub-type
Properties:
- Config: box_sub_type
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
- Type: string
@@ -337,10 +347,12 @@ Here are the advanced options specific to box (Box).
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_BOX_TOKEN
- Type: string
- Default: ""
- Required: false
#### --box-auth-url
@@ -348,10 +360,12 @@ Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_BOX_AUTH_URL
- Type: string
- Default: ""
- Required: false
#### --box-token-url
@@ -359,15 +373,19 @@ Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_BOX_TOKEN_URL
- Type: string
- Default: ""
- Required: false
#### --box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
- Type: string
@@ -377,6 +395,8 @@ Fill in for rclone to use a non root folder as its starting point.
Cutoff for switching to multipart upload (>= 50 MiB).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
@@ -386,6 +406,8 @@ Cutoff for switching to multipart upload (>= 50 MiB).
Max number of times to try committing a multipart file.
Properties:
- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
- Type: int
@@ -395,6 +417,8 @@ Max number of times to try committing a multipart file.
Size of listing chunk 1-1000.
Properties:
- Config: list_chunk
- Env Var: RCLONE_BOX_LIST_CHUNK
- Type: int
@@ -404,17 +428,21 @@ Size of listing chunk 1-1000.
Only show items owned by the login (email address) passed in.
Properties:
- Config: owned_by
- Env Var: RCLONE_BOX_OWNED_BY
- Type: string
- Default: ""
- Required: false
#### --box-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- Type: MultiEncoder


@@ -316,28 +316,34 @@ Remote to cache.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Properties:
- Config: remote
- Env Var: RCLONE_CACHE_REMOTE
- Type: string
- Default: ""
- Required: true
#### --cache-plex-url
The URL of the Plex server.
Properties:
- Config: plex_url
- Env Var: RCLONE_CACHE_PLEX_URL
- Type: string
- Default: ""
- Required: false
#### --cache-plex-username
The username of the Plex user.
Properties:
- Config: plex_username
- Env Var: RCLONE_CACHE_PLEX_USERNAME
- Type: string
- Default: ""
- Required: false
#### --cache-plex-password
@@ -345,10 +351,12 @@ The password of the Plex user.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: plex_password
- Env Var: RCLONE_CACHE_PLEX_PASSWORD
- Type: string
- Default: ""
- Required: false
#### --cache-chunk-size
@@ -358,6 +366,8 @@ Use lower numbers for slower connections. If the chunk size is
changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.
Properties:
- Config: chunk_size
- Env Var: RCLONE_CACHE_CHUNK_SIZE
- Type: SizeSuffix
@@ -376,6 +386,8 @@ How long to cache file structure information (directory listings, file size, tim
If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.
Properties:
- Config: info_age
- Env Var: RCLONE_CACHE_INFO_AGE
- Type: Duration
@@ -395,6 +407,8 @@ The total size that the chunks can take up on the local disk.
If the cache exceeds this value then it will start to delete the
oldest chunks until it goes under this value.
Properties:
- Config: chunk_total_size
- Env Var: RCLONE_CACHE_CHUNK_TOTAL_SIZE
- Type: SizeSuffix
@@ -415,19 +429,23 @@ Here are the advanced options specific to cache (Cache a remote).
The plex token for authentication - auto set normally.
Properties:
- Config: plex_token
- Env Var: RCLONE_CACHE_PLEX_TOKEN
- Type: string
- Default: ""
- Required: false
#### --cache-plex-insecure
Skip all certificate verification when connecting to the Plex server.
Properties:
- Config: plex_insecure
- Env Var: RCLONE_CACHE_PLEX_INSECURE
- Type: string
- Default: ""
- Required: false
#### --cache-db-path
@@ -435,6 +453,8 @@ Directory to store file structure metadata DB.
The remote name is used as the DB file name.
Properties:
- Config: db_path
- Env Var: RCLONE_CACHE_DB_PATH
- Type: string
@@ -451,6 +471,8 @@ This config follows the "--cache-db-path". If you specify a custom
location for "--cache-db-path" and don't specify one for "--cache-chunk-path"
then "--cache-chunk-path" will use the same path as "--cache-db-path".
Properties:
- Config: chunk_path
- Env Var: RCLONE_CACHE_CHUNK_PATH
- Type: string
@@ -460,6 +482,8 @@ then "--cache-chunk-path" will use the same path as "--cache-db-path".
Clear all the cached data for this remote on start.
Properties:
- Config: db_purge
- Env Var: RCLONE_CACHE_DB_PURGE
- Type: bool
@@ -473,6 +497,8 @@ The default value should be ok for most people. If you find that the
cache goes over "cache-chunk-total-size" too often then try to lower
this value to force it to perform cleanups more often.
Properties:
- Config: chunk_clean_interval
- Env Var: RCLONE_CACHE_CHUNK_CLEAN_INTERVAL
- Type: Duration
@@ -490,6 +516,8 @@ cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is
able to provide data, but the experience will be very stuttery.
Properties:
- Config: read_retries
- Env Var: RCLONE_CACHE_READ_RETRIES
- Type: int
@@ -509,6 +537,8 @@ more fluid and data will be available much faster to readers.
setting will adapt to the type of reading performed and the value
specified here will be used as a maximum number of workers to use.
Properties:
- Config: workers
- Env Var: RCLONE_CACHE_WORKERS
- Type: int
@@ -531,6 +561,8 @@ If the hardware permits it, use this feature to provide an overall better
performance during streaming but it can also be disabled if RAM is not
available on the local machine.
Properties:
- Config: chunk_no_memory
- Env Var: RCLONE_CACHE_CHUNK_NO_MEMORY
- Type: bool
@@ -556,6 +588,8 @@ useless but it is available to set for more special cases.
other API calls to the cloud provider like directory listings will
still pass.
Properties:
- Config: rps
- Env Var: RCLONE_CACHE_RPS
- Type: int
@@ -569,6 +603,8 @@ If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the
cache store at the same time during upload.
Properties:
- Config: writes
- Env Var: RCLONE_CACHE_WRITES
- Type: bool
@@ -585,10 +621,12 @@ Specifying a value will enable this feature. Without it, it is
completely disabled and files will be uploaded directly to the cloud
provider
Properties:
- Config: tmp_upload_path
- Env Var: RCLONE_CACHE_TMP_UPLOAD_PATH
- Type: string
- Default: ""
- Required: false
#### --cache-tmp-wait-time
@@ -600,6 +638,8 @@ _cache-tmp-upload-path_ before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer
to start the upload if a queue formed for this purpose.
Properties:
- Config: tmp_wait_time
- Env Var: RCLONE_CACHE_TMP_WAIT_TIME
- Type: Duration
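A hedged example combining the two temporary-upload options described above (remote name, mount point and values are illustrative):

```
rclone mount mycache: /mnt/cache \
  --cache-tmp-upload-path /tmp/rclone-cache-upload \
  --cache-tmp-wait-time 15m
```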
@@ -615,6 +655,8 @@ error.
If you set it to 0 then it will wait forever.
Properties:
- Config: db_wait_time
- Env Var: RCLONE_CACHE_DB_WAIT_TIME
- Type: Duration
@@ -634,7 +676,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
[backend/command](/rc/#backend-command).
### stats
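A hedged invocation of the stats command introduced by this heading (remote name is illustrative):

```
rclone backend stats mycache:
```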

View File

@@ -5,6 +5,138 @@ description: "Rclone Changelog"
# Changelog
## v1.58.0 - 2022-03-18
[See commits](https://github.com/rclone/rclone/compare/v1.57.0...v1.58.0)
* New backends
* [Akamai Netstorage](/netstorage) (Nil Alexandrov)
* [Seagate Lyve](/s3/#lyve), [SeaweedFS](/s3/#seaweedfs), [Storj](/s3/#storj), [RackCorp](/s3/#RackCorp) via s3 backend
* [Storj](/storj/) (renamed from Tardigrade - your old config files will continue working)
* New commands
* [bisync](/bisync/) - experimental bidirectional cloud sync (Ivan Andreev, Chris Nelson)
* New Features
* build
* Add `windows/arm64` build (`rclone mount` not supported yet) (Nick Craig-Wood)
* Raise minimum go version to go1.15 (Nick Craig-Wood)
* config: Allow dot in remote names and improve config editing (albertony)
* dedupe: Add quit as a choice in interactive mode (albertony)
* dlna: Change icons to the newest ones. (Alain Nussbaumer)
* filter: Add [`{{ regexp }}` syntax](/filtering/#regexp) to pattern matches (Nick Craig-Wood)
* fshttp: Add prometheus metrics for HTTP status code (Michał Matczuk)
* hashsum: Support creating hash from data received on stdin (albertony)
* librclone
* Allow empty string or null input instead of empty json object (albertony)
* Add support for mount commands (albertony)
* operations: Add server-side moves to stats (Ole Frost)
* rc: Allow user to disable authentication for web gui (negative0)
* tree: Remove obsolete `--human` replaced by global `--human-readable` (albertony)
* version: Report correct friendly-name for newer Windows 10/11 versions (albertony)
* Bug Fixes
* build
* Fix ARM architecture version in .deb packages after nfpm change (Nick Craig-Wood)
* Hard fork `github.com/jlaffaye/ftp` to fix `go get github.com/rclone/rclone` (Nick Craig-Wood)
* oauthutil: Fix crash when web browser requests `/robots.txt` (Nick Craig-Wood)
* operations: Fix goroutine leak in case of copy retry (Ankur Gupta)
* rc:
* Fix `operations/publiclink` default for `expires` parameter (Nick Craig-Wood)
* Fix missing computation of `transferQueueSize` when summing up statistics group (Carlo Mion)
* Fix missing `StatsInfo` fields in the computation of the group sum (Carlo Mion)
* sync: Fix `--max-duration` so it doesn't retry when the duration is exceeded (Nick Craig-Wood)
* touch: Fix issue where a directory is created instead of a file (albertony)
* Mount
* Add `--devname` to set the device name sent to FUSE for mount display (Nick Craig-Wood)
* VFS
* Add `vfs/stats` remote control to show statistics (Nick Craig-Wood)
* Fix `failed to _ensure cache internal error: downloaders is nil error` (Nick Craig-Wood)
* Fix handling of special characters in file names (Bumsu Hyeon)
* Local
* Fix hash invalidation which caused errors with local crypt mount (Nick Craig-Wood)
* Crypt
* Add `base64` and `base32768` filename encoding options (Max Sum, Sinan Tan)
* Azure Blob
* Implement `--azureblob-upload-concurrency` parameter to speed uploads (Nick Craig-Wood)
* Remove 100MB upper limit on `chunk_size` as it is no longer needed (Nick Craig-Wood)
* Raise `--azureblob-upload-concurrency` to 16 by default (Nick Craig-Wood)
* Fix crash with SAS URL and no container (Nick Craig-Wood)
* Compress
* Fix crash if metadata upload failed (Nick Craig-Wood)
* Fix memory leak (Nick Craig-Wood)
* Drive
* Added `--drive-copy-shortcut-content` (Abhiraj)
* Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
* See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
* Add `--drive-skip-dangling-shortcuts` flag (Nick Craig-Wood)
* When using a link type `--drive-export-formats` shows all doc types (Nick Craig-Wood)
* Dropbox
* Speed up directory listings by specifying 1000 items in a chunk (Nick Craig-Wood)
* Save an API request when at the root (Nick Craig-Wood)
* Fichier
* Implemented About functionality (Gourav T)
* FTP
* Add `--ftp-ask-password` to prompt for password when needed (Borna Butkovic)
* Google Cloud Storage
* Add missing regions (Nick Craig-Wood)
* Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
* See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
* Googlephotos
* Disable OAuth OOB flow (copy a token) due to Google deprecation (Nick Craig-Wood)
* See [the deprecation note](https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob).
* Hasher
* Fix crash on object not found (Nick Craig-Wood)
* Hdfs
* Add file (Move) and directory move (DirMove) support (Andy Jackson)
* HTTP
* Improved recognition of URL pointing to a single file (albertony)
* Jottacloud
* Change API used by recursive list (ListR) (Kim)
* Add support for Tele2 Cloud (Fredric Arklid)
* Koofr
* Add Digistorage service as a Koofr provider. (jaKa)
* Mailru
* Fix int32 overflow on arm32 (Ivan Andreev)
* Onedrive
* Add config option for oauth scope `Sites.Read.All` (Charlie Jiang)
* Minor optimization of quickxorhash (Isaac Levy)
* Add `--onedrive-root-folder-id` flag (Nick Craig-Wood)
* Do not retry on `400 pathIsTooLong` error (ctrl-q)
* Pcloud
* Add support for recursive list (ListR) (Niels van de Weem)
* Fix pre-1970 time stamps (Nick Craig-Wood)
* S3
* Use `ListObjectsV2` for faster listings (Felix Bünemann)
* Fallback to `ListObject` v1 on unsupported providers (Nick Craig-Wood)
* Use the `ETag` on multipart transfers to verify the transfer was OK (Nick Craig-Wood)
* Add `--s3-use-multipart-etag` provider quirk to disable this on unsupported providers (Nick Craig-Wood)
* New Providers
* RackCorp object storage (bbabich)
* Seagate Lyve Cloud storage (Nick Craig-Wood)
* SeaweedFS (Chris Lu)
* Storj Shared gateways (Márton Elek, Nick Craig-Wood)
* Add Wasabi AP Northeast 2 endpoint info (lindwurm)
* Add `GLACIER_IR` storage class (Yunhai Luo)
* Document `Content-MD5` workaround for object-lock enabled buckets (Paulo Martins)
* Fix multipart upload with `--no-head` flag (Nick Craig-Wood)
* Simplify content length processing in s3 with download url (Logeshwaran Murugesan)
* SFTP
* Add rclone to list of supported `md5sum`/`sha1sum` commands to look for (albertony)
* Refactor so we only have one way of running remote commands (Nick Craig-Wood)
* Fix timeout on hashing large files by sending keepalives (Nick Craig-Wood)
* Fix unnecessary seeking when uploading and downloading files (Nick Craig-Wood)
* Update docs on how to create `known_hosts` file (Nick Craig-Wood)
* Storj
* Rename tardigrade backend to storj backend (Nick Craig-Wood)
* Implement server side Move for files (Nick Craig-Wood)
* Update docs to explain differences between s3 and this backend (Elek, Márton)
* Swift
* Fix About so it shows info about the current container only (Nick Craig-Wood)
* Union
* Fix treatment of remotes with `//` in (Nick Craig-Wood)
* Fix deadlock when one part of a multi-upload fails (Nick Craig-Wood)
* Fix eplus policy returned nil (Vitor Arruda)
* Yandex
* Add permanent deletion support (deinferno)
## v1.57.0 - 2021-11-01
[See commits](https://github.com/rclone/rclone/compare/v1.56.0...v1.57.0)

View File

@@ -322,15 +322,19 @@ Remote to chunk/unchunk.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Properties:
- Config: remote
- Env Var: RCLONE_CHUNKER_REMOTE
- Type: string
- Default: ""
- Required: true
#### --chunker-chunk-size
Files larger than chunk size will be split in chunks.
Properties:
- Config: chunk_size
- Env Var: RCLONE_CHUNKER_CHUNK_SIZE
- Type: SizeSuffix
@@ -342,6 +346,8 @@ Choose how chunker handles hash sums.
All modes but "none" require metadata.
Properties:
- Config: hash_type
- Env Var: RCLONE_CHUNKER_HASH_TYPE
- Type: string
@@ -378,6 +384,8 @@ If chunk number has less digits than the number of hashes, it is left-padded by
If there are more digits in the number, they are left as is.
Possible chunk files are ignored if their name does not match given format.
Properties:
- Config: name_format
- Env Var: RCLONE_CHUNKER_NAME_FORMAT
- Type: string
@@ -389,6 +397,8 @@ Minimum valid chunk number. Usually 0 or 1.
By default chunk numbers start from 1.
Properties:
- Config: start_from
- Env Var: RCLONE_CHUNKER_START_FROM
- Type: int
@@ -401,6 +411,8 @@ Format of the metadata object or "none".
By default "simplejson".
Metadata is a small JSON file named after the composite file.
Properties:
- Config: meta_format
- Env Var: RCLONE_CHUNKER_META_FORMAT
- Type: string
@@ -418,6 +430,8 @@ Metadata is a small JSON file named after the composite file.
Choose how chunker should handle files with missing or invalid chunks.
Properties:
- Config: fail_hard
- Env Var: RCLONE_CHUNKER_FAIL_HARD
- Type: bool
@@ -432,6 +446,8 @@ Choose how chunker should handle files with missing or invalid chunks.
Choose how chunker should handle temporary files during transactions.
Properties:
- Config: transactions
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
- Type: string
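A minimal sketch of creating a chunker overlay non-interactively using the option names above (remote name and values are examples only):

```
rclone config create mychunks chunker remote=mys3:bucket chunk_size=100Mi hash_type=sha1
```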

View File

@@ -36,7 +36,8 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone about](/commands/rclone_about/) - Get quota information from the remote.
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
* [rclone backend](/commands/rclone_backend/) - Run a backend specific command.
* [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.

View File

@@ -42,7 +42,7 @@ Applying a `--full` flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
A `--json` flag generates conveniently computer readable output, e.g.
A `--json` flag generates conveniently machine-readable output, e.g.
{
"total": 18253611008,

View File

@@ -1,18 +1,18 @@
---
title: "rclone backend"
description: "Run a backend specific command."
description: "Run a backend-specific command."
slug: rclone_backend
url: /commands/rclone_backend/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/backend/ and as part of making a release run "make commanddocs"
---
# rclone backend
Run a backend specific command.
Run a backend-specific command.
## Synopsis
This runs a backend specific command. The commands themselves (except
This runs a backend-specific command. The commands themselves (except
for "help" and "features") are defined by the backends and you should
see the backend docs for definitions.
@@ -22,7 +22,7 @@ You can discover what commands a backend implements by using
rclone backend help <backendname>
You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations/fsinfo) in the remote control docs
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info).
rclone backend features remote:
@@ -36,7 +36,7 @@ Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3
Note to run these commands on a running backend then see
[backend/command](/rc/#backend/command) in the rc docs.
[backend/command](/rc/#backend-command) in the rc docs.
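For instance, against an rclone instance started with the remote control enabled, something like the following should be equivalent to `rclone backend features remote:` (the remote name is illustrative):

```
rclone rc backend/command command=features fs=remote:
```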
```

View File

@@ -13,7 +13,7 @@ Checks the files in the source and destination match.
Checks the files in the source and destination match. It compares
sizes and hashes (MD5 or SHA1) and logs a report of files which don't
sizes and hashes (MD5 or SHA1) and logs a report of files that don't
match. It doesn't alter the source or destination.
If you supply the `--size-only` flag, it will only compare the sizes not

View File

@@ -15,7 +15,7 @@ Create a new remote with name, type and options.
Create a new remote of `name` with `type` and options. The options
should be passed in pairs of `key` `value` or as `key=value`.
For example to make a swift remote of name myremote using auto config
For example, to make a swift remote of name myremote using auto config
you would do:
rclone config create myremote swift env_auth true
@@ -107,9 +107,8 @@ At the end of the non interactive process, rclone will return a result
with `State` as empty string.
If `--all` is passed then rclone will ask all the config questions,
not just the post config questions. Parameters that are supplied on
the command line or from environment variables are used as defaults
for questions as usual.
not just the post config questions. Any parameters are used as
defaults for questions as usual.
Note that `bin/config.py` in the rclone source implements this protocol
as a readable demonstration.
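Two further hedged examples (remote name, backend and option values are illustrative): `--all` asks every question, while `--non-interactive` drives the question/answer protocol described above and returns a JSON result including `State`:

```
rclone config create mydrive drive --all
rclone config create mydrive drive scope=drive --non-interactive
```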

View File

@@ -16,7 +16,7 @@ Update an existing remote's password. The password
should be passed in pairs of `key` `password` or as `key=password`.
The `password` should be passed in in clear (unobscured).
For example to set password of a remote of name myremote you would do:
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
rclone config password myremote fieldname=mypassword

View File

@@ -15,7 +15,7 @@ Update options in an existing remote.
Update an existing remote's options. The options should be passed in
pairs of `key` `value` or as `key=value`.
For example to update the env_auth field of a remote of name myremote
For example, to update the env_auth field of a remote of name myremote
you would do:
rclone config update myremote env_auth true

View File

@@ -32,7 +32,7 @@ name. It will do this iteratively until all the identically named
directories have been merged.
Next, if deduping by name, for every group of duplicate file names /
hashes, it will delete all but one identical files it finds without
hashes, it will delete all but one identical file it finds without
confirmation. This means that for most duplicated files the `dedupe` command will not be interactive.
`dedupe` considers files to be identical if they have the
@@ -43,7 +43,7 @@ identical if they have the same size (any hash will be ignored). This
can be useful on crypt backends which do not support hashes.
Next rclone will resolve the remaining duplicates. Exactly which
action is taken depends on the dedupe mode. By default rclone will
action is taken depends on the dedupe mode. By default, rclone will
interactively query the user for each one.
**Important**: Since this can cause data loss, test first with the
@@ -74,8 +74,7 @@ Now the `dedupe` session
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
q) Quit
s/k/r/q> k
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 files with duplicate names
@@ -86,8 +85,7 @@ Now the `dedupe` session
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
q) Quit
s/k/r/q> r
s/k/r> r
two-1.txt: renamed from: two.txt
two-2.txt: renamed from: two.txt
two-3.txt: renamed from: two.txt
@@ -112,7 +110,7 @@ Dedupe can be run non interactively using the `--dedupe-mode` flag or by using a
* `--dedupe-mode rename` - removes identical files then renames the rest to be different.
* `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing.
For example to rename all the identically named photos in your Google Photos directory, do
For example, to rename all the identically named photos in your Google Photos directory, do
rclone dedupe --dedupe-mode rename "drive:Google Photos"
@@ -128,7 +126,7 @@ rclone dedupe [mode] remote:path [flags]
## Options
```
--by-hash Find indentical hashes rather than names
--by-hash Find identical hashes rather than names
--dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|largest|smallest|rename (default "interactive")
-h, --help help for dedupe
```
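As a hedged illustration of the flags above (the remote path is an example), a fully non-interactive run that groups by hash and keeps the newest copy of each duplicate:

```
rclone dedupe --by-hash --dedupe-mode newest remote:dir
```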

View File

@@ -21,6 +21,11 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling any hash for any remote.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
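A hedged example of the stdin behaviour described above:

```
# "-" stands for stdin because data is being piped in
echo "hello world" | rclone hashsum MD5 -
```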
Run without a hash to see the list of all supported hashes, e.g.
$ rclone hashsum

View File

@@ -34,17 +34,17 @@ There are several related list commands
* `lsf` to list objects and directories in easy to parse format
* `lsjson` to list objects and directories in JSON format
`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
`ls`,`lsl`,`lsd` are designed to be human-readable.
`lsf` is designed to be human and machine-readable.
`lsjson` is designed to be machine-readable.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
Listing a non-existent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).
the bucket-based remotes).
```

View File

@@ -44,17 +44,17 @@ There are several related list commands
* `lsf` to list objects and directories in easy to parse format
* `lsjson` to list objects and directories in JSON format
`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
`ls`,`lsl`,`lsd` are designed to be human-readable.
`lsf` is designed to be human and machine-readable.
`lsjson` is designed to be machine-readable.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
Listing a non-existent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).
the bucket-based remotes).
```

View File

@@ -59,13 +59,13 @@ can be returned as an empty string if it isn't available on the object
the object and "UNSUPPORTED" if that object does not support that hash
type.
For example to emulate the md5sum command you can use
For example, to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg
$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
7908e352297f0f530b84a756f188baa3 bevajer5jef
cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
@@ -100,7 +100,7 @@ Eg
Note that the --absolute parameter is useful for making lists of files
to pass to an rclone copy with the --files-from-raw flag.
For example to find all the files modified within one day and copy
For example, to find all the files modified within one day and copy
those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
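One plausible way to consume that list afterwards (reusing the paths from the example above; the exact copy invocation is illustrative):

```
rclone copy --files-from-raw new_files /path/to/local remote:path
```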
@@ -117,17 +117,17 @@ There are several related list commands
* `lsf` to list objects and directories in easy to parse format
* `lsjson` to list objects and directories in JSON format
`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
`ls`,`lsl`,`lsd` are designed to be human-readable.
`lsf` is designed to be human and machine-readable.
`lsjson` is designed to be machine-readable.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
Listing a non-existent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).
the bucket-based remotes).
```

View File

@@ -66,7 +66,7 @@ If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt"
will be "subfolder/file.txt", not "remote:path/subfolder/file.txt".
When used without --recursive the Path will always be the same as Name.
If the directory is a bucket in a bucket based backend, then
If the directory is a bucket in a bucket-based backend, then
"IsBucket" will be set to true. This key won't be present unless it is
"true".
@@ -91,17 +91,17 @@ There are several related list commands
* `lsf` to list objects and directories in easy to parse format
* `lsjson` to list objects and directories in JSON format
`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
`ls`,`lsl`,`lsd` are designed to be human-readable.
`lsf` is designed to be human and machine-readable.
`lsjson` is designed to be machine-readable.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
Listing a non-existent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).
the bucket-based remotes).
```

View File

@@ -34,17 +34,17 @@ There are several related list commands
* `lsf` to list objects and directories in easy to parse format
* `lsjson` to list objects and directories in JSON format
`ls`,`lsl`,`lsd` are designed to be human readable.
`lsf` is designed to be human and machine readable.
`lsjson` is designed to be machine readable.
`ls`,`lsl`,`lsd` are designed to be human-readable.
`lsf` is designed to be human and machine-readable.
`lsjson` is designed to be machine-readable.
Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion.
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse.
Listing a non existent directory will produce an error except for
Listing a non-existent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket based remotes).
the bucket-based remotes).
```

View File

@@ -20,6 +20,11 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling MD5 for any remote.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
```
rclone md5sum remote:path [flags]

View File

@@ -75,7 +75,7 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone mount on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open source
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
@@ -245,7 +245,7 @@ applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
The bucket based remotes (e.g. Swift, S3, Google Compute Storage, B2,
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2,
Hubic) do not support the concept of empty directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
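A hedged example of mounting with file caching enabled, as suggested above (remote and mount point are illustrative):

```
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes
```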
@@ -689,6 +689,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)

View File

@@ -11,7 +11,7 @@ Obscure password for use in the rclone config file.
## Synopsis
In the rclone config file, human readable passwords are
In the rclone config file, human-readable passwords are
obscured. Obscuring them is done by encrypting them and writing them
out in base64. This is **not** a secure way of encrypting these
passwords as rclone can decrypt them - it is to prevent "eyedropping"

View File

@@ -349,6 +349,7 @@ rclone serve docker [flags]
--daemon-wait duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
--debug-fuse Debug the FUSE internals - needs -v
--default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
--devname string Set the device name - default is remote:path
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)

View File

@@ -59,6 +59,9 @@ supply --client-ca also.
of that with the CA certificate. --key should be the PEM encoded
private key and --client-ca should be the PEM encoded client
certificate authority certificate.
### Template
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
to be used within the template to serve pages:

View File

@@ -15,7 +15,7 @@ rclone serve restic implements restic's REST backend API
over HTTP. This allows restic to use rclone as a data storage
mechanism for cloud providers that restic does not support directly.
[Restic](https://restic.net/) is a command line program for doing
[Restic](https://restic.net/) is a command-line program for doing
backups.
The server will log errors. Use -v to see access logs.
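A hedged end-to-end sketch (remote path and repository URL are illustrative; the server listens on localhost:8080 by default):

```
rclone serve restic -v remote:backup
# then, from another shell, point restic at it
restic -r rest:http://localhost:8080/ init
```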
@@ -194,7 +194,7 @@ rclone serve restic remote:path [flags]
--max-header-bytes int Maximum size of request header (default 4096)
--pass string Password for authentication
--private-repos Users can only access their private repo
--realm string realm for authentication (default "rclone")
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--stdio Run an HTTP2 server on stdin/stdout

View File

@@ -49,6 +49,17 @@ be used with sshd via ~/.ssh/authorized_keys, for example:
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
On the client you need to set "--transfers 1" when using --stdio.
Otherwise multiple instances of the rclone server are started by OpenSSH
which can lead to "corrupted on transfer" errors. This is the case because
the client chooses indiscriminately which server to send commands to while
the servers all have different views of the state of the file system.
The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being
used. Omitting "restrict" and using --sftp-path-override to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
## VFS - Virtual File System

View File

@@ -501,7 +501,7 @@ rclone serve webdav remote:path [flags]
--pass string Password for authentication
--poll-interval duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Mount read-only
--realm string realm for authentication (default "rclone")
--realm string Realm for authentication (default "rclone")
--server-read-timeout duration Timeout for server reading data (default 1h0m0s)
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--template string User-specified template

View File

@@ -20,6 +20,14 @@ not supported by the remote, no hash will be returned. With the
download flag, the file will be downloaded from the remote and
hashed locally enabling SHA-1 for any remote.
This command can also hash data received on standard input (stdin),
by not passing a remote:path, or by passing a hyphen as remote:path
when there is data to read (if not, the hyphen will be treated literally,
as a relative path).
This command can also hash data received on STDIN, if not passing
a remote:path.
```
rclone sha1sum remote:path [flags]

View File

@@ -25,7 +25,7 @@ For example
└── subdir
├── file4
└── file5
1 directories, 5 files
You can use any of the filtering options with the tree command (e.g.
@@ -49,7 +49,6 @@ rclone tree remote:path [flags]
--dirsfirst List directories before files (-U disables)
--full-path Print the full path prefix for each file
-h, --help help for tree
--human Print the size in a more human readable way.
--level int Descend only level directories deep
-D, --modtime Print the date of last modification.
--noindent Don't print indentation lines

View File

@@ -96,15 +96,19 @@ Here are the standard options specific to compress (Compress a remote).
Remote to compress.
Properties:
- Config: remote
- Env Var: RCLONE_COMPRESS_REMOTE
- Type: string
- Default: ""
- Required: true
#### --compress-mode
Compression mode.
Properties:
- Config: mode
- Env Var: RCLONE_COMPRESS_MODE
- Type: string
@@ -129,6 +133,8 @@ Level -2 uses Huffman encoding only. Only use if you know what you
are doing.
Level 0 turns off compression.
Properties:
- Config: level
- Env Var: RCLONE_COMPRESS_LEVEL
- Type: int
@@ -143,6 +149,8 @@ its size.
Files smaller than this limit will be cached in RAM, files larger than
this limit will be cached on disk.
Properties:
- Config: ram_cache_limit
- Env Var: RCLONE_COMPRESS_RAM_CACHE_LIMIT
- Type: SizeSuffix
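A minimal sketch of creating a compress overlay with the options above (remote name and values are illustrative; gzip is shown as the mode):

```
rclone config create mycompressed compress remote=myremote:compressed mode=gzip level=6
```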

View File

@@ -428,15 +428,19 @@ Remote to encrypt/decrypt.
Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
"myremote:bucket" or maybe "myremote:" (not recommended).
Properties:
- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
- Default: ""
- Required: true
#### --crypt-filename-encryption
How to encrypt the filenames.
Properties:
- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- Type: string
@@ -457,6 +461,8 @@ Option to either encrypt directory names or leave them intact.
NB If filename_encryption is "off" then this option will do nothing.
Properties:
- Config: directory_name_encryption
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
- Type: bool
@@ -473,10 +479,12 @@ Password or pass phrase for encryption.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password
- Env Var: RCLONE_CRYPT_PASSWORD
- Type: string
- Default: ""
- Required: true
#### --crypt-password2
@@ -487,10 +495,12 @@ Should be different to the previous password.
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password2
- Env Var: RCLONE_CRYPT_PASSWORD2
- Type: string
- Default: ""
- Required: false
### Advanced options
@@ -509,6 +519,8 @@ pointing to two different directories with the single changed
parameter and use rclone move to move the files between the crypt
remotes.
Properties:
- Config: server_side_across_configs
- Env Var: RCLONE_CRYPT_SERVER_SIDE_ACROSS_CONFIGS
- Type: bool
@@ -526,6 +538,8 @@ This is so you can work out which encrypted names are which decrypted
names just in case you need to do something with the encrypted file
names, or for debugging purposes.
Properties:
- Config: show_mapping
- Env Var: RCLONE_CRYPT_SHOW_MAPPING
- Type: bool
@@ -535,6 +549,8 @@ names, or for debugging purposes.
Option to either encrypt file data or leave it unencrypted.
Properties:
- Config: no_data_encryption
- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
- Type: bool
@@ -545,6 +561,29 @@ Option to either encrypt file data or leave it unencrypted.
- "false"
- Encrypt file data.
#### --crypt-filename-encoding
How to encode the encrypted filename to text string.
This option could help with shortening the encrypted filename. The
suitable option depends on how your remote counts the filename
length and whether it is case sensitive.
Properties:
- Config: filename_encoding
- Env Var: RCLONE_CRYPT_FILENAME_ENCODING
- Type: string
- Default: "base32"
- Examples:
- "base32"
- Encode using base32. Suitable for all remotes.
- "base64"
- Encode using base64. Suitable for case-sensitive remotes.
- "base32768"
- Encode using base32768. Suitable if your remote counts UTF-16 or
- Unicode codepoints instead of UTF-8 byte length. (E.g. OneDrive)
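Since the encoding normally has to stay consistent for the lifetime of a crypt remote, it is usually set when the remote is created. A hedged config-file sketch (the section name and wrapped remote are illustrative, and the password must be obscured as noted above):

```
[mysecret]
type = crypt
remote = onedrive:vault
filename_encoding = base32768
password = ***obscured with rclone obscure***
```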
## Backend commands
Here are the commands specific to the crypt backend.
@@ -559,7 +598,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
[backend/command](/rc/#backend-command).
### encode
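A hedged example of the encode command named above (remote and file names are illustrative):

```
rclone backend encode mysecret: file1.txt subdir/file2.txt
```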

Some files were not shown because too many files have changed in this diff.