mirror of https://github.com/rclone/rclone.git synced 2026-01-25 13:53:29 +00:00

Compare commits


209 Commits

Author SHA1 Message Date
Nick Craig-Wood
c726ae3afb azureblob: update to SDK 0.15.0 to fix memory leak
This also adjusts for the API breakage in the SetTier call

See: https://github.com/Azure/azure-storage-blob-go/issues/253
See: https://forum.rclone.org/t/ask-for-settings-recommendation-for-azureblob/21505/37
See: https://forum.rclone.org/t/new-azure-sdk-version-0-15-0-memory-leak-fix/30450
2022-04-29 12:45:08 +01:00
ehsantdy
e34c543660 s3: Add ArvanCloud AOS to provider list 2022-04-28 10:42:30 +01:00
Lesmiscore
598364ad0f backend/internetarchive: add support for Internet Archive
This adds support for Internet Archive (archive.org) Items.
2022-04-28 10:25:38 +01:00
Zsolt Ero
211dbe9aee docs: Add --multi-thread-streams note to --transfers. 2022-04-27 18:40:39 +01:00
albertony
4829527dac jottacloud: refactor timestamp handling
Jottacloud has several different apis and endpoints using a mix of different
timestamp formats. In the existing code the list operation (after the recent liststream
implementation) uses the format from golang's time.RFC3339 constant. Uploads (using the
allocate api) use the format from a hard-coded constant whose value is identical to golang's
time.RFC3339. And then there is the classic JFS time format, which is similar to RFC3339
but not identical, using a different constant. The naming is also a bit confusing,
since the term api is used both as a generic term and as a reference to the
newer format used in the api subdomain where the allocate endpoint is located.
This commit refactors these things a bit.
2022-04-27 13:41:39 +02:00
albertony
cc8dde402f jottacloud: refactor SetModTime function
Now using the utility function for deduplication that was newly implemented to
fix an issue with server-side copy. This function uses the original, and generic,
"jfs" api (and its "cphash" feature), instead of the newer "allocate" api dedicated
to uploads. Both apis support the similar deduplication functionality that we rely on for
the SetModTime operation. One advantage of using the jfs variant is that the allocate
api is specialized for uploads: an initial request performs modtime-only changes and
deduplication if possible, but if not possible it creates an incomplete file revision
and returns a special url to be used with a following request to upload the missing content.
In the SetModTime function we only sent the first request, using metadata from the existing
remote file but different timestamps, which led to a modtime-only change. If, for some
reason, this should fail it would leave the incomplete revision behind. Probably not
a problem, but the jfs implementation used with this commit is simpler: a more
"standalone" request which either succeeds or fails without expecting additional
requests.
2022-04-27 12:06:36 +02:00
albertony
2b67ad17aa jottacloud: fix issue with server-side copy when destination is in trash
A strange feature (probably a bug) in the api used by the server-side copy implementation
in the Jottacloud backend is that if the destination file is in trash, the copy request
succeeds but the destination will still be in trash! When this situation occurs in
rclone, the copy command will fail with "Failed to copy: object not found" because
rclone verifies that the file info in the response from the copy request is valid,
and since it is marked as deleted it is treated as invalid.

This commit works around this problem by looking for this situation in the response
from the copy operation, and sending an additional request to a built-in deduplication
endpoint that will restore the file from trash.

Fixes #6112
2022-04-27 12:06:36 +02:00
albertony
6da3522499 jottacloud: minor cleanup of upload response
The UploadResponse type included several properties that are no longer returned
by Jottacloud, and the backend implementation did not use them anyway.
2022-04-27 08:40:34 +02:00
albertony
97606bbdef ncdu: refactor accumulated attributes into a struct 2022-04-26 21:12:52 +02:00
albertony
a15885dd74 docs/ncdu: document flag prefixes 2022-04-26 21:12:52 +02:00
albertony
87c201c92a ncdu: fix issue where dir size is summed when file sizes are -1
Some backends may not provide a size for all objects, and instead
return -1. The existing version included these in directory sums,
with strange results. With this commit rclone ncdu will treat
negative sizes as zero, but adds a new prefix flag '~' with a
description that indicates the shown size is inaccurate.

Fixes #6084
2022-04-26 21:12:52 +02:00
albertony
d77736c21a docs/size: extend documentation of size command 2022-04-26 19:37:15 +02:00
albertony
86bd5f6922 size: warn about inaccurate results when objects have an unknown size 2022-04-26 19:37:15 +02:00
albertony
fe271a4e35 docs/drive: mention that google docs count as empty files in directory totals 2022-04-26 19:34:37 +02:00
Leroy van Logchem
75455d4000 azureblob: Calculate Chunksize/blocksize to stay below maxUploadParts 2022-04-26 17:37:40 +01:00
Nick Craig-Wood
82e24f521f webdav: don't override Referer if user sets it - fixes #6040 2022-04-26 08:58:31 +01:00
Nick Craig-Wood
5605e34f7b mount: fix --devname and fusermount: unknown option 'fsname' when mounting via rc
In this commit

f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display

The --devname parameter was added. However it was soon noticed that
attempting to mount via the rc gave this error:

    mount helper error: fusermount: unknown option 'fsname'
    mount FAILED: fusermount: exit status 1

This was because the DeviceName (and VolumeName) parameter was never
being initialised when the mount was called via the rc.

The fix for this was to refactor the rc interface so it called the
same Mount method as the command line mount which initialised the
DeviceName and VolumeName parameters properly.

This also fixes the cmd/mount tests which were breaking in the same
way but since they aren't normally run on the CI we didn't notice.

Fixes #6044
2022-04-25 12:17:25 +01:00
Nick Craig-Wood
06598531e0 vfs: remove wording which suggests VFS is only for mounting
See: https://forum.rclone.org/t/solved-rclone-serve-read-only/30377/2
2022-04-25 12:17:25 +01:00
albertony
b1d43f8d41 jottacloud: fix scope in token request
The existing code in rclone set the value "offline_access+openid";
when encoded in the request body this becomes "offline_access%2Bopenid". I think
this is wrong, probably an artifact of a "double urlencoding" mixup -
either in rclone or in the jottacloud cli tool version it was sniffed
from. It does work, though: the token received will have the scopes "email
offline_access" in it, and the same is true if I change to only
sending "offline_access" as the scope.

If a proper space-delimited list of "offline_access openid"
is used in the request, the response also includes the openid scope:
"openid email offline_access". I think this is more correct, and this
patch implements it.

See: #6107
2022-04-22 12:52:00 +01:00
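
As an aside, the "+" vs space behaviour described in the scope fix above is just standard form encoding; a minimal Go sketch (not rclone's code) of why the literal "+" came out double-encoded:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // A literal "+" inside the scope value gets percent-encoded...
        fmt.Println(url.Values{"scope": {"offline_access+openid"}}.Encode())
        // scope=offline_access%2Bopenid

        // ...whereas a space-delimited scope list encodes its space as "+".
        fmt.Println(url.Values{"scope": {"offline_access openid"}}.Encode())
        // scope=offline_access+openid
    }
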
Sơn Trần-Nguyễn
b53c38c9fd fs/rc/js: correct RC method names 2022-04-22 12:44:04 +01:00
Nick Craig-Wood
03715f6c6b docs: add encoded characters to encoding table 2022-04-21 12:22:04 +01:00
Nick Craig-Wood
07481396e0 lib/encoder: add Semicolon encoding 2022-04-21 12:02:27 +01:00
Nick Craig-Wood
bab91e4402 putio: ignore URL encoded files as these fail in the integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
fde40319ef koofr: remove digistorage from integration tests as no account 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
94e330d4fa onedrive: remove onedrive China from integration tests as we no longer have an account 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
087543d723 sftp: ignore failing entries in rsync.net integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
6a759d936a storj: fix bucket creation on Move picked up by integration tests 2022-04-15 17:57:15 +01:00
Nick Craig-Wood
7c31240bb8 Add Nick Gooding to contributors 2022-04-15 17:57:15 +01:00
Nick Gooding
25146b4306 googlecloudstorage: add --gcs-no-check-bucket to minimise transactions and perms
Adds a configuration option to the GCS backend to allow skipping the
check of whether a bucket exists before copying an object to it, much like
f406dbb added for S3.
2022-04-14 11:18:36 +01:00
Nick Craig-Wood
240561850b test makefiles: add --chargen flag to make ascii chargen files 2022-04-13 23:07:56 +01:00
Nil Alexandrov
39a1e37441 netstorage: add support contacts to netstorage doc 2022-04-13 23:07:21 +01:00
Nick Craig-Wood
4c02f50ef5 build: update github.com/billziss-gh to github.com/winfsp 2022-04-13 10:18:26 +01:00
Nick Craig-Wood
f583b86334 test makefiles: fix crash if --min-file-size <= --max-file-size 2022-04-12 13:45:20 +01:00
Nick Craig-Wood
118e8e1470 test makefiles: add --sparse, --zero, --pattern and --ascii flags 2022-04-12 13:45:20 +01:00
Nick Craig-Wood
afcea9c72b test makefile: implement new test command to write a single file 2022-04-12 12:57:16 +01:00
Nick Craig-Wood
27176cc6bb config: use os.UserCacheDir from go stdlib to find cache dir #6095
When this code was originally implemented os.UserCacheDir wasn't
public, so this used a copy of the code. This commit replaces that now
out-of-date copy with a call to the now public stdlib function.
2022-04-11 11:44:15 +01:00
Nick Craig-Wood
f1e4b7da7b Add Adrien Rey-Jarthon to contributors 2022-04-11 11:44:15 +01:00
albertony
f065a267f6 docs: fix some links to command pages 2022-04-07 15:50:41 +02:00
Adrien Rey-Jarthon
17f8014909 docs: Note that Scaleway C14 is deprecating SFTP in favor of S3
This updates the documentation to reflect that the new C14 Cold Storage API
works with S3 and no longer with SFTP.

See: https://github.com/rclone/rclone/issues/1080#issuecomment-1082088870
2022-04-05 11:11:52 +01:00
Nick Craig-Wood
8ba04562c3 build: update android go build to 1.18.x and NDK to 23.1.7779620 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
285747b1d1 build: update to go1.18 and make go1.16 the minimum required version 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
7bb8b8f4ba cache: fix bug after golang.org/x/time/rate update
Before this change the cache backend was passing -1 into
rate.NewLimiter to mean unlimited transactions per second.

In a recent update this immediately returns a rate limit error as
might be expected.

This patch uses rate.Inf as indicated by the docs to signal no limits
are required.
2022-04-04 20:35:17 +01:00
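
A minimal sketch of the golang.org/x/time/rate idiom the fix above describes; the helper and its values are illustrative, not rclone's cache code (the actual change appears in the cache backend diff further down):

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    // newLimiter treats a non-positive rps as "no limit" by using rate.Inf,
    // instead of passing -1, which in recent versions of the package causes
    // an immediate rate limit error as described above.
    func newLimiter(rps float64, burst int) *rate.Limiter {
        limit := rate.Inf
        if rps > 0 {
            limit = rate.Limit(rps)
        }
        return rate.NewLimiter(limit, burst)
    }

    func main() {
        l := newLimiter(0, 4)  // unlimited
        fmt.Println(l.Allow()) // true
    }
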
Nick Craig-Wood
59c242bbf6 build: update dependencies
Also:

- dropbox: fix compile after API change in upstream library
2022-04-04 20:35:17 +01:00
Nick Craig-Wood
a2bacd7d3f Add rafma0 to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
9babcc4811 Add GH to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
a0f665ec3c Add KARBOWSKI Piotr to contributors 2022-04-04 20:35:17 +01:00
Nick Craig-Wood
ecdf42c17f Add Tobias Klauser to contributors 2022-04-04 20:35:17 +01:00
rafma0
be9ee1d138 putio: fix multithread download and other ranged requests
Before this change the 206 responses from putio Range requests were being
returned as errors.

This change checks for 200 and 206 in the GET response now.
2022-04-04 11:15:55 +01:00
GH
9e9ead2ac4 onedrive: note that sharepoint also changes web files (.html, .aspx) 2022-04-03 12:43:23 +01:00
KARBOWSKI Piotr
4f78226f8b sftp: Fix OpenSSH 8.8+ RSA keys incompatibility (#6076)
Updates golang.org/x/crypto to v0.0.0-20220331220935-ae2d96664a29.

Fixes the issues with connecting to OpenSSH 8.8+ remotes when the
client uses an RSA key pair, caused by OpenSSH dropping support for the
SHA-1 based ssh-rsa signature.

Bug: https://github.com/rclone/rclone/issues/6076
Bug: https://github.com/golang/go/issues/37278
Signed-off-by: KARBOWSKI Piotr <piotr.karbowski@gmail.com>
2022-04-01 12:49:39 +01:00
Tobias Klauser
54c9c3156c fs/config, lib/terminal: use golang.org/x/term
golang.org/x/crypto/ssh/terminal is deprecated in favor of
golang.org/x/term, see https://pkg.go.dev/golang.org/x/crypto/ssh/terminal

The latter also supports ReadPassword on solaris, so enable the
respective functionality in fs/config for solaris as well.
2022-04-01 12:48:18 +01:00
Nick Craig-Wood
6ecbbf796e netstorage: make levels of headings consistent 2022-03-31 18:11:37 +01:00
Nick Craig-Wood
603e51c43f s3: sync providers in config description with providers 2022-03-31 17:55:54 +01:00
Nick Craig-Wood
ca4671126e Add Berkan Teber to contributors 2022-03-31 17:55:54 +01:00
Berkan Teber
6ea26b508a putio: handle rate limit errors
For rate limit errors, "x-ratelimit-reset" header is now respected.
2022-03-30 12:25:53 +01:00
Nick Craig-Wood
887cccb2c1 filter: fix timezone of --min-age/--max-age from UTC to local as documented
Before this change, if the timezone was omitted in a
--min-age/--max-age time specifier then rclone defaulted to a UTC
timezone.

This is documented as using the local timezone if the time zone
specifier is omitted, which is a much more useful default, and this
patch corrects the implementation to agree with the documentation.

See: https://forum.rclone.org/t/problem-utc-windows-europe-1-summer-problem/29917
2022-03-28 11:47:27 +01:00
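
The UTC-vs-local distinction above boils down to which Go time parsing call is used; a small illustrative sketch with a hypothetical layout, not rclone's actual --min-age parser:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02T15:04:05"
        const value = "2022-03-28T12:00:00" // no timezone specifier

        inUTC, _ := time.Parse(layout, value)                         // interpreted as UTC
        inLocal, _ := time.ParseInLocation(layout, value, time.Local) // interpreted as local time

        fmt.Println(inUTC, inLocal)
    }
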
Nick Craig-Wood
d975196cfa dropbox: fix retries of multipart uploads with incorrect_offset error
Before this fix, rclone retried chunks of multipart uploads. However,
if a chunk had been partially received, dropbox would reply with an
incorrect_offset error which rclone was ignoring.

This patch parses the new offset from the error response and uses it
to adjust the data that rclone sends so it is the same as what dropbox
is expecting.

See: https://forum.rclone.org/t/dropbox-rate-limiting-for-upload/29779
2022-03-25 15:39:01 +00:00
Nick Craig-Wood
1f39b28f49 googlecloudstorage: use the s3 pacer to speed up transactions
This commit switches Google Cloud Storage from the drive pacer to the
s3 pacer. The main difference between them is that the s3 pacer does
not limit transactions in the non-error case. This is appropriate for
a cloud storage backend where you pay for each transaction.
2022-03-25 15:28:59 +00:00
Nick Craig-Wood
2738db22fb pacer: default the Google pacer to a burst of 100 to fix gcs pacing
Before this change the pacer defaulted to a burst of 1 which meant that
it kept being activated unnecessarily.

This affected Google Cloud Storage and Google Photos.

See: https://forum.rclone.org/t/no-traverse-too-slow-with-lot-of-files/29886/12
2022-03-25 15:28:59 +00:00
Nick Craig-Wood
1978ddde73 Add GuoXingbin to contributors 2022-03-25 15:28:59 +00:00
GuoXingbin
c2bfda22ab s3: Add ChinaMobile EOS to provider list
China Mobile Ecloud Elastic Object Storage (EOS) is a cloud object storage service, and is fully compatible with S3.

Fixes #6054
2022-03-24 11:57:00 +00:00
Nick Craig-Wood
d4da9b98d6 vfs: add --vfs-fast-fingerprint for less accurate but faster fingerprints 2022-03-22 16:33:24 +00:00
Nick Craig-Wood
e4f5912294 azureblob: fix lint error with golangci-lint 1.45.0 2022-03-22 16:33:24 +00:00
Nick Craig-Wood
750fffdf71 netstorage: fix unescaped HTML in documentation 2022-03-18 14:40:12 +00:00
Nick Craig-Wood
388e74af52 Start v1.59.0-DEV development 2022-03-18 14:04:22 +00:00
Nick Craig-Wood
f9354fff2f Version v1.58.0 2022-03-18 12:29:54 +00:00
Nick Craig-Wood
ff1f173fc2 build: add bisync.md to docs builder and fix missing tardigrade.md stub 2022-03-18 11:22:23 +00:00
Nick Craig-Wood
f8073a7b63 build: ensure the Go version used for the build is always up to date #6020 2022-03-17 17:14:50 +00:00
Nick Craig-Wood
807f1cedaa hasher: fix crash on object not found
Before this fix `NewObject` could return a wrapped `fs.Object(nil)`
which caused a crash. This was caused by `wrapObject` returning a
`nil` `*Object` which was cast into an `fs.Object`.

This changes the interface of `wrapObject` so it returns an
`fs.Object` instead of a `*Object` and an error which must be checked.
This forces the callers to return a `nil` object rather than an
`fs.Object(nil)`.

See: https://forum.rclone.org/t/panic-in-hasher-when-mounting-with-vfs-cache-and-not-synced-data-in-the-cache/29697/11
2022-03-16 11:30:26 +00:00
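
The underlying gotcha in the hasher fix above is Go's typed-nil-in-interface behaviour; a standalone illustration with hypothetical types, not the hasher code itself:

    package main

    import "fmt"

    type Object struct{ name string }

    func (o *Object) Name() string { return o.name } // dereferences o

    type Namer interface{ Name() string }

    func wrapObject() *Object { return nil }

    func main() {
        var obj Namer = wrapObject() // interface now holds a typed nil: (*Object)(nil)
        fmt.Println(obj == nil)      // false: a nil check on the interface doesn't catch it
        fmt.Println(obj.Name())      // panics with a nil pointer dereference
    }
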
Nick Craig-Wood
bf9c68c88a storj: implement server side Move 2022-03-14 15:44:56 +00:00
Nick Craig-Wood
189cba0fbe s3: add other regions for Lyve and correct Provider name 2022-03-14 15:43:35 +00:00
Nick Craig-Wood
69f726f16c Add Nil Alexandrov to contributors 2022-03-14 15:43:35 +00:00
Nil Alexandrov
65652f7a75 Add Akamai Netstorage as a new backend. 2022-03-09 12:42:22 +00:00
Nil Alexandrov
47f9ab2f56 lib/rest: add support for setting trailers 2022-03-09 12:42:22 +00:00
Nick Craig-Wood
5dd51e6149 union: fix deadlock when one part of a multi-upload fails
Before this fix, rclone would deadlock when uploading two files at
once, if one errored. This caused the other file to block in the multi
reader and never complete.

This fix drains the input buffer on error which allows the other
upload to complete.

See: https://forum.rclone.org/t/union-with-create-policy-all-copy-stuck-when-first-union-fails/29601
2022-03-09 11:30:55 +00:00
Nick Craig-Wood
6a6d254a9f s3: add support for Seagate Lyve Cloud storage 2022-03-09 11:30:55 +00:00
jaKa
fd453f2c7b koofr: renamed digistorage to exclude the romania part. 2022-03-08 22:39:23 +00:00
jaKa
5d06a82c5d koofr: add digistorage service as a koofr provider. 2022-03-08 10:36:18 +00:00
Nick Craig-Wood
847868b4ba ftp: hard fork github.com/jlaffaye/ftp to fix go get
Having a replace directive in go.mod causes "go get
github.com/rclone/rclone" to fail, as discussed in this Go issue:
https://github.com/golang/go/issues/44840

This is apparently how the Go team want go.mod to work, so this commit
hard forks github.com/jlaffaye/ftp into github.com/rclone/ftp so we
can remove the `replace` directive from the go.mod file.

Fixes #5810
2022-03-07 09:55:49 +00:00
Ivan Andreev
38ca178cf3 mailru: fix int32 overflow on arm32 - fixes #6003 2022-03-06 13:33:57 +00:00
Nick Craig-Wood
9427d22f99 Add ctrl-q to contributors 2022-03-06 13:33:26 +00:00
ctrl-q
7b1428a498 onedrive: Do not retry on 400 pathIsTooLong 2022-03-06 13:05:05 +00:00
Nick Craig-Wood
ec72432cec vfs: fix failed to _ensure cache internal error: downloaders is nil error
This error was caused by renaming an open file.

When the file was renamed in the cache, the downloaders were cleared;
however, the downloaders were not re-opened when needed again, and instead
this error was generated.

This fix re-opens the downloaders if they have been closed by renaming
the file.

Fixes #5984
2022-03-03 17:43:29 +00:00
Nick Craig-Wood
2339172df2 pcloud: fix pre-1970 time stamps - fixes #5917
Before this change rclone sent pre-1970 timestamps as negative
numbers. pCloud ignores these and sets them to today's date.

This change sends the timestamps as unsigned 64 bit integers (which is
how the binary protocol sends them) and pCloud accepts the (actually
negative) timestamp like this.
2022-03-03 17:18:40 +00:00
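
A tiny sketch of the integer reinterpretation described above (the pCloud protocol details are rclone's; this only shows the signed-to-unsigned conversion):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ts := time.Date(1960, 1, 1, 0, 0, 0, 0, time.UTC).Unix() // negative, pre-1970
        wire := uint64(ts)                                       // same 64-bit pattern, sent unsigned
        fmt.Println(ts, wire)
        fmt.Println(int64(wire) == ts) // true: the negative value survives the round trip
    }
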
Nick Craig-Wood
268b808bf8 filter: add {{ regexp }} syntax to pattern matches - fixes #4074
There has been a desire from more advanced rclone users to have regexp
filtering as well as the glob filtering.

This patch adds regexp filtering using the syntax `{{ regexp }}`,
which was previously a syntax error, so the change is backwards compatible.

This means regexps can be used everywhere globs can be used, and that
they can also be mixed with globs in the same pattern, eg `*.{{jpe?g}}`
2022-03-03 17:16:28 +00:00
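
Only the regexp half of the new syntax is sketched here: the pattern inside `{{ }}` is an ordinary regexp, so the `jpe?g` from the example above matches both extensions. How rclone splices it into glob matching is not reproduced in this sketch.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`^jpe?g$`)
        fmt.Println(re.MatchString("jpg"), re.MatchString("jpeg"), re.MatchString("png"))
        // true true false
    }
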
Nick Craig-Wood
74898bac3b build: add windows/arm64 build - NB this does not support mount yet #5828 2022-03-03 17:13:32 +00:00
Nick Craig-Wood
e0fbca02d4 compress: fix memory leak - fixes #6013
Before this change we forgot to close the compressor when checking to
see if an object was compressible.
2022-03-03 17:10:21 +00:00
Nick Craig-Wood
21355b4208 sync: Fix --max-duration so it doesn't retry when the duration is exceeded
Before this change, if the --max-duration limit was reached then
rclone would retry the sync as a fatal error wasn't raised.

This checks the deadline and raises a fatal error if necessary at the
end of the sync.

Fixes #6002
2022-03-03 17:08:16 +00:00
Nick Craig-Wood
251b84ff2c sftp: fix unnecessary seeking when uploading and downloading files
This stops the SFTP library issuing out of order writes which fixes
the problems uploading to `serve sftp` from the `sftp` backend.

This was fixed upstream in this pull request: https://github.com/pkg/sftp/pull/482

Fixes #5806
2022-03-03 17:02:35 +00:00
Nick Craig-Wood
537b62917f s3: add --s3-use-multipart-etag provider quirk #5993
Before this change the new multipart upload ETag checking code was
failing in the integration tests with Alibaba OSS.

Apparently Alibaba calculates the ETag in a different way to AWS.

This introduces a new provider quirk with a flag to disable the
checking of the ETag for multipart uploads.

Multipart ETag checking has been enabled for all providers that we can
test and that work, and left disabled for the others.
2022-03-01 16:36:39 +00:00
Nick Craig-Wood
71a784cfa2 compress: fix crash if metadata upload failed - fixes #5994
Before this change the backend attempted to delete a nil object if
the metadata upload failed.
2022-02-28 19:47:52 +00:00
Nick Craig-Wood
8ee0fe9863 serve docker: disable linux tests in CI as they are locking up regularly 2022-02-28 18:01:47 +00:00
Nick Craig-Wood
8f164e4df5 s3: Use the ETag on multipart transfers to verify the transfer was OK
Before this change rclone ignored the ETag on multipart uploads, which missed
an opportunity for a whole-file integrity check.

This adds that check which means that we now check even harder that
multipart uploads have arrived properly.

See #5993
2022-02-25 16:19:03 +00:00
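
For reference, the usual AWS convention is that a multipart ETag is the MD5 of the concatenated binary MD5s of the parts, followed by "-<part count>". A hedged sketch of that calculation (not rclone's code, and as the --s3-use-multipart-etag commit above notes, some providers such as Alibaba OSS compute it differently):

    package main

    import (
        "crypto/md5"
        "fmt"
    )

    // multipartETag follows the common AWS S3 convention:
    // md5 of the concatenated per-part md5 digests, plus "-<number of parts>".
    func multipartETag(parts [][]byte) string {
        var digests []byte
        for _, p := range parts {
            sum := md5.Sum(p)
            digests = append(digests, sum[:]...)
        }
        final := md5.Sum(digests)
        return fmt.Sprintf("%x-%d", final[:], len(parts))
    }

    func main() {
        fmt.Println(multipartETag([][]byte{[]byte("part one"), []byte("part two")}))
    }
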
Nick Craig-Wood
06ecc6511b drive: when using a link type --drive-export-formats show all doc types
Before this change we always hid unexportable document types (eg
Google maps).

After this change, if using --drive-export-formats
url/desktop/link.html/webloc we will show links for all documents,
whether they are exportable or not, since the links to them work
either way.

See: https://forum.rclone.org/t/rclone-mount-for-google-drive-does-not-show-as-web-links-the-google-documents-of-the-google-my-map-gmap-type/29415
2022-02-25 16:08:11 +00:00
Nick Craig-Wood
3529bdec9b sftp: update docs on how to create known_hosts file
This also removes the note on the limitation that only one entry per
host is allowed in the file as it works with many entries provided
they have different key types.

See: https://forum.rclone.org/t/rclone-fails-ssh-handshakes-with-rsync-nets-sftp-when-a-known-hosts-file-is-specified/29206/
2022-02-25 16:08:11 +00:00
partev
486b43f8c7 doc: fix a typo
"and this it may require you to unblock it temporarily" -> "and it may require you to unblock it temporarily"
2022-02-22 21:05:05 +00:00
Nick Craig-Wood
89f0e4df80 swift: fix about so it shows info about the current container only
Before this change `rclone about swift:container` would show aggregate
info about all the containers, not just the one in use.

This causes a problem if container listing is disabled (for example in
the Blomp service).

This fix makes `rclone about swift:container` show only the info about
the given `container`. If aggregate info about all the containers is
required then use `rclone about swift:`.

See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/18
2022-02-22 12:55:57 +00:00
Nick Craig-Wood
399fb5b7fb Add Vincent Murphy to contributors 2022-02-22 12:55:57 +00:00
Vincent Murphy
19f1ed949c docs: Fix broken test_proxy.py link 2022-02-22 12:26:17 +00:00
Nick Craig-Wood
d3a1001094 drive: add --drive-skip-dangling-shortcuts flag - fixes #5949
This flag enables dangling shortcuts to be skipped without an error.
2022-02-22 12:22:21 +00:00
Nick Craig-Wood
dc7e3ea1e3 drive,gcs,googlephotos: disable OAuth OOB flow (copy a token) due to google deprecation
Before this change, rclone supported authorizing for remote systems by
going to a URL and cutting and pasting a token from Google. This is
known as the OAuth out-of-band (oob) flow.

This, while very convenient for users, has been shown to be insecure
and has been deprecated by Google.

https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob

> OAuth out-of-band (OOB) is a legacy flow developed to support native
> clients which do not have a redirect URI like web apps to accept the
> credentials after a user approves an OAuth consent request. The OOB
> flow poses a remote phishing risk and clients must migrate to an
> alternative method to protect against this vulnerability. New
> clients will be unable to use this flow starting on Feb 28, 2022.

This change disables that flow, and forces the user to use the
redirect URL flow. (This is the flow used already for local configs.)

In practice this will mean that instead of cutting and pasting a token
for remote config, it will be necessary to run "rclone authorize"
instead. This is how all the other OAuth backends work so it is a well
tested code path.

Fixes #6000
2022-02-18 12:46:30 +00:00
Nick Craig-Wood
f22b703a51 storj: rename tardigrade backend to storj backend #5616
This adds an alias for backwards compatibility and leaves a stub
documentation page to redirect people to the new documentation.
2022-02-11 11:04:15 +00:00
Nick Craig-Wood
c40129d610 fs: allow backends to have aliases #5616
This allows a backend to have multiple aliases. These aliases are
hidden from `rclone config` and the command line flags are hidden from
the user. However the flags, environment variables and config for the
alias will work just fine.
2022-02-11 11:04:15 +00:00
Nick Craig-Wood
8dc93f1792 Add Márton Elek to contributors 2022-02-11 11:04:03 +00:00
Nick Craig-Wood
f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display
Before this change, the device name was always the remote:path rclone
was configured with. However this can contain sensitive information
and it appears in the `mount` output, so `--devname` allows the user
to configure it.

See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/11
2022-02-09 11:56:43 +00:00
Nick Craig-Wood
9cc50a614b s3: add note about Storj provider bug and workaround
See: https://github.com/storj/gateway-mt/issues/39
2022-02-08 11:40:29 +00:00
Elek, Márton
bcb07a67f6 tardigrade: update docs to explain differences between s3 and this backend
Co-authored-by: Caleb Case <calebcase@gmail.com>
2022-02-08 11:40:29 +00:00
Márton Elek
25ea04f1db s3: add specific provider for Storj Shared gateways
- unsupported features (Copy) are turned off for Storj
- enable urlEncodedListing for Storj provider
- set chunksize to 64Mb
2022-02-08 11:40:29 +00:00
Nick Craig-Wood
06ffd4882d onedrive: add --onedrive-root-folder-id flag #5948
This is to help navigate to difficult-to-find folders in onedrive.
2022-02-07 12:29:36 +00:00
Nick Craig-Wood
19a5e1d63b docs: document --disable-http2 #5253 2022-02-07 12:29:36 +00:00
Nick Craig-Wood
ec88b66dad Add Abhiraj to contributors 2022-02-07 12:29:36 +00:00
Abhiraj
aa2d7f00c2 drive: added --drive-copy-shortcut-content - fixes #4604 2022-02-04 11:37:58 +00:00
Nick Craig-Wood
3e125443aa build: fix ARM architecture version in .deb packages after nfpm change
Fixes #5973
2022-02-03 11:24:06 +00:00
Nick Craig-Wood
3c271b8b1e Add Eng Zer Jun to contributors 2022-02-03 11:24:06 +00:00
Nick Craig-Wood
6d92ba2c6c Add viveknathani to contributors 2022-02-03 11:24:06 +00:00
albertony
c26dc69e1b docs/jottacloud: add note that mime types are not available with --fast-list 2022-02-02 13:12:50 +01:00
albertony
b0de0b4609 docs: include all commands in online help top menu drop-down 2022-02-01 20:40:50 +01:00
albertony
f54641511a librclone: add support for mount commands
Fixes #5661
2022-02-01 19:29:36 +01:00
Eng Zer Jun
8cf76f5e11 test: use T.TempDir to create temporary test directory
The directory created by `T.TempDir` is automatically removed when the
test and all its subtests complete.

Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2022-02-01 11:47:04 +00:00
viveknathani
18c24014da docs/content: describe mandatory fields for drive
Making a client-id for Google Drive requires you to add two more fields
besides the already documented "Application name" field. This commit
documents what should be written for those two fields.

Fixes #5967
2022-02-01 11:42:12 +00:00
Nick Craig-Wood
0ae39bda8d docs: fix and reword --update docs
After discussion on the forum with @bandwidth, this rewords the
--update docs to be correct and easier to understand.

See: https://forum.rclone.org/t/help-understanding-update/28937
2022-02-01 11:07:51 +00:00
Nick Craig-Wood
051685baa1 s3: fix multipart upload with --no-head flag - Fixes #5956
Before this change a multipart upload with the --no-head flag returned
the MD5SUM as a base64 string rather than a hex string as the rest of
rclone was expecting.
2022-01-29 12:48:51 +00:00
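
The difference above is just two encodings of the same 16-byte digest; a quick standalone illustration:

    package main

    import (
        "crypto/md5"
        "encoding/base64"
        "encoding/hex"
        "fmt"
    )

    func main() {
        sum := md5.Sum([]byte("hello"))
        fmt.Println(hex.EncodeToString(sum[:]))                // 5d41402abc4b2a76b9719d911017c592
        fmt.Println(base64.StdEncoding.EncodeToString(sum[:])) // XUFAKrxLKna5cZ2REBfFkg==
    }
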
albertony
07f53aebdc touch: fix issue where directory is created instead of file
Detected on ftp, sftp and Dropbox backends.

Fixes #5952
2022-01-28 20:29:12 +01:00
albertony
bd6d36b3f6 docs: improve standard list of properties for options 2022-01-28 19:43:51 +01:00
Nick Craig-Wood
b168479429 gcs: add missing regions - fixes #5955 2022-01-28 12:34:13 +00:00
Nick Craig-Wood
b447b0cd78 build: upgrade actions runner macos-11 to fix macOS build problems #5951 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
4bd2386632 build: don't specify macos SDK any more as default is good enough #5951
This fixes the build, in particular the error:

    Failed to run ["xcrun" "--sdk" "macosx11.1" "--show-sdk-path"]: exit status 1
2022-01-27 17:33:04 +00:00
Nick Craig-Wood
83b6b62c1b build: disable cmount tests under macOS and the CI since they are locking up
This fixes #5951 and allows the macOS builds to run again

See #5960 for more info.
2022-01-27 17:33:04 +00:00
Nick Craig-Wood
5826cc9d9e Add Paulo Martins to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
252432ae54 Add Gourav T to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
8821629333 Add Isaac Levy to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
a2092a8faf Add Vanessasaurus to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
2b6f4241b4 Add Alain Nussbaumer to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
e3dd16d490 Add Charlie Jiang to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
9e1fd923f6 Add Yunhai Luo to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
3684789858 Add Koopa to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
1ac1dd428a Add Niels van de Weem to contributors 2022-01-27 17:33:04 +00:00
Nick Craig-Wood
65dbd29c22 Add Kim to contributors 2022-01-27 17:33:04 +00:00
albertony
164774d7e1 Add Shmz Ozggrn to contributors 2022-01-27 09:43:42 +01:00
Shmz Ozggrn
507020f408 docs: Use Adaptive Logo in README 2022-01-27 09:35:36 +01:00
albertony
a667e03fc9 http: improved recognition of url pointing to a single file - fixes #5929 2022-01-26 11:41:01 +01:00
albertony
1045344943 http: status string already includes the status code 2022-01-26 11:41:01 +01:00
albertony
5e469db420 docs/http: fix list layout in --http-no-head help
Existing help text ended with a list, but then auto-generated list items
Config, Env Var, Type and Default would be included in the same list.
2022-01-26 11:41:01 +01:00
albertony
946e84d194 http: use string contains instead of index 2022-01-26 11:41:01 +01:00
albertony
162aba60eb http: error strings should not be capitalized 2022-01-26 11:41:01 +01:00
albertony
d8a874c32b Make http tests line ending agnostic 2022-01-26 11:41:01 +01:00
albertony
9c451d9ac6 Fix linting errors 2022-01-26 00:02:17 +01:00
albertony
8f3f24672c docs/serve: move help for template option into separate section 2022-01-25 18:19:21 +01:00
Paulo Martins
0eb7b716d9 s3: document Content-MD5 workaround for object-lock enabled buckets - Fixes #5765 2022-01-25 16:10:57 +00:00
Gourav T
ee9684e60f fichier: implemented about functionality 2022-01-25 15:53:58 +00:00
negative0
e0cbe413e1 rc: Allow user to disable authentication for web gui 2022-01-25 15:52:30 +00:00
albertony
2523dd6220 version: report correct friendly-name for windows 10/11 versions after 2004
Until Windows 10 version 2004 (May 2020) this can be found from the registry entry
ReleaseId; after that we must use the entry DisplayVersion (ReleaseId is stuck at 2009).
Source: https://ss64.com/nt/ver.html
2022-01-24 21:27:42 +01:00
albertony
c504d97017 config: fix display of config choices with empty help text 2022-01-18 20:17:57 +01:00
albertony
b783f09fc6 config: show default and example values in correct input syntax instead of quoted and escaped golang string syntax
See #5551
2022-01-16 14:57:38 +01:00
albertony
a301478a13 config: improved punctuation in initial config prompt 2022-01-16 14:57:38 +01:00
albertony
63b450a2a5 config: minor improvement of help text for encoding option
See #5551
2022-01-16 14:57:38 +01:00
albertony
843b77aaaa docs/ftp: improved default value description of port and username options
See #5551
2022-01-16 14:57:38 +01:00
albertony
3641727edb config: fix issue where required password options had to be re-entered when editing existing remote
See #5551
2022-01-16 14:57:38 +01:00
albertony
38e2f835ed config: fix handling of default, exclusive and required properties of multiple-choice options
Previously an empty input (just pressing enter) was only allowed for multiple-choice
options that did not have the Exclusive property set. With this change the existing
Required property is introduced into the multiple choice handling, so that one can have
Exclusive and Required options where only a value from the list is allowed, and one can
have Exclusive but not Required options where an empty value is accepted but any
non-empty value must still match an item from the list.

Fixes #5549

See #5551
2022-01-16 14:57:38 +01:00
albertony
bd4bbed592 config: remove explicit setting of required property to true for options with a default value
See #5551
2022-01-16 14:57:38 +01:00
albertony
994b501188 config: remove explicit setting of required property to its default value false
See #5551
2022-01-16 14:57:38 +01:00
albertony
dfa9381814 docs/jottacloud: correct reference to temp-dir 2022-01-16 14:34:15 +01:00
albertony
2a85feda4b docs/jottacloud: add note about upload only being supported on jotta device 2022-01-16 14:34:15 +01:00
albertony
ad46af9168 docs/librclone: note that adding -ldflags -s to the build command will reduce size of library file 2022-01-16 14:32:01 +01:00
albertony
2fed02211c docs/librclone: document use from C/C++ on Windows 2022-01-16 14:11:56 +01:00
albertony
237daa8aaf dedupe: add quit as a choice in interactive mode
Fixes #5881
2022-01-14 19:57:48 +01:00
albertony
8aeca6c033 docs: align menu items when icons have different sizes 2022-01-14 17:39:27 +00:00
albertony
fd82876086 librclone: allow empty string or null input instead of empty json object 2022-01-14 17:37:13 +00:00
Isaac Levy
be1a668e95 onedrive: minor optimization of quickxorhash
This patch avoids creating a new slice header in favour of a for loop.

This saves a few instructions!
2022-01-14 17:30:56 +00:00
Vanessasaurus
9d4eab32d8 cmd: fix broken example link in help.go
This link appears to be broken, so here is another reference to (I think) the same file that provides a good example of cobra. We could also do the current commit 8312004f41/cli/cobra.go although it might be better to maintain an up-to-date example.
2022-01-13 16:26:19 +00:00
Alain Nussbaumer
b4ba7b69b8 dlna: change icons to the newest ones. 2022-01-13 16:23:24 +00:00
albertony
deef659aef Add Bumsu Hyeon to contributors 2022-01-13 13:25:20 +01:00
Bumsu Hyeon
4b99e84242 vfs/cache: fix handling of special characters in file names (#5875) 2022-01-13 13:23:25 +01:00
albertony
06bdf7c64c Add Lu Wang to contributors 2022-01-12 21:33:35 +01:00
Lu Wang
e1225b5729 docs/s3: fixed max-age example 2022-01-12 21:31:54 +01:00
albertony
871cc2f62d docs: fix links to rc sections 2022-01-12 19:51:26 +01:00
Charlie Jiang
bc23bf11db onedrive: add config option for oauth scope Sites.Read.All (#5883) 2022-01-10 21:28:19 +08:00
albertony
b55575e622 docs: fix typo 2022-01-03 18:46:40 +01:00
albertony
328f0e7135 docs: fix links to rc debug commands 2021-12-30 21:52:34 +01:00
albertony
a52814eed9 docs: fix links to rc data types section 2021-12-30 20:46:39 +01:00
albertony
071a9e882d docs: capitalization of flag usage strings 2021-12-30 14:07:24 +01:00
albertony
4e2ca3330c tree: remove obsolete --human replaced by global --human-readable - fixes #5868 2021-12-21 20:17:00 +01:00
Yunhai Luo
408d9f3e7a s3: Add GLACIER_IR storage class 2021-12-03 14:46:45 +00:00
Koopa
0681a5c86a lib/rest: process HTML entities within XML
MEGAcmd currently includes escaped HTML4 entities in its XML messages.
This behavior deviates from the XML standard, but currently it prevents
rclone from being able to use the remote.
2021-12-01 16:31:43 +00:00
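
The Go standard library can be told to accept HTML entities while decoding XML; a minimal sketch of that idea (rclone's lib/rest change may well be implemented differently):

    package main

    import (
        "encoding/xml"
        "fmt"
        "strings"
    )

    func main() {
        // "&auml;" is an HTML4 entity, not one of the five predefined XML entities.
        data := `<file><name>M&auml;rchen</name></file>`

        var v struct {
            Name string `xml:"name"`
        }
        d := xml.NewDecoder(strings.NewReader(data))
        d.Entity = xml.HTMLEntity // accept HTML entities in addition to the XML ones
        if err := d.Decode(&v); err != nil {
            panic(err)
        }
        fmt.Println(v.Name) // Märchen
    }
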
Niels van de Weem
df09c3f555 pcloud: add support for recursive list 2021-12-01 15:58:44 +00:00
Kim
c41814fd2d backend:jottacloud change api used by ListR ( --fast-list ) 2021-12-01 14:21:37 +01:00
Nick Craig-Wood
c2557cc432 azureblob: fix crash with SAS URL and no container - fixes #5820
Before this change attempting NewObject on a SAS URL's root would
crash the Azure SDK.

This change detects that situation using the code from this previous fix

f7404f52e7 azureblob: fix crash when listing outside a SAS URL's root - fixes #4851

and returns "object not found" instead.

It also prevents things being uploaded to the root of the SAS URL
which also crashes the Azure SDK.
2021-11-27 16:18:18 +00:00
Nick Craig-Wood
3425726c50 oauthutil: fix crash when web browser requests /robots.txt - fixes #5836
Before this change the oauth webserver would crash if it received a
request to /robots.txt.

This patch makes it ignore (with 404 error) any paths it isn't
expecting.
2021-11-25 12:12:14 +00:00
Nick Craig-Wood
46175a22d8 Add Logeshwaran Murugesan to contributors 2021-11-25 12:11:47 +00:00
Logeshwaran Murugesan
bcf0e15ad7 Simplify content length processing in s3 with download url 2021-11-25 12:03:14 +00:00
Nick Craig-Wood
b91c349cd5 local: fix hash invalidation which caused errors with local crypt mount
Before this fix, if a file was updated but to the same length and
timestamp, then the local backend would return the wrong (cached)
hashes for the object.

This happens regularly on a crypted local disk mount when the VFS
thinks files have been changed but actually their contents are
identical to what was written previously. This is because when files are
uploaded their nonce changes, so the contents of the file change, but
the timestamp and size remain the same because the underlying file didn't
actually change.

This causes errors like this:

    ERROR: file: Failed to copy: corrupted on transfer: md5 crypted
    hash differ "X" vs "Y"

This turned out to be because the local backend wasn't clearing its
cache of hashes when the file was updated.

This fix clears the hash cache for Update and Remove.

It also puts a src and destination in the crypt message to make future
debugging easier.

Fixes #4031
2021-11-24 12:09:34 +00:00
Nick Craig-Wood
d252816706 vfs: add vfs/stats remote control to show statistics - fixes #5816 2021-11-23 18:00:21 +00:00
Nick Craig-Wood
729117af68 Add GGG KILLER to contributors 2021-11-23 18:00:21 +00:00
GGG KILLER
cd4d8d55ec docs: add a note about the B2 download_url format
Currently the B2 docs don't specify which format the download_url
setting should have, and if you input it wrong, there is nothing
in the verbose logs or anywhere else that can let you know that.
2021-11-23 17:57:34 +00:00
Nick Craig-Wood
f26abc89a6 union: fix treatment of remotes with // in
See: https://forum.rclone.org/t/connection-string-with-union-backend-and-a-lot-of-quotes/27577
2021-11-23 17:41:12 +00:00
lindwurm
b5abbe819f s3: Add Wasabi AP Northeast 2 endpoint info
* Wasabi has started to provide an AP Northeast 2 (Osaka) endpoint, so add it to the list
* Rename ap-northeast-1 from "AP Northeast" to "AP Northeast 1 (Tokyo)"

Signed-off-by: lindwurm <lindwurm.q@gmail.com>
2021-11-22 18:02:57 +00:00
Nick Craig-Wood
a351484997 sftp: fix timeout on hashing large files by sending keepalives
Before this fix the SFTP sessions could time out when doing hashes if
they took longer than the --timeout parameter.

This patch sends keepalive packets every minute while a shell command
is running to keep the connection open.

See: https://forum.rclone.org/t/rclone-check-over-sftp-failure-to-calculate-md5-hash-for-large-files/27487
2021-11-22 15:26:29 +00:00
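
A sketch of the keepalive idea using golang.org/x/crypto/ssh; the request name and the ticker-based loop follow common practice, and the package and function names here are illustrative rather than rclone's:

    package sshkeepalive

    import (
        "time"

        "golang.org/x/crypto/ssh"
    )

    // KeepAlive sends a keepalive request on the SSH connection at the given
    // interval until done is closed, so long-running remote commands (such as
    // hashing a large file) do not hit idle timeouts.
    func KeepAlive(conn ssh.Conn, interval time.Duration, done <-chan struct{}) {
        t := time.NewTicker(interval)
        defer t.Stop()
        for {
            select {
            case <-t.C:
                // The reply is ignored; the request only keeps traffic flowing.
                _, _, _ = conn.SendRequest("keepalive@openssh.com", true, nil)
            case <-done:
                return
            }
        }
    }
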
Nick Craig-Wood
099eff8891 sftp: refactor so we only have one way of running remote commands
This also returns errors from running ssh Hash commands which we
didn't do before.
2021-11-22 15:26:29 +00:00
albertony
c4cb167d4a Add rsapkf and Will Holtz to contributors 2021-11-21 19:26:05 +01:00
Will Holtz
38e100ab19 docs/config: more explicit doc for config create --all with params 2021-11-21 19:22:19 +01:00
rsapkf
db95a0d6c3 docs/pcloud: fix typo 2021-11-21 19:16:19 +01:00
Nick Craig-Wood
df07964db3 azureblob: raise --azureblob-upload-concurrency to 16 by default
After speed testing it was discovered that upload speed goes up pretty
much linearly with upload concurrency. This patch changes the default
from 4 to 16, which means that rclone will use 16 * 4M = 64M per
transfer, which is OK even for low-memory devices.

This adds a note that performance may be increased by increasing
upload concurrency.

See: https://forum.rclone.org/t/performance-of-rclone-vs-azcopy/27437/9
2021-11-18 16:09:02 +00:00
Nick Craig-Wood
fbc4c4ad9a azureblob: remove 100MB upper limit on chunk_size as it is no longer needed 2021-11-18 16:09:02 +00:00
Nick Craig-Wood
4454b3e1ae azureblob: implement --azureblob-upload-concurrency parameter to speed uploads
See: https://forum.rclone.org/t/performance-of-rclone-vs-azcopy/27437
2021-11-18 16:08:57 +00:00
Nick Craig-Wood
f9321fccbb Add deinferno to contributors 2021-11-18 15:51:45 +00:00
Ole Frost
3c2252b7c0 fs/operations: add server-side moves to stats
Fixes #5430
2021-11-18 12:20:56 +00:00
Cnly
51c952654c fstests: treat accountUpgradeRequired as success for OneDrive PublicLink 2021-11-17 17:35:17 +00:00
deinferno
80e47be65f yandex: add permanent deletion support 2021-11-17 16:57:41 +00:00
Michał Matczuk
38dc3e93ee fshttp: add prometheus metrics for HTTP status code
This patch adds rclone_http_status_code counter vector labeled by

* host,
* method,
* code.

It makes it possible to see HTTP errors, backoffs, etc.

The Metrics struct is designed for extensibility.
Adding new metrics is a matter of adding them to the Metrics struct and including them in the response handling.

This feature has been discussed in the forum [1].

[1] https://forum.rclone.org/t/prometheus-metrics/14484
2021-11-17 18:38:12 +03:00
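
A sketch of such a counter vector using the standard Prometheus Go client; the metric name and labels mirror the commit message, while the surrounding wiring is illustrative only:

    package main

    import (
        "github.com/prometheus/client_golang/prometheus"
    )

    var httpStatusCodes = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "rclone_http_status_code",
            Help: "Count of HTTP responses by host, method and status code.",
        },
        []string{"host", "method", "code"},
    )

    func main() {
        prometheus.MustRegister(httpStatusCodes)

        // In the real transport this would be called once per HTTP response.
        httpStatusCodes.WithLabelValues("example.com", "GET", "200").Inc()
    }
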
234 changed files with 68851 additions and 43575 deletions

View File

@@ -25,12 +25,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.15', 'go1.16']
job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.16', 'go1.17']
include:
- job_name: linux
os: ubuntu-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@@ -40,8 +40,8 @@ jobs:
deploy: true
- job_name: mac_amd64
os: macOS-latest
go: '1.17.x'
os: macos-11
go: '1.18.x'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@@ -49,15 +49,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macOS-latest
go: '1.17.x'
os: macos-11
go: '1.18.x'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -macos-sdk macosx11.1 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows_amd64
os: windows-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
build_args: '-buildmode exe'
@@ -67,7 +67,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.17.x'
go: '1.18.x'
gotags: cmount
goarch: '386'
cgo: '1'
@@ -78,23 +78,23 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.17.x'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
go: '1.18.x'
build_flags: '-exclude "^(windows/(386|amd64)|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.15
os: ubuntu-latest
go: '1.15.x'
quicktest: true
racequicktest: true
- job_name: go1.16
os: ubuntu-latest
go: '1.16.x'
quicktest: true
racequicktest: true
- job_name: go1.17
os: ubuntu-latest
go: '1.17.x'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
@@ -110,6 +110,7 @@ jobs:
with:
stable: 'false'
go-version: ${{ matrix.go }}
check-latest: true
- name: Set environment variables
shell: bash
@@ -134,7 +135,7 @@ jobs:
run: |
brew update
brew install --cask macfuse
if: matrix.os == 'macOS-latest'
if: matrix.os == 'macos-11'
- name: Install Libraries on Windows
shell: powershell
@@ -245,14 +246,14 @@ jobs:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go 1.16
- name: Set up Go
uses: actions/setup-go@v1
with:
go-version: 1.16
go-version: 1.18.x
# Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
- name: Force NDK version
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;22.1.7171670" | grep -v = || true
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;23.1.7779620" | grep -v = || true
- name: Go module cache
uses: actions/cache@v2
@@ -273,8 +274,8 @@ jobs:
- name: install gomobile
run: |
go get golang.org/x/mobile/cmd/gobind
go get golang.org/x/mobile/cmd/gomobile
go install golang.org/x/mobile/cmd/gobind@latest
go install golang.org/x/mobile/cmd/gomobile@latest
env PATH=$PATH:~/go/bin gomobile init
- name: arm-v7a gomobile build
@@ -283,7 +284,7 @@ jobs:
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
@@ -296,7 +297,7 @@ jobs:
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
@@ -309,7 +310,7 @@ jobs:
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
@@ -322,7 +323,7 @@ jobs:
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/22.1.7171670/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV

View File

@@ -15,7 +15,7 @@ Current active maintainers of rclone are:
| Ivan Andreev | @ivandeex | chunker & mailru backends |
| Max Sum | @Max-Sum | union backend |
| Fred | @creativeprojects | seafile backend |
| Caleb Case | @calebcase | tardigrade backend |
| Caleb Case | @calebcase | storj backend |
**This is a work in progress Draft**

5267
MANUAL.html generated

File diff suppressed because it is too large

5734
MANUAL.md generated

File diff suppressed because it is too large

7550
MANUAL.txt generated

File diff suppressed because it is too large

View File

@@ -1,4 +1,5 @@
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) |
@@ -20,14 +21,18 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Arvan Cloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
@@ -38,6 +43,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
@@ -65,8 +71,8 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)

View File

@@ -1 +1 @@
v1.58.0
v1.59.0

View File

@@ -22,12 +22,14 @@ import (
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/hubic"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/memory"
_ "github.com/rclone/rclone/backend/netstorage"
_ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/pcloud"
@@ -39,9 +41,9 @@ import (
_ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sia"
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

View File

@@ -43,15 +43,14 @@ import (
const (
minSleep = 10 * time.Millisecond
maxSleep = 10 * time.Second
decayConstant = 1 // bigger for slower decay, exponential
maxListChunkSize = 5000 // number of items to read at once
decayConstant = 1 // bigger for slower decay, exponential
maxListChunkSize = 5000 // number of items to read at once
maxUploadParts = 50000 // maximum allowed number of parts/blocks in a multi-part upload
modTimeKey = "mtime"
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
storageDefaultBaseURL = "blob.core.windows.net"
defaultChunkSize = 4 * fs.Mebi
maxChunkSize = 100 * fs.Mebi
uploadConcurrency = 4
defaultAccessTier = azblob.AccessTierNone
maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
// Default storage account, key and blob endpoint for emulator support,
@@ -134,12 +133,33 @@ msi_client_id, or msi_mi_res_id parameters.`,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Upload chunk size (<= 100 MiB).
Help: `Upload chunk size.
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.`,
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.`,
Default: defaultChunkSize,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
If you are uploading small numbers of large files over high-speed
links and these uploads do not fully utilize your bandwidth, then
increasing this may help to speed up the transfers.
In tests, upload speed increases almost linearly with upload
concurrency. For example to fill a gigabit pipe it may be necessary to
raise this to 64. Note that this will use more memory.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.`,
Default: 16,
Advanced: true,
}, {
Name: "list_chunk",
Help: `Size of blob list.
@@ -257,6 +277,7 @@ type Options struct {
Endpoint string `config:"endpoint"`
SASURL string `config:"sas_url"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
ListChunkSize uint `config:"list_chunk"`
AccessTier string `config:"access_tier"`
ArchiveTierDelete bool `config:"archive_tier_delete"`
@@ -416,9 +437,6 @@ func checkUploadChunkSize(cs fs.SizeSuffix) error {
if cs < minChunkSize {
return fmt.Errorf("%s is less than %s", cs, minChunkSize)
}
if cs > maxChunkSize {
return fmt.Errorf("%s is greater than %s", cs, maxChunkSize)
}
return nil
}
@@ -595,7 +613,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
serviceURL = azblob.NewServiceURL(*u, pipeline)
case opt.UseMSI:
var token adal.Token
var userMSI *userMSI = &userMSI{}
var userMSI = &userMSI{}
if len(opt.MSIClientID) > 0 || len(opt.MSIObjectID) > 0 || len(opt.MSIResourceID) > 0 {
// Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
// Validate and ensure exactly one is set. (To do: better validation.)
@@ -1444,6 +1462,10 @@ func (o *Object) clearMetaData() {
// o.size
// o.md5
func (o *Object) readMetaData() (err error) {
container, _ := o.split()
if !o.fs.containerOK(container) {
return fs.ErrorObjectNotFound
}
if !o.modTime.IsZero() {
return nil
}
@@ -1636,7 +1658,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errCantUpdateArchiveTierBlobs
}
}
container, _ := o.split()
container, containerPath := o.split()
if container == "" || containerPath == "" {
return fmt.Errorf("can't upload to root - need a container")
}
err = o.fs.makeContainer(ctx, container)
if err != nil {
return err
@@ -1665,12 +1690,29 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
}
// calculate size of parts/blocks
partSize := int(o.fs.opt.ChunkSize)
uploadParts := int64(maxUploadParts)
if uploadParts < 1 {
uploadParts = 1
} else if uploadParts > maxUploadParts {
uploadParts = maxUploadParts
}
// Adjust partSize until the number of parts/blocks is small enough.
if o.size/int64(partSize) >= uploadParts {
// Calculate partition size rounded up to the nearest MiB
partSize = int((((o.size / uploadParts) >> 20) + 1) << 20)
fs.Debugf(o, "Adjust partSize to %q", partSize)
}
putBlobOptions := azblob.UploadStreamToBlockBlobOptions{
BufferSize: int(o.fs.opt.ChunkSize),
MaxBuffers: uploadConcurrency,
BufferSize: partSize,
MaxBuffers: o.fs.opt.UploadConcurrency,
Metadata: o.meta,
BlobHTTPHeaders: httpHeaders,
TransferManager: o.fs.newPoolWrapper(uploadConcurrency),
TransferManager: o.fs.newPoolWrapper(o.fs.opt.UploadConcurrency),
}
// Don't retry, return a retry error instead
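The block-size adjustment above rounds the part size up to the next whole MiB so that the number of blocks stays under the service limit. A minimal standalone sketch of that arithmetic, assuming a block limit of 50000 (the maxUploadParts constant is not shown in this hunk):

package main

import "fmt"

// adjustedPartSize mirrors the calculation above: if the object would need
// more than maxParts blocks at the configured chunk size, grow the part size
// to the next whole MiB that keeps the block count within the limit.
func adjustedPartSize(objectSize int64, chunkSize int, maxParts int64) int {
	partSize := chunkSize
	if objectSize/int64(partSize) >= maxParts {
		partSize = int((((objectSize / maxParts) >> 20) + 1) << 20)
	}
	return partSize
}

func main() {
	// A 1 TiB object at the default 4 MiB chunk size would need 262144 blocks,
	// so the part size is raised to 21 MiB (prints 22020096).
	fmt.Println(adjustedPartSize(1<<40, 4<<20, 50000))
}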
@@ -1734,7 +1776,7 @@ func (o *Object) SetTier(tier string) error {
blob := o.getBlobReference()
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{}, azblob.RehydratePriorityNone)
return o.fs.shouldRetry(ctx, err)
})


@@ -17,12 +17,10 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MaxChunkSize: maxChunkSize,
},
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{},
})
}


@@ -160,7 +160,15 @@ free egress for data downloaded through the Cloudflare network.
Rclone works with private buckets by sending an "Authorization" header.
If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.`,
Leave blank if you want to use the endpoint provided by Backblaze.
The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".
Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")`,
Advanced: true,
}, {
Name: "download_auth_duration",


@@ -394,7 +394,11 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
notifiedRemotes: make(map[string]bool),
}
cache.PinUntilFinalized(f.Fs, f)
f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)
rps := rate.Inf
if opt.Rps > 0 {
rps = rate.Limit(float64(opt.Rps))
}
f.rateLimiter = rate.NewLimiter(rps, opt.TotalWorkers)
f.plexConnector = &plexConnector{}
if opt.PlexURL != "" {


@@ -401,6 +401,10 @@ func isCompressible(r io.Reader) (bool, error) {
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
@@ -626,9 +630,11 @@ func (f *Fs) putMetadata(ctx context.Context, meta *ObjectMetadata, src fs.Objec
// Put the data
mo, err = put(ctx, metaReader, f.wrapInfo(src, makeMetadataName(src.Remote()), int64(len(data))), options...)
if err != nil {
removeErr := mo.Remove(ctx)
if removeErr != nil {
fs.Errorf(mo, "Failed to remove partially transferred object: %v", err)
if mo != nil {
removeErr := mo.Remove(ctx)
if removeErr != nil {
fs.Errorf(mo, "Failed to remove partially transferred object: %v", err)
}
}
return nil, err
}


@@ -443,7 +443,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v crypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}


@@ -84,7 +84,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
_mimeTypeToExtensionDuplicates = map[string]string{
"application/x-vnd.oasis.opendocument.presentation": ".odp",
@@ -299,6 +299,17 @@ a non root folder as its starting point.
Default: true,
Help: "Send files to the trash instead of deleting permanently.\n\nDefaults to true, namely sending files to the trash.\nUse `--drive-use-trash=false` to delete files permanently instead.",
Advanced: true,
}, {
Name: "copy_shortcut_content",
Default: false,
Help: `Server side copy contents of shortcuts instead of the shortcut.
When doing server side copies, normally rclone will copy shortcuts as
shortcuts.
If this flag is used then rclone will copy the contents of shortcuts
rather than shortcuts themselves when doing server side copies.`,
Advanced: true,
}, {
Name: "skip_gdocs",
Default: false,
@@ -542,6 +553,14 @@ Google don't document so it may break in the future.
Normally rclone dereferences shortcut files making them appear as if
they are the original file (see [the shortcuts section](#shortcuts)).
If this flag is set then rclone will ignore shortcut files completely.
`,
Advanced: true,
Default: false,
}, {
Name: "skip_dangling_shortcuts",
Help: `If set skip dangling shortcut files.
If this is set then rclone will not show any dangling shortcuts in listings.
`,
Advanced: true,
Default: false,
@@ -578,6 +597,7 @@ type Options struct {
TeamDriveID string `config:"team_drive"`
AuthOwnerOnly bool `config:"auth_owner_only"`
UseTrash bool `config:"use_trash"`
CopyShortcutContent bool `config:"copy_shortcut_content"`
SkipGdocs bool `config:"skip_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
@@ -604,6 +624,7 @@ type Options struct {
StopOnUploadLimit bool `config:"stop_on_upload_limit"`
StopOnDownloadLimit bool `config:"stop_on_download_limit"`
SkipShortcuts bool `config:"skip_shortcuts"`
SkipDanglingShortcuts bool `config:"skip_dangling_shortcuts"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -906,6 +927,11 @@ OUTER:
if err != nil {
return false, fmt.Errorf("list: %w", err)
}
// leave the dangling shortcut out of the listings
// we've already logged about the dangling shortcut in resolveShortcut
if f.opt.SkipDanglingShortcuts && item.MimeType == shortcutMimeTypeDangling {
continue
}
}
// Check the case of items is correct since
// the `=` operator is case insensitive.
@@ -1571,6 +1597,15 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
}
}
// If using a link type export and a more specific export
// hasn't been found all docs should be exported
for _, _extension := range f.exportExtensions {
_mimeType := mime.TypeByExtension(_extension)
if isLinkMimeType(_mimeType) {
return _extension, _mimeType, true
}
}
// else return empty
return "", "", isDocument
}
@@ -1581,6 +1616,14 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", "", false)
func (f *Fs) findExportFormat(ctx context.Context, item *drive.File) (extension, filename, mimeType string, isDocument bool) {
// If item has MD5 sum it is a file stored on drive
if item.Md5Checksum != "" {
return
}
// Folders can't be documents
if item.MimeType == driveFolderType {
return
}
extension, mimeType, isDocument = f.findExportFormatByMimeType(ctx, item.MimeType)
if extension != "" {
filename = item.Name + extension
@@ -2374,9 +2417,16 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
createInfo.Description = ""
}
// get the ID of the thing to copy - this is the shortcut if available
// get the ID of the thing to copy
// copy the contents if CopyShortcutContent
// else copy the shortcut only
id := shortcutID(srcObj.id)
if f.opt.CopyShortcutContent {
id = actualID(srcObj.id)
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Copy(id, createInfo).


@@ -422,11 +422,7 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
require.NoError(t, err)
o := obj.(*Object)
dir, err := ioutil.TempDir("", "rclone-drive-copyid-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(dir)
}()
dir := t.TempDir()
checkFile := func(name string) {
filePath := filepath.Join(dir, name)
@@ -491,19 +487,11 @@ func (f *Fs) InternalTestAgeQuery(t *testing.T) {
subFs, isDriveFs := subFsResult.(*Fs)
require.True(t, isDriveFs)
tempDir1, err := ioutil.TempDir("", "rclone-drive-agequery1-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(tempDir1)
}()
tempDir1 := t.TempDir()
tempFs1, err := fs.NewFs(defCtx, tempDir1)
require.NoError(t, err)
tempDir2, err := ioutil.TempDir("", "rclone-drive-agequery2-test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(tempDir2)
}()
tempDir2 := t.TempDir()
tempFs2, err := fs.NewFs(defCtx, tempDir2)
require.NoError(t, err)


@@ -1650,13 +1650,37 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
}
chunk := readers.NewRepeatableLimitReaderBuffer(in, buf, chunkSize)
skip := int64(0)
err = o.fs.pacer.Call(func() (bool, error) {
// seek to the start in case this is a retry
if _, err = chunk.Seek(0, io.SeekStart); err != nil {
return false, nil
if _, err = chunk.Seek(skip, io.SeekStart); err != nil {
return false, err
}
err = o.fs.srv.UploadSessionAppendV2(&appendArg, chunk)
// after session is started, we retry everything
if err != nil {
// Check for incorrect offset error and retry with new offset
if uErr, ok := err.(files.UploadSessionAppendV2APIError); ok {
if uErr.EndpointError != nil && uErr.EndpointError.IncorrectOffset != nil {
correctOffset := uErr.EndpointError.IncorrectOffset.CorrectOffset
delta := int64(correctOffset) - int64(cursor.Offset)
skip += delta
what := fmt.Sprintf("incorrect offset error received: sent %d, need %d, skip %d", cursor.Offset, correctOffset, skip)
if skip < 0 {
return false, fmt.Errorf("can't seek backwards to correct offset: %s", what)
} else if skip == chunkSize {
fs.Debugf(o, "%s: chunk received OK - continuing", what)
return false, nil
} else if skip > chunkSize {
// This error should never happen
return false, fmt.Errorf("can't seek forwards by more than a chunk to correct offset: %s", what)
}
// Skip the sent data on next retry
cursor.Offset = uint64(int64(cursor.Offset) + delta)
fs.Debugf(o, "%s: skipping bytes on retry to fix offset", what)
}
}
}
return err != nil, err
})
if err != nil {
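The incorrect-offset handling above comes down to a little arithmetic on the offset the server reports back. A minimal sketch outside the backend, with purely illustrative numbers (in the real code they come from Dropbox's UploadSessionAppendV2 error):

package main

import "fmt"

func main() {
	const chunkSize = int64(4 << 20) // assumed 4 MiB chunk, for illustration only
	sent := int64(8 << 20)           // offset the client believed it was at (cursor.Offset)
	correct := int64(10 << 20)       // correct offset reported by the server

	delta := correct - sent // bytes the server already has beyond our cursor
	skip := delta           // bytes of the current chunk to skip on the retry

	switch {
	case skip < 0:
		fmt.Println("server is behind our cursor - can't seek backwards, give up")
	case skip == chunkSize:
		fmt.Println("whole chunk already received - move on to the next chunk")
	case skip > chunkSize:
		fmt.Println("server is ahead by more than a chunk - give up")
	default:
		fmt.Printf("advance cursor by %d bytes and resend the remaining %d bytes\n", delta, chunkSize-skip)
	}
}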
@@ -1760,7 +1784,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
entry, err = o.uploadChunked(ctx, in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
entry, err = o.fs.srv.Upload(commitInfo, in)
entry, err = o.fs.srv.Upload(&files.UploadArg{CommitInfo: *commitInfo}, in)
return shouldRetry(ctx, err)
})
}


@@ -42,18 +42,15 @@ func init() {
}, {
Help: "If you want to download a shared folder, add this parameter.",
Name: "shared_folder",
Required: false,
Advanced: true,
}, {
Help: "If you want to download a shared file that is password protected, add this parameter.",
Name: "file_password",
Required: false,
Advanced: true,
IsPassword: true,
}, {
Help: "If you want to list the files in a shared folder that is password protected, add this parameter.",
Name: "folder_password",
Required: false,
Advanced: true,
IsPassword: true,
}, {
@@ -517,6 +514,32 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/user/info.cgi",
ContentType: "application/json",
}
var accountInfo AccountInfo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, nil, &accountInfo)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to read user info: %w", err)
}
// FIXME max upload size would be useful to use in Update
usage = &fs.Usage{
Used: fs.NewUsageValue(accountInfo.ColdStorage), // bytes in use
Total: fs.NewUsageValue(accountInfo.AvailableColdStorage), // bytes total
Free: fs.NewUsageValue(accountInfo.AvailableColdStorage - accountInfo.ColdStorage), // bytes free
}
return usage, nil
}
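A quick worked example of the usage mapping above (numbers hypothetical): with available_cold_storage = 2,000,000,000,000 and cold_storage = 500,000,000,000, rclone would report Total 2 TB, Used 0.5 TB and Free 1.5 TB.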
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
o, err := f.NewObject(ctx, remote)


@@ -182,3 +182,34 @@ type FoldersList struct {
Status string `json:"Status"`
SubFolders []Folder `json:"sub_folders"`
}
// AccountInfo is the structure how 1Fichier returns user info
type AccountInfo struct {
StatsDate string `json:"stats_date"`
MailRM string `json:"mail_rm"`
DefaultQuota int64 `json:"default_quota"`
UploadForbidden string `json:"upload_forbidden"`
PageLimit int `json:"page_limit"`
ColdStorage int64 `json:"cold_storage"`
Status string `json:"status"`
UseCDN string `json:"use_cdn"`
AvailableColdStorage int64 `json:"available_cold_storage"`
DefaultPort string `json:"default_port"`
DefaultDomain int `json:"default_domain"`
Email string `json:"email"`
DownloadMenu string `json:"download_menu"`
FTPDID int `json:"ftp_did"`
DefaultPortFiles string `json:"default_port_files"`
FTPReport string `json:"ftp_report"`
OverQuota int64 `json:"overquota"`
AvailableStorage int64 `json:"available_storage"`
CDN string `json:"cdn"`
Offer string `json:"offer"`
SubscriptionEnd string `json:"subscription_end"`
TFA string `json:"2fa"`
AllowedColdStorage int64 `json:"allowed_cold_storage"`
HotStorage int64 `json:"hot_storage"`
DefaultColdStorageQuota int64 `json:"default_cold_storage_quota"`
FTPMode string `json:"ftp_mode"`
RUReport string `json:"ru_report"`
}


@@ -15,7 +15,7 @@ import (
"sync"
"time"
"github.com/jlaffaye/ftp"
"github.com/rclone/ftp"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/config"
@@ -52,11 +52,13 @@ func init() {
Help: "FTP host to connect to.\n\nE.g. \"ftp.example.com\".",
Required: true,
}, {
Name: "user",
Help: "FTP username, leave blank for current username, " + currentUser + ".",
Name: "user",
Help: "FTP username.",
Default: currentUser,
}, {
Name: "port",
Help: "FTP port, leave blank to use default (21).",
Name: "port",
Help: "FTP port number.",
Default: 21,
}, {
Name: "pass",
Help: "FTP password.",


@@ -65,7 +65,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
)
@@ -182,15 +182,30 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "asia-northeast1",
Help: "Tokyo",
}, {
Value: "asia-northeast2",
Help: "Osaka",
}, {
Value: "asia-northeast3",
Help: "Seoul",
}, {
Value: "asia-south1",
Help: "Mumbai",
}, {
Value: "asia-south2",
Help: "Delhi",
}, {
Value: "asia-southeast1",
Help: "Singapore",
}, {
Value: "asia-southeast2",
Help: "Jakarta",
}, {
Value: "australia-southeast1",
Help: "Sydney",
}, {
Value: "australia-southeast2",
Help: "Melbourne",
}, {
Value: "europe-north1",
Help: "Finland",
@@ -206,6 +221,12 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "europe-west4",
Help: "Netherlands",
}, {
Value: "europe-west6",
Help: "Zürich",
}, {
Value: "europe-central2",
Help: "Warsaw",
}, {
Value: "us-central1",
Help: "Iowa",
@@ -221,6 +242,33 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
}, {
Value: "us-west2",
Help: "California",
}, {
Value: "us-west3",
Help: "Salt Lake City",
}, {
Value: "us-west4",
Help: "Las Vegas",
}, {
Value: "northamerica-northeast1",
Help: "Montréal",
}, {
Value: "northamerica-northeast2",
Help: "Toronto",
}, {
Value: "southamerica-east1",
Help: "São Paulo",
}, {
Value: "southamerica-west1",
Help: "Santiago",
}, {
Value: "asia1",
Help: "Dual region: asia-northeast1 and asia-northeast2.",
}, {
Value: "eur4",
Help: "Dual region: europe-north1 and europe-west4.",
}, {
Value: "nam4",
Help: "Dual region: us-central1 and us-east1.",
}},
}, {
Name: "storage_class",
@@ -247,6 +295,15 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Value: "DURABLE_REDUCED_AVAILABILITY",
Help: "Durable reduced availability storage class",
}},
}, {
Name: "no_check_bucket",
Help: `If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -269,6 +326,7 @@ type Options struct {
BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"`
StorageClass string `config:"storage_class"`
NoCheckBucket bool `config:"no_check_bucket"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -434,7 +492,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
}
f.setRoot(root)
@@ -792,6 +850,14 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
}, nil)
}
// checkBucket creates the bucket if it doesn't exist unless NoCheckBucket is true
func (f *Fs) checkBucket(ctx context.Context, bucket string) error {
if f.opt.NoCheckBucket {
return nil
}
return f.makeBucket(ctx, bucket)
}
// Rmdir deletes the bucket if the fs is at the root
//
// Returns an error if it isn't empty: Error 409: The bucket you tried
@@ -825,7 +891,7 @@ func (f *Fs) Precision() time.Duration {
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
err := f.checkBucket(ctx, dstBucket)
if err != nil {
return nil, err
}
@@ -1075,7 +1141,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
err := o.fs.checkBucket(ctx, bucket)
if err != nil {
return err
}


@@ -69,7 +69,7 @@ var (
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.TitleBarRedirectURL,
RedirectURL: oauthutil.RedirectURL,
}
)


@@ -202,7 +202,11 @@ func (f *Fs) wrapEntries(baseEntries fs.DirEntries) (hashEntries fs.DirEntries,
for _, entry := range baseEntries {
switch x := entry.(type) {
case fs.Object:
hashEntries = append(hashEntries, f.wrapObject(x, nil))
obj, err := f.wrapObject(x, nil)
if err != nil {
return nil, err
}
hashEntries = append(hashEntries, obj)
default:
hashEntries = append(hashEntries, entry) // trash in - trash out
}
@@ -251,7 +255,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
if do := f.Fs.Features().PutStream; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutStream not supported")
}
@@ -261,7 +265,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
if do := f.Fs.Features().PutUnchecked; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutUnchecked not supported")
}
@@ -348,7 +352,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantCopy
}
oResult, err := do(ctx, o.Object, remote)
return f.wrapObject(oResult, err), err
return f.wrapObject(oResult, err)
}
// Move src to this remote using server-side move operations.
@@ -371,7 +375,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dir: false,
fs: f,
})
return f.wrapObject(oResult, nil), nil
return f.wrapObject(oResult, nil)
}
// DirMove moves src, srcRemote to this remote at dstRemote using server-side move operations.
@@ -410,7 +414,7 @@ func (f *Fs) Shutdown(ctx context.Context) (err error) {
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(ctx, remote)
return f.wrapObject(o, err), err
return f.wrapObject(o, err)
}
//
@@ -424,11 +428,15 @@ type Object struct {
}
// Wrap base object into hasher object
func (f *Fs) wrapObject(o fs.Object, err error) *Object {
if err != nil || o == nil {
return nil
func (f *Fs) wrapObject(o fs.Object, err error) (obj fs.Object, outErr error) {
// log.Trace(o, "err=%v", err)("obj=%#v, outErr=%v", &obj, &outErr)
if err != nil {
return nil, err
}
return &Object{Object: o, f: f}
if o == nil {
return nil, fs.ErrorObjectNotFound
}
return &Object{Object: o, f: f}, nil
}
// Fs returns read only access to the Fs that this object is part of


@@ -184,7 +184,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (r io.ReadC
// Put data into the remote path with given modTime and size
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
var (
o *Object
o fs.Object
common hash.Set
rehash bool
hashes hashMap
@@ -210,8 +210,8 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
_ = f.pruneHash(src.Remote())
oResult, err := f.Fs.Put(ctx, wrapIn, src, options...)
o = f.wrapObject(oResult, err)
if o == nil {
o, err = f.wrapObject(oResult, err)
if err != nil {
return nil, err
}
@@ -224,7 +224,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}
if len(hashes) > 0 {
err := o.putHashes(ctx, hashes)
err := o.(*Object).putHashes(ctx, hashes)
fs.Debugf(o, "Applied %d source hashes, err: %v", len(hashes), err)
}
return o, err


@@ -22,9 +22,8 @@ func init() {
Help: "Hadoop name node and port.\n\nE.g. \"namenode:8020\" to connect to host namenode at port 8020.",
Required: true,
}, {
Name: "username",
Help: "Hadoop user name.",
Required: false,
Name: "username",
Help: "Hadoop user name.",
Examples: []fs.OptionExample{{
Value: "root",
Help: "Connect to hdfs as root.",
@@ -36,7 +35,6 @@ func init() {
Enables KERBEROS authentication. Specifies the Service Principal Name
(SERVICE/FQDN) for the namenode. E.g. \"hdfs/namenode.hadoop.docker\"
for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.`,
Required: false,
Advanced: true,
}, {
Name: "data_transfer_protection",
@@ -46,7 +44,6 @@ Specifies whether or not authentication, data signature integrity
checks, and wire encryption are required when communicating with the
datanodes. Possible values are 'authentication', 'integrity' and
'privacy'. Used only with KERBEROS enabled.`,
Required: false,
Examples: []fs.OptionExample{{
Value: "privacy",
Help: "Ensure authentication, integrity and encryption enabled.",


@@ -52,8 +52,7 @@ The input format is comma separated list of key,value pairs. Standard
For example, to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
`,
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.`,
Default: fs.CommaSepList{},
Advanced: true,
}, {
@@ -74,8 +73,9 @@ directories.`,
Advanced: true,
}, {
Name: "no_head",
Help: `Don't use HEAD requests to find file sizes in dir listing.
Help: `Don't use HEAD requests.
HEAD requests are mainly used to find file sizes in dir listing.
If your site is being very slow to load then you can try this option.
Normally rclone does a HEAD request for each potential file in a
directory listing to:
@@ -84,12 +84,9 @@ directory listing to:
- check it really exists
- check to see if it is a directory
If you set this option, rclone will not do the HEAD request. This will mean
- directory listings are much quicker
- rclone won't have the times or sizes of any files
- some files that don't exist may be in the listing
`,
If you set this option, rclone will not do the HEAD request. This will mean
that directory listings are much quicker, but rclone won't have the times or
sizes of any files, and some files that don't exist may be in the listing.`,
Default: false,
Advanced: true,
}},
@@ -133,11 +130,87 @@ func statusError(res *http.Response, err error) error {
}
if res.StatusCode < 200 || res.StatusCode > 299 {
_ = res.Body.Close()
return fmt.Errorf("HTTP Error %d: %s", res.StatusCode, res.Status)
return fmt.Errorf("HTTP Error: %s", res.Status)
}
return nil
}
// getFsEndpoint decides if url is to be considered a file or directory,
// and returns a proper endpoint url to use for the fs.
func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Options) (string, bool) {
// If url ends with '/' it is already a proper url always assumed to be a directory.
if url[len(url)-1] == '/' {
return url, false
}
// If url does not end with '/' we send a HEAD request to decide
// if it is directory or file, and if directory appends the missing
// '/', or if file returns the directory url to parent instead.
createFileResult := func() (string, bool) {
fs.Debugf(nil, "If path is a directory you must add a trailing '/'")
parent, _ := path.Split(url)
return parent, true
}
createDirResult := func() (string, bool) {
fs.Debugf(nil, "To avoid the initial HEAD request add a trailing '/' to the path")
return url + "/", false
}
// If HEAD requests are not allowed we just have to assume it is a file.
if opt.NoHead {
fs.Debugf(nil, "Assuming path is a file as --http-no-head is set")
return createFileResult()
}
// Use a client which doesn't follow redirects so the server
// doesn't redirect http://host/dir to http://host/dir/
noRedir := *client
noRedir.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
req, err := http.NewRequestWithContext(ctx, "HEAD", url, nil)
if err != nil {
fs.Debugf(nil, "Assuming path is a file as HEAD request could not be created: %v", err)
return createFileResult()
}
addHeaders(req, opt)
res, err := noRedir.Do(req)
if err != nil {
fs.Debugf(nil, "Assuming path is a file as HEAD request could not be sent: %v", err)
return createFileResult()
}
if res.StatusCode == http.StatusNotFound {
fs.Debugf(nil, "Assuming path is a directory as HEAD response is it does not exist as a file (%s)", res.Status)
return createDirResult()
}
if res.StatusCode == http.StatusMovedPermanently ||
res.StatusCode == http.StatusFound ||
res.StatusCode == http.StatusSeeOther ||
res.StatusCode == http.StatusTemporaryRedirect ||
res.StatusCode == http.StatusPermanentRedirect {
redir := res.Header.Get("Location")
if redir != "" {
if redir[len(redir)-1] == '/' {
fs.Debugf(nil, "Assuming path is a directory as HEAD response is redirect (%s) to a path that ends with '/': %s", res.Status, redir)
return createDirResult()
}
fs.Debugf(nil, "Assuming path is a file as HEAD response is redirect (%s) to a path that does not end with '/': %s", res.Status, redir)
return createFileResult()
}
fs.Debugf(nil, "Assuming path is a file as HEAD response is redirect (%s) but no location header", res.Status)
return createFileResult()
}
if res.StatusCode < 200 || res.StatusCode > 299 {
// Example is 403 (http.StatusForbidden) for servers not allowing HEAD requests.
fs.Debugf(nil, "Assuming path is a file as HEAD response is an error (%s)", res.Status)
return createFileResult()
}
fs.Debugf(nil, "Assuming path is a file as HEAD response is success (%s)", res.Status)
return createFileResult()
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
@@ -168,37 +241,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
client := fshttp.NewClient(ctx)
var isFile = false
if !strings.HasSuffix(u.String(), "/") {
// Make a client which doesn't follow redirects so the server
// doesn't redirect http://host/dir to http://host/dir/
noRedir := *client
noRedir.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
// check to see if points to a file
req, err := http.NewRequestWithContext(ctx, "HEAD", u.String(), nil)
if err == nil {
addHeaders(req, opt)
res, err := noRedir.Do(req)
err = statusError(res, err)
if err == nil {
isFile = true
}
}
}
newRoot := u.String()
if isFile {
// Point to the parent if this is a file
newRoot, _ = path.Split(u.String())
} else {
if !strings.HasSuffix(newRoot, "/") {
newRoot += "/"
}
}
u, err = url.Parse(newRoot)
endpoint, isFile := getFsEndpoint(ctx, client, u.String(), opt)
fs.Debugf(nil, "Root: %s", endpoint)
u, err = url.Parse(endpoint)
if err != nil {
return nil, err
}
@@ -216,12 +261,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
if isFile {
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
if !strings.HasSuffix(f.endpointURL, "/") {
return nil, errors.New("internal error: url doesn't end with /")
}
return f, nil
}
@@ -297,7 +346,7 @@ func parseName(base *url.URL, name string) (string, error) {
}
// check it doesn't have URL parameters
uStr := u.String()
if strings.Index(uStr, "?") >= 0 {
if strings.Contains(uStr, "?") {
return "", errFoundQuestionMark
}
// check that this is going back to the same host and scheme
@@ -409,7 +458,7 @@ func (f *Fs) readDir(ctx context.Context, dir string) (names []string, err error
return nil, fmt.Errorf("readDir: %w", err)
}
default:
return nil, fmt.Errorf("Can't parse content type %q", contentType)
return nil, fmt.Errorf("can't parse content type %q", contentType)
}
return names, nil
}


@@ -8,8 +8,10 @@ import (
"net/http/httptest"
"net/url"
"os"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
"testing"
"time"
@@ -24,10 +26,11 @@ import (
)
var (
remoteName = "TestHTTP"
testPath = "test"
filesPath = filepath.Join(testPath, "files")
headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"}
remoteName = "TestHTTP"
testPath = "test"
filesPath = filepath.Join(testPath, "files")
headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"}
lineEndSize = 1
)
// prepareServer the test server and return a function to tidy it up afterwards
@@ -35,6 +38,22 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server for test/files
fileServer := http.FileServer(http.Dir(filesPath))
// verify the file path is correct, and also check which line endings
// are used to get sizes right ("\n" except on Windows, but even there
// we may have "\n" or "\r\n" depending on git crlf setting)
fileList, err := ioutil.ReadDir(filesPath)
require.NoError(t, err)
require.Greater(t, len(fileList), 0)
for _, file := range fileList {
if !file.IsDir() {
data, _ := ioutil.ReadFile(filepath.Join(filesPath, file.Name()))
if strings.HasSuffix(string(data), "\r\n") {
lineEndSize = 2
}
break
}
}
// test the headers are there then pass on to fileServer
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
@@ -91,7 +110,7 @@ func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
e = entries[1]
assert.Equal(t, "one%.txt", e.Remote())
assert.Equal(t, int64(6), e.Size())
assert.Equal(t, int64(5+lineEndSize), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
@@ -108,7 +127,7 @@ func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
_, ok = e.(fs.Directory)
assert.True(t, ok)
} else {
assert.Equal(t, int64(41), e.Size())
assert.Equal(t, int64(40+lineEndSize), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
}
@@ -141,7 +160,7 @@ func TestListSubDir(t *testing.T) {
e := entries[0]
assert.Equal(t, "three/underthree.txt", e.Remote())
assert.Equal(t, int64(9), e.Size())
assert.Equal(t, int64(8+lineEndSize), e.Size())
_, ok := e.(*Object)
assert.True(t, ok)
}
@@ -154,7 +173,7 @@ func TestNewObject(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, "four/under four.txt", o.Remote())
assert.Equal(t, int64(9), o.Size())
assert.Equal(t, int64(8+lineEndSize), o.Size())
_, ok := o.(*Object)
assert.True(t, ok)
@@ -187,7 +206,11 @@ func TestOpen(t *testing.T) {
data, err := ioutil.ReadAll(fd)
require.NoError(t, err)
require.NoError(t, fd.Close())
assert.Equal(t, "beetroot\n", string(data))
if lineEndSize == 2 {
assert.Equal(t, "beetroot\r\n", string(data))
} else {
assert.Equal(t, "beetroot\n", string(data))
}
// Test with range request
fd, err = o.Open(context.Background(), &fs.RangeOption{Start: 1, End: 5})
@@ -236,7 +259,7 @@ func TestIsAFileSubDir(t *testing.T) {
e := entries[0]
assert.Equal(t, "underthree.txt", e.Remote())
assert.Equal(t, int64(9), e.Size())
assert.Equal(t, int64(8+lineEndSize), e.Size())
_, ok := e.(*Object)
assert.True(t, ok)
}
@@ -353,3 +376,106 @@ func TestParseCaddy(t *testing.T) {
"v1.36-22-g06ea13a-ssh-agentβ/",
})
}
func TestFsNoSlashRoots(t *testing.T) {
// Test Fs with roots that do not end with '/', the logic that
// decides if url is to be considered a file or directory, based
// on result from a HEAD request.
// Handler for faking HEAD responses with different status codes
headCount := 0
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method == "HEAD" {
headCount++
responseCode, err := strconv.Atoi(path.Base(r.URL.String()))
require.NoError(t, err)
if strings.HasPrefix(r.URL.String(), "/redirect/") {
var redir string
if strings.HasPrefix(r.URL.String(), "/redirect/file/") {
redir = "/redirected"
} else if strings.HasPrefix(r.URL.String(), "/redirect/dir/") {
redir = "/redirected/"
} else {
require.Fail(t, "Redirect test requests must start with '/redirect/file/' or '/redirect/dir/'")
}
http.Redirect(w, r, redir, responseCode)
} else {
http.Error(w, http.StatusText(responseCode), responseCode)
}
}
})
// Make the test server
ts := httptest.NewServer(handler)
defer ts.Close()
// Configure the remote
configfile.Install()
m := configmap.Simple{
"type": "http",
"url": ts.URL,
}
// Test
for i, test := range []struct {
root string
isFile bool
}{
// 2xx success
{"parent/200", true},
{"parent/204", true},
// 3xx redirection Redirect status 301, 302, 303, 307, 308
{"redirect/file/301", true}, // Request is redirected to "/redirected"
{"redirect/dir/301", false}, // Request is redirected to "/redirected/"
{"redirect/file/302", true}, // Request is redirected to "/redirected"
{"redirect/dir/302", false}, // Request is redirected to "/redirected/"
{"redirect/file/303", true}, // Request is redirected to "/redirected"
{"redirect/dir/303", false}, // Request is redirected to "/redirected/"
{"redirect/file/304", true}, // Not really a redirect, handled like 4xx errors (below)
{"redirect/file/305", true}, // Not really a redirect, handled like 4xx errors (below)
{"redirect/file/306", true}, // Not really a redirect, handled like 4xx errors (below)
{"redirect/file/307", true}, // Request is redirected to "/redirected"
{"redirect/dir/307", false}, // Request is redirected to "/redirected/"
{"redirect/file/308", true}, // Request is redirected to "/redirected"
{"redirect/dir/308", false}, // Request is redirected to "/redirected/"
// 4xx client errors
{"parent/403", true}, // Forbidden status (head request blocked)
{"parent/404", false}, // Not found status
} {
for _, noHead := range []bool{false, true} {
var isFile bool
if noHead {
m.Set("no_head", "true")
isFile = true
} else {
m.Set("no_head", "false")
isFile = test.isFile
}
headCount = 0
f, err := NewFs(context.Background(), remoteName, test.root, m)
if noHead {
assert.Equal(t, 0, headCount)
} else {
assert.Equal(t, 1, headCount)
}
if isFile {
assert.ErrorIs(t, err, fs.ErrorIsFile)
} else {
assert.NoError(t, err)
}
var endpoint string
if isFile {
parent, _ := path.Split(test.root)
endpoint = "/" + parent
} else {
endpoint = "/" + test.root + "/"
}
what := fmt.Sprintf("i=%d, root=%q, isFile=%v, noHead=%v", i, test.root, isFile, noHead)
assert.Equal(t, ts.URL+endpoint, f.String(), what)
}
}
}

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
// Test internetarchive filesystem interface
package internetarchive_test
import (
"testing"
"github.com/rclone/rclone/backend/internetarchive"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestIA:lesmi-rclone-test/",
NilObject: (*internetarchive.Object)(nil),
})
}


@@ -8,42 +8,69 @@ import (
)
const (
// default time format for almost all request and responses
timeFormat = "2006-01-02-T15:04:05Z0700"
// the API server seems to use a different format
apiTimeFormat = "2006-01-02T15:04:05Z07:00"
// default time format historically used for all requests and responses.
// Similar to time.RFC3339, but with an extra '-' in front of 'T',
// and no ':' separator in timezone offset. Some newer endpoints have
// moved to proper time.RFC3339 conformant format instead.
jottaTimeFormat = "2006-01-02-T15:04:05Z0700"
)
// Time represents time values in the Jottacloud API. It uses a custom RFC3339 like format.
type Time time.Time
// UnmarshalXML turns XML into a Time
func (t *Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
// unmarshalXML turns XML into a Time
func unmarshalXMLTime(d *xml.Decoder, start xml.StartElement, timeFormat string) (time.Time, error) {
var v string
if err := d.DecodeElement(&v, &start); err != nil {
return err
return time.Time{}, err
}
if v == "" {
*t = Time(time.Time{})
return nil
return time.Time{}, nil
}
newTime, err := time.Parse(timeFormat, v)
if err == nil {
*t = Time(newTime)
return newTime, nil
}
return time.Time{}, err
}
// JottaTime represents time values in the classic API using a custom RFC3339 like format
type JottaTime time.Time
// String returns JottaTime string in Jottacloud classic format
func (t JottaTime) String() string { return time.Time(t).Format(jottaTimeFormat) }
// UnmarshalXML turns XML into a JottaTime
func (t *JottaTime) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
tm, err := unmarshalXMLTime(d, start, jottaTimeFormat)
*t = JottaTime(tm)
return err
}
// MarshalXML turns a Time into XML
func (t *Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
// MarshalXML turns a JottaTime into XML
func (t *JottaTime) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// Return Time string in Jottacloud format
func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// Rfc3339Time represents time values in the newer APIs using standard RFC3339 format
type Rfc3339Time time.Time
// APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// String returns Rfc3339Time string in Jottacloud RFC3339 format
func (t Rfc3339Time) String() string { return time.Time(t).Format(time.RFC3339) }
// UnmarshalXML turns XML into a Rfc3339Time
func (t *Rfc3339Time) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
tm, err := unmarshalXMLTime(d, start, time.RFC3339)
*t = Rfc3339Time(tm)
return err
}
// MarshalXML turns a Rfc3339Time into XML
func (t *Rfc3339Time) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
return e.EncodeElement(t.String(), start)
}
// MarshalJSON turns a Rfc3339Time into JSON
func (t *Rfc3339Time) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf("\"%s\"", t.String())), nil
}
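A small sketch showing the two formats side by side for the same (arbitrary) instant, with the extra '-' before 'T' and the missing ':' in the offset of the classic format; the constant is copied from above:

package main

import (
	"fmt"
	"time"
)

const jottaTimeFormat = "2006-01-02-T15:04:05Z0700"

func main() {
	t := time.Date(2022, 4, 27, 13, 41, 39, 0, time.FixedZone("CEST", 2*60*60))
	fmt.Println(t.Format(jottaTimeFormat)) // 2022-04-27-T13:41:39+0200
	fmt.Println(t.Format(time.RFC3339))    // 2022-04-27T13:41:39+02:00
}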
// LoginToken is struct representing the login token generated in the WebUI
type LoginToken struct {
@@ -122,16 +149,11 @@ type AllocateFileResponse struct {
// UploadResponse after an upload
type UploadResponse struct {
Name string `json:"name"`
Path string `json:"path"`
Kind string `json:"kind"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Created int64 `json:"created"`
Modified int64 `json:"modified"`
Deleted interface{} `json:"deleted"`
Mime string `json:"mime"`
Path string `json:"path"`
ContentID string `json:"content_id"`
Bytes int64 `json:"bytes"`
Md5 string `json:"md5"`
Modified int64 `json:"modified"`
}
// DeviceRegistrationResponse is the response to registering a device
@@ -338,9 +360,9 @@ type JottaFolder struct {
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
Path string `xml:"path"`
CreatedAt Time `xml:"created"`
ModifiedAt Time `xml:"modified"`
Updated Time `xml:"updated"`
CreatedAt JottaTime `xml:"created"`
ModifiedAt JottaTime `xml:"modified"`
Updated JottaTime `xml:"updated"`
Folders []JottaFolder `xml:"folders>folder"`
Files []JottaFile `xml:"files>file"`
}
@@ -365,17 +387,17 @@ GET http://www.jottacloud.com/JFS/<account>/<device>/<mountpoint>/.../<file>
// JottaFile represents a Jottacloud file
type JottaFile struct {
XMLName xml.Name
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
PublicURI string `xml:"publicURI"`
PublicSharePath string `xml:"publicSharePath"`
State string `xml:"currentRevision>state"`
CreatedAt Time `xml:"currentRevision>created"`
ModifiedAt Time `xml:"currentRevision>modified"`
Updated Time `xml:"currentRevision>updated"`
Size int64 `xml:"currentRevision>size"`
MimeType string `xml:"currentRevision>mime"`
MD5 string `xml:"currentRevision>md5"`
Name string `xml:"name,attr"`
Deleted Flag `xml:"deleted,attr"`
PublicURI string `xml:"publicURI"`
PublicSharePath string `xml:"publicSharePath"`
State string `xml:"currentRevision>state"`
CreatedAt JottaTime `xml:"currentRevision>created"`
ModifiedAt JottaTime `xml:"currentRevision>modified"`
Updated JottaTime `xml:"currentRevision>updated"`
Size int64 `xml:"currentRevision>size"`
MimeType string `xml:"currentRevision>mime"`
MD5 string `xml:"currentRevision>md5"`
}
// Error is a custom Error for wrapping Jottacloud error responses


@@ -7,6 +7,7 @@ import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io"
@@ -518,7 +519,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
values.Set("client_id", defaultClientID)
values.Set("grant_type", "password")
values.Set("password", loginToken.AuthToken)
values.Set("scope", "offline_access+openid")
values.Set("scope", "openid offline_access")
values.Set("username", loginToken.Username)
values.Encode()
opts = rest.Opts{
@@ -931,49 +932,102 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return entries, nil
}
// listFileDirFn is called from listFileDir to handle an object.
type listFileDirFn func(fs.DirEntry) error
func parseListRStream(ctx context.Context, r io.Reader, trimPrefix string, filesystem *Fs, callback func(fs.DirEntry) error) error {
// List the objects and directories into entries, from a
// special kind of JottaFolder representing a FileDirList
func (f *Fs) listFileDir(ctx context.Context, remoteStartPath string, startFolder *api.JottaFolder, fn listFileDirFn) error {
pathPrefix := "/" + f.filePathRaw("") // Non-escaped prefix of API paths to be cut off, to be left with the remote path including the remoteStartPath
pathPrefixLength := len(pathPrefix)
startPath := path.Join(pathPrefix, remoteStartPath) // Non-escaped API path up to and including remoteStartPath, to decide if it should be created as a new dir object
startPathLength := len(startPath)
for i := range startFolder.Folders {
folder := &startFolder.Folders[i]
if !f.validFolder(folder) {
return nil
type stats struct {
Folders int `xml:"folders"`
Files int `xml:"files"`
}
var expected, actual stats
type xmlFile struct {
Path string `xml:"path"`
Name string `xml:"filename"`
Checksum string `xml:"md5"`
Size int64 `xml:"size"`
Modified api.Rfc3339Time `xml:"modified"` // Note: Liststream response includes 3 decimal milliseconds, but we ignore them since there is second precision everywhere else
Created api.Rfc3339Time `xml:"created"`
}
type xmlFolder struct {
Path string `xml:"path"`
}
addFolder := func(path string) error {
return callback(fs.NewDir(filesystem.opt.Enc.ToStandardPath(path), time.Time{}))
}
addFile := func(f *xmlFile) error {
return callback(&Object{
hasMetaData: true,
fs: filesystem,
remote: filesystem.opt.Enc.ToStandardPath(path.Join(f.Path, f.Name)),
size: f.Size,
md5: f.Checksum,
modTime: time.Time(f.Modified),
})
}
trimPathPrefix := func(p string) string {
p = strings.TrimPrefix(p, trimPrefix)
p = strings.TrimPrefix(p, "/")
return p
}
uniqueFolders := map[string]bool{}
decoder := xml.NewDecoder(r)
for {
t, err := decoder.Token()
if err != nil {
if err != io.EOF {
return err
}
break
}
folderPath := f.opt.Enc.ToStandardPath(path.Join(folder.Path, folder.Name))
folderPathLength := len(folderPath)
var remoteDir string
if folderPathLength > pathPrefixLength {
remoteDir = folderPath[pathPrefixLength+1:]
if folderPathLength > startPathLength {
d := fs.NewDir(remoteDir, time.Time(folder.ModifiedAt))
err := fn(d)
if err != nil {
return err
}
}
}
for i := range folder.Files {
file := &folder.Files[i]
if f.validFile(file) {
remoteFile := path.Join(remoteDir, f.opt.Enc.ToStandardName(file.Name))
o, err := f.newObjectWithInfo(ctx, remoteFile, file)
if err != nil {
return err
}
err = fn(o)
if err != nil {
switch se := t.(type) {
case xml.StartElement:
switch se.Name.Local {
case "file":
var f xmlFile
if err := decoder.DecodeElement(&f, &se); err != nil {
return err
}
f.Path = trimPathPrefix(f.Path)
actual.Files++
if !uniqueFolders[f.Path] {
uniqueFolders[f.Path] = true
actual.Folders++
if err := addFolder(f.Path); err != nil {
return err
}
}
if err := addFile(&f); err != nil {
return err
}
case "folder":
var f xmlFolder
if err := decoder.DecodeElement(&f, &se); err != nil {
return err
}
f.Path = trimPathPrefix(f.Path)
uniqueFolders[f.Path] = true
actual.Folders++
if err := addFolder(f.Path); err != nil {
return err
}
case "stats":
if err := decoder.DecodeElement(&expected, &se); err != nil {
return err
}
}
}
}
if expected.Folders != actual.Folders ||
expected.Files != actual.Files {
return fmt.Errorf("Invalid result from listStream: expected[%#v] != actual[%#v]", expected, actual)
}
return nil
}
@@ -988,12 +1042,27 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
Path: f.filePath(dir),
Parameters: url.Values{},
}
opts.Parameters.Set("mode", "list")
opts.Parameters.Set("mode", "liststream")
list := walk.NewListRHelper(callback)
var resp *http.Response
var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
resp, err = f.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(ctx, resp, err)
}
// liststream paths are /mountpoint/root/path
// so the returned paths should have /mountpoint/root/ trimmed
// as the caller is expecting path.
trimPrefix := path.Join("/", f.opt.Mountpoint, f.root)
err = parseListRStream(ctx, resp.Body, trimPrefix, f, func(d fs.DirEntry) error {
if d.Remote() == dir {
return nil
}
return list.Add(d)
})
_ = resp.Body.Close()
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -1005,10 +1074,6 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
return fmt.Errorf("couldn't list files: %w", err)
}
list := walk.NewListRHelper(callback)
err = f.listFileDir(ctx, dir, &result, func(entry fs.DirEntry) error {
return list.Add(entry)
})
if err != nil {
return err
}
@@ -1126,6 +1191,45 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// createOrUpdate tries to make remote file match without uploading.
// If the remote file exists, and has matching size and md5, only
// timestamps are updated. If the file does not exist or does
// not match size and md5, but matching content can be constructed
// from deduplication, the file will be updated/created. If the file
// is currently in trash, but can be made to match, it will be
// restored. Returns ErrorObjectNotFound if upload will be necessary
// to get a matching remote file.
func (f *Fs) createOrUpdate(ctx context.Context, file string, modTime time.Time, size int64, md5 string) (info *api.JottaFile, err error) {
opts := rest.Opts{
Method: "POST",
Path: f.filePath(file),
Parameters: url.Values{},
ExtraHeaders: make(map[string]string),
}
opts.Parameters.Set("cphash", "true")
fileDate := api.JottaTime(modTime).String()
opts.ExtraHeaders["JSize"] = strconv.FormatInt(size, 10)
opts.ExtraHeaders["JMd5"] = md5
opts.ExtraHeaders["JCreated"] = fileDate
opts.ExtraHeaders["JModified"] = fileDate
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist, i.e. not matching size and md5, and not possible to make it by deduplication
if apiErr.StatusCode == http.StatusNotFound {
return nil, fs.ErrorObjectNotFound
}
}
return info, nil
}
// copyOrMoves copies or moves directories or files depending on the method parameter
func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *api.JottaFile, err error) {
opts := rest.Opts{
@@ -1169,6 +1273,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
info, err := f.copyOrMove(ctx, "cp", srcObj.filePath(), remote)
// if destination was a trashed file then after a successful copy the copied file is still in trash (bug in api?)
if err == nil && bool(info.Deleted) && !f.opt.TrashedOnly && info.State == "COMPLETED" {
fs.Debugf(src, "Server-side copied to trashed destination, restoring")
info, err = f.createOrUpdate(ctx, remote, srcObj.modTime, srcObj.size, srcObj.md5)
}
if err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
@@ -1470,40 +1580,19 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
return err
}
// prepare allocate request with existing metadata but changed timestamps
var resp *http.Response
var options []fs.OpenOption
opts := rest.Opts{
Method: "POST",
Path: "files/v1/allocate",
Options: options,
ExtraHeaders: make(map[string]string),
}
fileDate := api.Time(modTime).APIString()
var request = api.AllocateFileRequest{
Bytes: o.size,
Created: fileDate,
Modified: fileDate,
Md5: o.md5,
Path: path.Join(o.fs.opt.Mountpoint, o.fs.opt.Enc.FromStandardPath(path.Join(o.fs.root, o.remote))),
}
// send it
var response api.AllocateFileResponse
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(ctx, resp, err)
})
// request check/update with existing metadata and new modtime
// (note that if size/md5 does not match, the file content will
// also be modified if deduplication is possible, i.e. it is
// important to use correct/latest values)
_, err = o.fs.createOrUpdate(ctx, o.remote, modTime, o.size, o.md5)
if err != nil {
if err == fs.ErrorObjectNotFound {
// file was modified (size/md5 changed) between readMetaData and createOrUpdate?
return errors.New("metadata did not match")
}
return err
}
// check response
if response.State != "COMPLETED" {
// could be the file was modified (size/md5 changed) between readMetaData and the allocate request
return errors.New("metadata did not match")
}
// update local metadata
o.modTime = modTime
return nil
@@ -1641,7 +1730,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Options: options,
ExtraHeaders: make(map[string]string),
}
fileDate := api.Time(src.ModTime(ctx)).APIString()
fileDate := api.Rfc3339Time(src.ModTime(ctx)).String()
// the allocate request
var request = api.AllocateFileRequest{


@@ -28,33 +28,57 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "koofr",
Description: "Koofr",
Description: "Koofr, Digi Storage and other Koofr-compatible storage providers",
NewFs: NewFs,
Options: []fs.Option{{
Name: fs.ConfigProvider,
Help: "Choose your storage provider.",
// NOTE if you add a new provider here, then add it in the
// setProviderDefaults() function and update options accordingly
Examples: []fs.OptionExample{{
Value: "koofr",
Help: "Koofr, https://app.koofr.net/",
}, {
Value: "digistorage",
Help: "Digi Storage, https://storage.rcs-rds.ro/",
}, {
Value: "other",
Help: "Any other Koofr API compatible storage service",
}},
}, {
Name: "endpoint",
Help: "The Koofr API endpoint to use.",
Default: "https://app.koofr.net",
Provider: "other",
Required: true,
Advanced: true,
}, {
Name: "mountid",
Help: "Mount ID of the mount to use.\n\nIf omitted, the primary mount is used.",
Required: false,
Default: "",
Advanced: true,
}, {
Name: "setmtime",
Help: "Does the backend support setting modification time.\n\nSet this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.",
Default: true,
Required: true,
Advanced: true,
}, {
Name: "user",
Help: "Your Koofr user name.",
Help: "Your user name.",
Required: true,
}, {
Name: "password",
Help: "Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Help: "Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password).",
Provider: "koofr",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).",
Provider: "digistorage",
IsPassword: true,
Required: true,
}, {
Name: "password",
Help: "Your password for rclone (generate one at your service's settings page).",
Provider: "other",
IsPassword: true,
Required: true,
}, {
@@ -71,6 +95,7 @@ func init() {
// Options represent the configuration of the Koofr backend
type Options struct {
Provider string `config:"provider"`
Endpoint string `config:"endpoint"`
MountID string `config:"mountid"`
User string `config:"user"`
@@ -255,13 +280,38 @@ func (f *Fs) fullPath(part string) string {
return f.opt.Enc.FromStandardPath(path.Join("/", f.root, part))
}
// NewFs constructs a new filesystem given a root path and configuration options
func setProviderDefaults(opt *Options) {
// handle old, provider-less configs
if opt.Provider == "" {
if opt.Endpoint == "" || strings.HasPrefix(opt.Endpoint, "https://app.koofr.net") {
opt.Provider = "koofr"
} else if strings.HasPrefix(opt.Endpoint, "https://storage.rcs-rds.ro") {
opt.Provider = "digistorage"
} else {
opt.Provider = "other"
}
}
// now assign an endpoint
if opt.Provider == "koofr" {
opt.Endpoint = "https://app.koofr.net"
} else if opt.Provider == "digistorage" {
opt.Endpoint = "https://storage.rcs-rds.ro"
}
}
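For illustration only, a Digi Storage remote set up through the new provider option might end up in rclone.conf looking roughly like this (remote name and credentials hypothetical; rclone config stores the password obscured):

[digistorage-example]
type = koofr
provider = digistorage
user = you@example.com
password = <obscured by rclone config>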
// NewFs constructs a new filesystem given a root path and rclone configuration options
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
setProviderDefaults(opt)
return NewFsFromOptions(ctx, name, root, opt)
}
// NewFsFromOptions constructs a new filesystem given a root path and internal configuration options
func NewFsFromOptions(ctx context.Context, name, root string, opt *Options) (ff fs.Fs, err error) {
pass, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, err


@@ -1133,6 +1133,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// Wipe hashes before update
o.clearHashCache()
var symlinkData bytes.Buffer
// If the object is a regular file, create it.
// If it is a translated link, just read in the contents, and
@@ -1295,6 +1298,13 @@ func (o *Object) setMetadata(info os.FileInfo) {
}
}
// clearHashCache wipes any cached hashes for the object
func (o *Object) clearHashCache() {
o.fs.objectMetaMu.Lock()
o.hashes = nil
o.fs.objectMetaMu.Unlock()
}
// Stat an Object into info
func (o *Object) lstat() error {
info, err := o.fs.lstat(o.path)
@@ -1306,6 +1316,7 @@ func (o *Object) lstat() error {
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
o.clearHashCache()
return remove(o.path)
}

View File

@@ -1,6 +1,7 @@
package local
import (
"bytes"
"context"
"io/ioutil"
"os"
@@ -12,6 +13,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/lib/readers"
@@ -166,3 +168,64 @@ func TestSymlinkError(t *testing.T) {
_, err := NewFs(context.Background(), "local", "/", m)
assert.Equal(t, errLinksAndCopyLinks, err)
}
// Test hashes on updating an object
func TestHashOnUpdate(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
const filePath = "file.txt"
when := time.Now()
r.WriteFile(filePath, "content", when)
f := r.Flocal.(*Fs)
// Get the object
o, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
// Test the hash is as we expect
md5, err := o.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, "9a0364b9e99bb480dd25e1f0284c8555", md5)
// Reupload it with different contents but same size and timestamp
var b = bytes.NewBufferString("CONTENT")
src := object.NewStaticObjectInfo(filePath, when, int64(b.Len()), true, nil, f)
err = o.Update(ctx, b, src)
require.NoError(t, err)
// Check the hash is as expected
md5, err = o.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, "45685e95985e20822fb2538a522a5ccf", md5)
}
// Test hashes on deleting an object
func TestHashOnDelete(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
defer r.Finalise()
const filePath = "file.txt"
when := time.Now()
r.WriteFile(filePath, "content", when)
f := r.Flocal.(*Fs)
// Get the object
o, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
// Test the hash is as we expect
md5, err := o.Hash(ctx, hash.MD5)
require.NoError(t, err)
assert.Equal(t, "9a0364b9e99bb480dd25e1f0284c8555", md5)
// Delete the object
require.NoError(t, o.Remove(ctx))
// Test the hash cache is empty
require.Nil(t, o.(*Object).hashes)
// Test the hash returns an error
_, err = o.Hash(ctx, hash.MD5)
require.Error(t, err)
}

View File

@@ -58,7 +58,7 @@ type UserInfoResponse struct {
AutoProlong bool `json:"auto_prolong"`
Basequota int64 `json:"basequota"`
Enabled bool `json:"enabled"`
Expires int `json:"expires"`
Expires int64 `json:"expires"`
Prolong bool `json:"prolong"`
Promocodes struct {
} `json:"promocodes"`
@@ -80,7 +80,7 @@ type UserInfoResponse struct {
FileSizeLimit int64 `json:"file_size_limit"`
Space struct {
BytesTotal int64 `json:"bytes_total"`
BytesUsed int `json:"bytes_used"`
BytesUsed int64 `json:"bytes_used"`
Overquota bool `json:"overquota"`
} `json:"space"`
} `json:"cloud"`

View File

@@ -1572,7 +1572,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
total := info.Body.Cloud.Space.BytesTotal
used := int64(info.Body.Cloud.Space.BytesUsed)
used := info.Body.Cloud.Space.BytesUsed
usage := &fs.Usage{
Total: fs.NewUsageValue(total),

backend/netstorage/netstorage.go (new executable file, 1277 lines): diff suppressed because it is too large.

View File

@@ -0,0 +1,16 @@
package netstorage_test
import (
"testing"
"github.com/rclone/rclone/backend/netstorage"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestnStorage:",
NilObject: (*netstorage.Object)(nil),
})
}

View File

@@ -65,9 +65,12 @@ var (
authPath = "/common/oauth2/v2.0/authorize"
tokenPath = "/common/oauth2/v2.0/token"
scopesWithSitePermission = []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access", "Sites.Read.All"}
scopesWithoutSitePermission = []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access"}
// Description of how to auth for this app for a business account
oauthConfig = &oauth2.Config{
Scopes: []string{"Files.Read", "Files.ReadWrite", "Files.Read.All", "Files.ReadWrite.All", "offline_access", "Sites.Read.All"},
Scopes: scopesWithSitePermission,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectLocalhostURL,
@@ -137,6 +140,26 @@ Note that the chunks will be buffered into memory.`,
Help: "The type of the drive (" + driveTypePersonal + " | " + driveTypeBusiness + " | " + driveTypeSharepoint + ").",
Default: "",
Advanced: true,
}, {
Name: "root_folder_id",
Help: `ID of the root folder.
This isn't normally needed, but in special circumstances you might
know the folder ID that you wish to access but not be able to get
there through a path traversal.
`,
Advanced: true,
}, {
Name: "disable_site_permission",
Help: `Disable the request for Sites.Read.All permission.
If set to true, you will no longer be able to search for a SharePoint site when
configuring drive ID, because rclone will not request Sites.Read.All permission.
Set it to true if your organization did not assign Sites.Read.All permission to the
application and does not allow users to consent to app permission requests on
their own.`,
Default: false,
Advanced: true,
}, {
Name: "expose_onenote_files",
Help: `Set to make OneNote files show up in directory listings.
@@ -374,6 +397,12 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
region, graphURL := getRegionURL(m)
if config.State == "" {
disableSitePermission, _ := m.Get("disable_site_permission")
if disableSitePermission == "true" {
oauthConfig.Scopes = scopesWithoutSitePermission
} else {
oauthConfig.Scopes = scopesWithSitePermission
}
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[region] + authPath,
TokenURL: authEndpoint[region] + tokenPath,
@@ -527,6 +556,8 @@ type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
DriveID string `config:"drive_id"`
DriveType string `config:"drive_type"`
RootFolderID string `config:"root_folder_id"`
DisableSitePermission bool `config:"disable_site_permission"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
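Taken together, the two new options are ordinary config entries. A hypothetical rclone.conf remote using both might look like the sketch below (the drive and folder IDs are placeholders, and the usual auth token entries are omitted):

[business-docs]
type = onedrive
drive_id = b!placeholderDriveID
drive_type = business
root_folder_id = placeholderFolderID
disable_site_permission = true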
@@ -618,6 +649,12 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
retry := false
if resp != nil {
switch resp.StatusCode {
case 400:
if apiErr, ok := err.(*api.Error); ok {
if apiErr.ErrorInfo.InnerError.Code == "pathIsTooLong" {
return false, fserrors.NoRetryError(err)
}
}
case 401:
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
@@ -789,6 +826,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
rootURL := graphAPIEndpoint[opt.Region] + "/v1.0" + "/drives/" + opt.DriveID
if opt.DisableSitePermission {
oauthConfig.Scopes = scopesWithoutSitePermission
} else {
oauthConfig.Scopes = scopesWithSitePermission
}
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[opt.Region] + authPath,
TokenURL: authEndpoint[opt.Region] + tokenPath,
@@ -826,15 +868,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
// Get rootID
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to get root: %w", err)
var rootID = opt.RootFolderID
if rootID == "" {
rootInfo, _, err := f.readMetaDataForPath(ctx, "")
if err != nil {
return nil, fmt.Errorf("failed to get root: %w", err)
}
rootID = rootInfo.GetID()
}
if rootInfo.GetID() == "" {
if rootID == "" {
return nil, errors.New("failed to get root: ID was empty")
}
f.dirCache = dircache.New(root, rootInfo.GetID(), f)
f.dirCache = dircache.New(root, rootID, f)
// Find the current root
err = f.dirCache.FindRoot(ctx, false)
@@ -842,7 +888,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, rootInfo.ID, &tempF)
tempF.dirCache = dircache.New(newRoot, rootID, &tempF)
tempF.root = newRoot
// Make new Fs which is the parent
err = tempF.dirCache.FindRoot(ctx, false)

View File

@@ -136,7 +136,8 @@ func (q *quickXorHash) Write(p []byte) (n int, err error) {
func (q *quickXorHash) checkSum() (h [Size]byte) {
// Output the data as little endian bytes
ph := 0
for _, d := range q.data[:len(q.data)-1] {
for i := 0; i < len(q.data)-1; i++ {
d := q.data[i]
_ = h[ph+7] // bounds check
h[ph+0] = byte(d >> (8 * 0))
h[ph+1] = byte(d >> (8 * 1))

View File

@@ -2,8 +2,6 @@
// object storage system.
package pcloud
// FIXME implement ListR? /listfolder can do recursive lists
// FIXME cleanup returns login required?
// FIXME mime type? Fix overview if implement.
@@ -27,6 +25,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
@@ -246,7 +245,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
found, err := f.listAll(ctx, directoryID, false, true, false, func(item *api.Item) bool {
if item.Name == leaf {
info = item
return true
@@ -380,7 +379,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID
found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
found, err = f.listAll(ctx, pathID, true, false, false, func(item *api.Item) bool {
if item.Name == leaf {
pathIDOut = item.ID
return true
@@ -446,14 +445,16 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, recursive bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{
Method: "GET",
Path: "/listfolder",
Parameters: url.Values{},
}
if recursive {
opts.Parameters.Set("recursive", "1")
}
opts.Parameters.Set("folderid", dirIDtoNumber(dirID))
// FIXME can do recursive
var result api.ItemResult
var resp *http.Response
@@ -465,26 +466,71 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
if err != nil {
return found, fmt.Errorf("couldn't list files: %w", err)
}
for i := range result.Metadata.Contents {
item := &result.Metadata.Contents[i]
if item.IsFolder {
if filesOnly {
continue
var recursiveContents func(is []api.Item, path string)
recursiveContents = func(is []api.Item, path string) {
for i := range is {
item := &is[i]
if item.IsFolder {
if filesOnly {
continue
}
} else {
if directoriesOnly {
continue
}
}
} else {
if directoriesOnly {
continue
item.Name = path + f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
}
if recursive {
recursiveContents(item.Contents, item.Name+"/")
}
}
item.Name = f.opt.Enc.ToStandardName(item.Name)
if fn(item) {
found = true
break
}
}
recursiveContents(result.Metadata.Contents, "")
return
}
// listHelper iterates over all items from the directory
// and calls the callback for each element.
func (f *Fs) listHelper(ctx context.Context, dir string, recursive bool, callback func(entries fs.DirEntry) error) (err error) {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
var iErr error
_, err = f.listAll(ctx, directoryID, false, false, recursive, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.IsFolder {
// cache the directory ID for later lookups
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, info.ModTime()).SetID(info.ID)
// FIXME more info from dir?
iErr = callback(d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
}
iErr = callback(o)
}
if iErr != nil {
return true
}
return false
})
if err != nil {
return err
}
if iErr != nil {
return iErr
}
return nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
@@ -495,36 +541,24 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return nil, err
}
var iErr error
_, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name)
if info.IsFolder {
// cache the directory ID for later lookups
f.dirCache.Put(remote, info.ID)
d := fs.NewDir(remote, info.ModTime()).SetID(info.ID)
// FIXME more info from dir?
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
}
entries = append(entries, o)
}
return false
err = f.listHelper(ctx, dir, false, func(o fs.DirEntry) error {
entries = append(entries, o)
return nil
})
return entries, err
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
list := walk.NewListRHelper(callback)
err = f.listHelper(ctx, dir, true, func(o fs.DirEntry) error {
return list.Add(o)
})
if err != nil {
return nil, err
return err
}
if iErr != nil {
return nil, iErr
}
return entries, nil
return list.Flush()
}
// Creates from the parameters passed in a half finished Object which
@@ -656,7 +690,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
opts.Parameters.Set("fileid", fileIDtoNumber(srcObj.id))
opts.Parameters.Set("toname", f.opt.Enc.FromStandardName(leaf))
opts.Parameters.Set("tofolderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", srcObj.modTime.Unix()))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", uint64(srcObj.modTime.Unix())))
var resp *http.Response
var result api.ItemResult
err = f.pacer.Call(func() (bool, error) {
@@ -1137,7 +1171,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
opts.Parameters.Set("filename", leaf)
opts.Parameters.Set("folderid", dirIDtoNumber(directoryID))
opts.Parameters.Set("nopartial", "1")
opts.Parameters.Set("mtime", fmt.Sprintf("%d", modTime.Unix()))
opts.Parameters.Set("mtime", fmt.Sprintf("%d", uint64(modTime.Unix())))
// Special treatment for a 0 length upload. This doesn't work
// with PUT even with Content-Length set (by setting

View File

@@ -4,16 +4,21 @@ import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/putdotio/go-putio/putio"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
)
func checkStatusCode(resp *http.Response, expected int) error {
if resp.StatusCode != expected {
return &statusCodeError{response: resp}
func checkStatusCode(resp *http.Response, expected ...int) error {
for _, code := range expected {
if resp.StatusCode == code {
return nil
}
}
return nil
return &statusCodeError{response: resp}
}
type statusCodeError struct {
@@ -24,8 +29,10 @@ func (e *statusCodeError) Error() string {
return fmt.Sprintf("unexpected status code (%d) response while doing %s to %s", e.response.StatusCode, e.response.Request.Method, e.response.Request.URL.String())
}
// This method is called from fserrors.ShouldRetry() to determine if an error should be retried.
// Some errors (e.g. 429 Too Many Requests) are handled before this step, so they are not included here.
func (e *statusCodeError) Temporary() bool {
return e.response.StatusCode == 429 || e.response.StatusCode >= 500
return e.response.StatusCode >= 500
}
// shouldRetry returns a boolean as to whether this err deserves to be
@@ -40,6 +47,16 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
if perr, ok := err.(*putio.ErrorResponse); ok {
err = &statusCodeError{response: perr.Response}
}
if scerr, ok := err.(*statusCodeError); ok && scerr.response.StatusCode == 429 {
delay := defaultRateLimitSleep
header := scerr.response.Header.Get("x-ratelimit-reset")
if header != "" {
if resetTime, cerr := strconv.ParseInt(header, 10, 64); cerr == nil {
delay = time.Until(time.Unix(resetTime+1, 0))
}
}
return true, pacer.RetryAfterError(scerr, delay)
}
if fserrors.ShouldRetry(err) {
return true, err
}
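The 429 branch above turns the x-ratelimit-reset header (a Unix timestamp) into a pacer delay. A standalone sketch of that arithmetic, with a hypothetical helper name and made-up values:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// retryDelay mirrors the calculation above: wait until one second past the
// reset time, or fall back to a fixed sleep if the header is missing or invalid.
func retryDelay(header string, fallback time.Duration) time.Duration {
	if header == "" {
		return fallback
	}
	resetTime, err := strconv.ParseInt(header, 10, 64)
	if err != nil {
		return fallback
	}
	return time.Until(time.Unix(resetTime+1, 0))
}

func main() {
	reset := time.Now().Add(30 * time.Second).Unix()
	// Prints roughly 31s: the reset time is 30s away, plus the extra second.
	fmt.Println(retryDelay(strconv.FormatInt(reset, 10), 60*time.Second))
}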

View File

@@ -302,8 +302,8 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
if err != nil {
return false, err
}
if resp.StatusCode != 201 {
return false, fmt.Errorf("unexpected status code from upload create: %d", resp.StatusCode)
if err := checkStatusCode(resp, 201); err != nil {
return shouldRetry(ctx, err)
}
location = resp.Header.Get("location")
if location == "" {

View File

@@ -241,7 +241,13 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = o.fs.httpClient.Do(req)
return shouldRetry(ctx, err)
if err != nil {
return shouldRetry(ctx, err)
}
if err := checkStatusCode(resp, 200, 206); err != nil {
return shouldRetry(ctx, err)
}
return false, nil
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()

View File

@@ -33,8 +33,9 @@ const (
rcloneObscuredClientSecret = "cMwrjWVmrHZp3gf1ZpCrlyGAmPpB-YY5BbVnO1fj-G9evcd8"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
decayConstant = 1 // bigger for slower decay, exponential
defaultChunkSize = 48 * fs.Mebi
defaultRateLimitSleep = 60 * time.Second
)
var (

View File

@@ -58,7 +58,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, RackCorp, SeaweedFS, and Tencent COS",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, ArvanCloud, Digital Ocean, Dreamhost, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
@@ -75,6 +75,12 @@ func init() {
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
}, {
Value: "ChinaMobile",
Help: "China Mobile Ecloud Elastic Object Storage (EOS)",
}, {
Value: "ArvanCloud",
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "DigitalOcean",
Help: "Digital Ocean Spaces",
@@ -84,6 +90,9 @@ func init() {
}, {
Value: "IBMCOS",
Help: "IBM COS S3",
}, {
Value: "LyveCloud",
Help: "Seagate Lyve Cloud",
}, {
Value: "Minio",
Help: "Minio Object Storage",
@@ -102,6 +111,9 @@ func init() {
}, {
Value: "StackPath",
Help: "StackPath Object Storage",
}, {
Value: "Storj",
Help: "Storj (S3 Compatible Gateway)",
}, {
Value: "TencentCOS",
Help: "Tencent Cloud Object Storage (COS)",
@@ -288,7 +300,7 @@ func init() {
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,RackCorp,Scaleway,TencentCOS",
Provider: "!AWS,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,Storj,TencentCOS",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -300,6 +312,114 @@ func init() {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nLeave blank if using AWS to use the default endpoint for the region.",
Provider: "AWS",
}, {
// ChinaMobile endpoints: https://ecloud.10086.cn/op-help-center/doc/article/24534
Name: "endpoint",
Help: "Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "eos-wuxi-1.cmecloud.cn",
Help: "The default endpoint - a good choice if you are unsure.\nEast China (Suzhou)",
}, {
Value: "eos-jinan-1.cmecloud.cn",
Help: "East China (Jinan)",
}, {
Value: "eos-ningbo-1.cmecloud.cn",
Help: "East China (Hangzhou)",
}, {
Value: "eos-shanghai-1.cmecloud.cn",
Help: "East China (Shanghai-1)",
}, {
Value: "eos-zhengzhou-1.cmecloud.cn",
Help: "Central China (Zhengzhou)",
}, {
Value: "eos-hunan-1.cmecloud.cn",
Help: "Central China (Changsha-1)",
}, {
Value: "eos-zhuzhou-1.cmecloud.cn",
Help: "Central China (Changsha-2)",
}, {
Value: "eos-guangzhou-1.cmecloud.cn",
Help: "South China (Guangzhou-2)",
}, {
Value: "eos-dongguan-1.cmecloud.cn",
Help: "South China (Guangzhou-3)",
}, {
Value: "eos-beijing-1.cmecloud.cn",
Help: "North China (Beijing-1)",
}, {
Value: "eos-beijing-2.cmecloud.cn",
Help: "North China (Beijing-2)",
}, {
Value: "eos-beijing-4.cmecloud.cn",
Help: "North China (Beijing-3)",
}, {
Value: "eos-huhehaote-1.cmecloud.cn",
Help: "North China (Huhehaote)",
}, {
Value: "eos-chengdu-1.cmecloud.cn",
Help: "Southwest China (Chengdu)",
}, {
Value: "eos-chongqing-1.cmecloud.cn",
Help: "Southwest China (Chongqing)",
}, {
Value: "eos-guiyang-1.cmecloud.cn",
Help: "Southwest China (Guiyang)",
}, {
Value: "eos-xian-1.cmecloud.cn",
Help: "Northwest China (Xian)",
}, {
Value: "eos-yunnan.cmecloud.cn",
Help: "Yunnan China (Kunming)",
}, {
Value: "eos-yunnan-2.cmecloud.cn",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "eos-tianjin-1.cmecloud.cn",
Help: "Tianjin China (Tianjin)",
}, {
Value: "eos-jilin-1.cmecloud.cn",
Help: "Jilin China (Changchun)",
}, {
Value: "eos-hubei-1.cmecloud.cn",
Help: "Hubei China (Xiangyang)",
}, {
Value: "eos-jiangxi-1.cmecloud.cn",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "eos-gansu-1.cmecloud.cn",
Help: "Gansu China (Lanzhou)",
}, {
Value: "eos-shanxi-1.cmecloud.cn",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "eos-liaoning-1.cmecloud.cn",
Help: "Liaoning China (Shenyang)",
}, {
Value: "eos-hebei-1.cmecloud.cn",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "eos-fujian-1.cmecloud.cn",
Help: "Fujian China (Xiamen)",
}, {
Value: "eos-guangxi-1.cmecloud.cn",
Help: "Guangxi China (Nanning)",
}, {
Value: "eos-anhui-1.cmecloud.cn",
Help: "Anhui China (Huainan)",
}},
}, {
// ArvanCloud endpoints: https://www.arvancloud.com/en/products/cloud-storage
Name: "endpoint",
Help: "Endpoint for Arvan Cloud Object Storage (AOS) API.",
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "The default endpoint - a good choice if you are unsure.\nTehran Iran (Asiatech)",
}, {
Value: "s3.ir-tbz-sh1.arvanstorage.com",
Help: "Tabriz Iran (Shahriar)",
}},
}, {
Name: "endpoint",
Help: "Endpoint for IBM COS S3 API.\n\nSpecify if using an IBM COS On Premise.",
@@ -597,6 +717,20 @@ func init() {
Value: "s3.eu-central-1.stackpathstorage.com",
Help: "EU Endpoint",
}},
}, {
Name: "endpoint",
Help: "Endpoint of the Shared Gateway.",
Provider: "Storj",
Examples: []fs.OptionExample{{
Value: "gateway.eu1.storjshare.io",
Help: "EU1 Shared Gateway",
}, {
Value: "gateway.us1.storjshare.io",
Help: "US1 Shared Gateway",
}, {
Value: "gateway.ap1.storjshare.io",
Help: "Asia-Pacific Shared Gateway",
}},
}, {
// cos endpoints: https://intl.cloud.tencent.com/document/product/436/6224
Name: "endpoint",
@@ -726,7 +860,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,Scaleway,StackPath,RackCorp",
Provider: "!AWS,IBMCOS,TencentCOS,Alibaba,ChinaMobile,ArvanCloud,Scaleway,StackPath,Storj,RackCorp",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -747,6 +881,18 @@ func init() {
Value: "localhost:8333",
Help: "SeaweedFS S3 localhost",
Provider: "SeaweedFS",
}, {
Value: "s3.us-east-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud US East 1 (Virginia)",
Provider: "LyveCloud",
}, {
Value: "s3.us-west-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud US West 1 (California)",
Provider: "LyveCloud",
}, {
Value: "s3.ap-southeast-1.lyvecloud.seagate.com",
Help: "Seagate Lyve Cloud AP Southeast 1 (Singapore)",
Provider: "LyveCloud",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi US East endpoint",
@@ -761,8 +907,16 @@ func init() {
Provider: "Wasabi",
}, {
Value: "s3.ap-northeast-1.wasabisys.com",
Help: "Wasabi AP Northeast endpoint",
Help: "Wasabi AP Northeast 1 (Tokyo) endpoint",
Provider: "Wasabi",
}, {
Value: "s3.ap-northeast-2.wasabisys.com",
Help: "Wasabi AP Northeast 2 (Osaka) endpoint",
Provider: "Wasabi",
}, {
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "ArvanCloud Tehran Iran (Asiatech) endpoint",
Provider: "ArvanCloud",
}},
}, {
Name: "location_constraint",
@@ -844,6 +998,112 @@ func init() {
Value: "us-gov-west-1",
Help: "AWS GovCloud (US) Region",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint.\n\nUsed when creating buckets only.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "wuxi1",
Help: "East China (Suzhou)",
}, {
Value: "jinan1",
Help: "East China (Jinan)",
}, {
Value: "ningbo1",
Help: "East China (Hangzhou)",
}, {
Value: "shanghai1",
Help: "East China (Shanghai-1)",
}, {
Value: "zhengzhou1",
Help: "Central China (Zhengzhou)",
}, {
Value: "hunan1",
Help: "Central China (Changsha-1)",
}, {
Value: "zhuzhou1",
Help: "Central China (Changsha-2)",
}, {
Value: "guangzhou1",
Help: "South China (Guangzhou-2)",
}, {
Value: "dongguan1",
Help: "South China (Guangzhou-3)",
}, {
Value: "beijing1",
Help: "North China (Beijing-1)",
}, {
Value: "beijing2",
Help: "North China (Beijing-2)",
}, {
Value: "beijing4",
Help: "North China (Beijing-3)",
}, {
Value: "huhehaote1",
Help: "North China (Huhehaote)",
}, {
Value: "chengdu1",
Help: "Southwest China (Chengdu)",
}, {
Value: "chongqing1",
Help: "Southwest China (Chongqing)",
}, {
Value: "guiyang1",
Help: "Southwest China (Guiyang)",
}, {
Value: "xian1",
Help: "Northwest China (Xian)",
}, {
Value: "yunnan",
Help: "Yunnan China (Kunming)",
}, {
Value: "yunnan2",
Help: "Yunnan China (Kunming-2)",
}, {
Value: "tianjin1",
Help: "Tianjin China (Tianjin)",
}, {
Value: "jilin1",
Help: "Jilin China (Changchun)",
}, {
Value: "hubei1",
Help: "Hubei China (Xiangyang)",
}, {
Value: "jiangxi1",
Help: "Jiangxi China (Nanchang)",
}, {
Value: "gansu1",
Help: "Gansu China (Lanzhou)",
}, {
Value: "shanxi1",
Help: "Shanxi China (Taiyuan)",
}, {
Value: "liaoning1",
Help: "Liaoning China (Shenyang)",
}, {
Value: "hebei1",
Help: "Hebei China (Shijiazhuang)",
}, {
Value: "fujian1",
Help: "Fujian China (Xiamen)",
}, {
Value: "guangxi1",
Help: "Guangxi China (Nanning)",
}, {
Value: "anhui1",
Help: "Anhui China (Huainan)",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint.\n\nUsed when creating buckets only.",
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "ir-thr-at1",
Help: "Tehran Iran (Asiatech)",
}, {
Value: "ir-tbz-sh1",
Help: "Tabriz Iran (Shahriar)",
}},
}, {
Name: "location_constraint",
Help: "Location constraint - must match endpoint when using IBM Cloud Public.\n\nFor on-prem COS, do not make a selection from this list, hit enter.",
@@ -1010,7 +1270,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,IBMCOS,Alibaba,RackCorp,Scaleway,StackPath,TencentCOS",
Provider: "!AWS,IBMCOS,Alibaba,ChinaMobile,ArvanCloud,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -1021,6 +1281,7 @@ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview
Note that this ACL is applied when server-side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.`,
Provider: "!Storj",
Examples: []fs.OptionExample{{
Value: "default",
Help: "Owner gets Full_CONTROL.\nNo one else has access rights (default).",
@@ -1044,11 +1305,11 @@ doesn't copy the ACL from the source but rather writes a fresh one.`,
}, {
Value: "bucket-owner-read",
Help: "Object owner gets FULL_CONTROL.\nBucket owner gets READ access.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "bucket-owner-full-control",
Help: "Both the object owner and the bucket owner get FULL_CONTROL over the object.\nIf you specify this canned ACL when creating a bucket, Amazon S3 ignores it.",
Provider: "!IBMCOS",
Provider: "!IBMCOS,ChinaMobile",
}, {
Value: "private",
Help: "Owner gets FULL_CONTROL.\nNo one else has access rights (default).\nThis acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS.",
@@ -1097,7 +1358,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "server_side_encryption",
Help: "The server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,ArvanCloud,Minio",
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
@@ -1105,13 +1366,14 @@ isn't set then "acl" is used instead.`,
Value: "AES256",
Help: "AES256",
}, {
Value: "aws:kms",
Help: "aws:kms",
Value: "aws:kms",
Help: "aws:kms",
Provider: "!ChinaMobile",
}},
}, {
Name: "sse_customer_algorithm",
Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in S3.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,ArvanCloud,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1123,7 +1385,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "sse_kms_key_id",
Help: "If using KMS ID you must provide the ARN of Key.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ArvanCloud,Minio",
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
@@ -1134,7 +1396,7 @@ isn't set then "acl" is used instead.`,
}, {
Name: "sse_customer_key",
Help: "If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.",
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,ArvanCloud,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1146,7 +1408,7 @@ isn't set then "acl" is used instead.`,
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
`,
Provider: "AWS,Ceph,Minio",
Provider: "AWS,Ceph,ChinaMobile,ArvanCloud,Minio",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
@@ -1180,6 +1442,9 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
}, {
Value: "INTELLIGENT_TIERING",
Help: "Intelligent-Tiering storage class",
}, {
Value: "GLACIER_IR",
Help: "Glacier Instant Retrieval storage class",
}},
}, {
// Mapping from here: https://www.alibabacloud.com/help/doc-detail/64919.htm
@@ -1199,6 +1464,36 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://ecloud.10086.cn/op-help-center/doc/article/24495
Name: "storage_class",
Help: "The storage class to use when storing new objects in ChinaMobile.",
Provider: "ChinaMobile",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}, {
Value: "GLACIER",
Help: "Archive storage mode",
}, {
Value: "STANDARD_IA",
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://www.arvancloud.com/en/products/cloud-storage
Name: "storage_class",
Help: "The storage class to use when storing new objects in ArvanCloud.",
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "",
Help: "Default",
}, {
Value: "STANDARD",
Help: "Standard storage class",
}},
}, {
// Mapping from here: https://intl.cloud.tencent.com/document/product/436/30925
Name: "storage_class",
@@ -1523,6 +1818,14 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.`,
Advanced: true,
}, {
Name: "use_multipart_etag",
Help: `Whether to use ETag in multipart uploads for verification
This should be true, false or left unset to use the default for the provider.
`,
Default: fs.Tristate{},
Advanced: true,
},
}})
}
@@ -1587,6 +1890,7 @@ type Options struct {
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
DisableHTTP2 bool `config:"disable_http2"`
DownloadURL string `config:"download_url"`
UseMultipartEtag fs.Tristate `config:"use_multipart_etag"`
}
// Fs represents a remote s3 server
@@ -1890,16 +2194,25 @@ func setQuirks(opt *Options) {
listObjectsV2 = true
virtualHostStyle = true
urlEncodeListings = true
useMultipartEtag = true
)
switch opt.Provider {
case "AWS":
// No quirks
case "Alibaba":
// No quirks
useMultipartEtag = false // Alibaba seems to calculate multipart Etags differently from AWS
case "Ceph":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "ChinaMobile":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "ArvanCloud":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
case "DigitalOcean":
urlEncodeListings = false
case "Dreamhost":
@@ -1908,13 +2221,18 @@ func setQuirks(opt *Options) {
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
case "LyveCloud":
useMultipartEtag = false // LyveCloud seems to calculate multipart Etags differently from AWS
case "Minio":
virtualHostStyle = false
case "Netease":
listObjectsV2 = false // untested
urlEncodeListings = false
useMultipartEtag = false // untested
case "RackCorp":
// No quirks
useMultipartEtag = false // untested
case "Scaleway":
// Scaleway can only have 1000 parts in an upload
if opt.MaxUploadParts > 1000 {
@@ -1925,23 +2243,32 @@ func setQuirks(opt *Options) {
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false // untested
case "StackPath":
listObjectsV2 = false // untested
virtualHostStyle = false
urlEncodeListings = false
case "Storj":
// Force chunk size to >= 64 MiB
if opt.ChunkSize < 64*fs.Mebi {
opt.ChunkSize = 64 * fs.Mebi
}
case "TencentCOS":
listObjectsV2 = false // untested
listObjectsV2 = false // untested
useMultipartEtag = false // untested
case "Wasabi":
// No quirks
case "Other":
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
default:
fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
listObjectsV2 = false
virtualHostStyle = false
urlEncodeListings = false
useMultipartEtag = false
}
// Path Style vs Virtual Host style
@@ -1963,6 +2290,12 @@ func setQuirks(opt *Options) {
opt.ListVersion = 1
}
}
// Set the correct use multipart Etag for error checking if not manually set
if !opt.UseMultipartEtag.Valid {
opt.UseMultipartEtag.Valid = true
opt.UseMultipartEtag.Value = useMultipartEtag
}
}
// setRoot changes the root of the Fs
@@ -2063,6 +2396,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
}
if opt.Provider == "Storj" {
f.features.Copy = nil
f.features.SetTier = false
f.features.GetTier = false
}
// f.listMultipartUploads()
return f, nil
}
@@ -3209,9 +3547,6 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if err != nil {
return err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
}
o.setMetaData(resp.ETag, resp.ContentLength, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
return nil
}
@@ -3241,6 +3576,7 @@ func (o *Object) setMetaData(etag *string, contentLength *int64, lastModified *t
o.storageClass = aws.StringValue(storageClass)
if lastModified == nil {
o.lastModified = time.Now()
fs.Logf(o, "Failed to read last modified")
} else {
o.lastModified = *lastModified
}
@@ -3321,11 +3657,7 @@ func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options
return nil, err
}
size, err := strconv.ParseInt(resp.Header.Get("Content-Length"), 10, 64)
if err != nil {
fs.Debugf(o, "Failed to parse content length from string %s, %v", resp.Header.Get("Content-Length"), err)
}
contentLength := &size
contentLength := &resp.ContentLength
if resp.Header.Get("Content-Range") != "" {
var contentRange = resp.Header.Get("Content-Range")
slash := strings.IndexRune(contentRange, '/')
@@ -3416,9 +3748,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified: %v", err)
}
// read size from ContentLength or ContentRange
size := resp.ContentLength
if resp.ContentRange != nil {
@@ -3441,7 +3771,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var warnStreamUpload sync.Once
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (err error) {
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (etag string, err error) {
f := o.fs
// make concurrency machinery
@@ -3488,7 +3818,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("multipart upload failed to initialise: %w", err)
return etag, fmt.Errorf("multipart upload failed to initialise: %w", err)
}
uid := cout.UploadId
@@ -3517,8 +3847,21 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
partsMu sync.Mutex // to protect parts
parts []*s3.CompletedPart
off int64
md5sMu sync.Mutex
md5s []byte
)
addMd5 := func(md5binary *[md5.Size]byte, partNum int64) {
md5sMu.Lock()
defer md5sMu.Unlock()
start := partNum * md5.Size
end := start + md5.Size
if extend := end - int64(len(md5s)); extend > 0 {
md5s = append(md5s, make([]byte, extend)...)
}
copy(md5s[start:end], (*md5binary)[:])
}
for partNum := int64(1); !finished; partNum++ {
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
@@ -3548,7 +3891,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
finished = true
} else if err != nil {
free()
return fmt.Errorf("multipart upload failed to read source: %w", err)
return etag, fmt.Errorf("multipart upload failed to read source: %w", err)
}
buf = buf[:n]
@@ -3561,6 +3904,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
// create checksum of buffer for integrity checking
md5sumBinary := md5.Sum(buf)
addMd5(&md5sumBinary, partNum-1)
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
err = f.pacer.Call(func() (bool, error) {
@@ -3602,7 +3946,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
}
err = g.Wait()
if err != nil {
return err
return etag, err
}
// sort the completed parts by part number
@@ -3623,9 +3967,11 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("multipart upload failed to finalise: %w", err)
return etag, fmt.Errorf("multipart upload failed to finalise: %w", err)
}
return nil
hashOfHashes := md5.Sum(md5s)
etag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
return etag, nil
}
// Update the Object from in with modTime and size
@@ -3651,19 +3997,20 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// - so we can add the md5sum in the metadata as metaMD5Hash if using SSE/SSE-C
// - for multipart provided checksums aren't disabled
// - so we can add the md5sum in the metadata as metaMD5Hash
var md5sum string
var md5sumBase64 string
var md5sumHex string
if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(hash) {
hashBytes, err := hex.DecodeString(hash)
md5sumHex, err = src.Hash(ctx, hash.MD5)
if err == nil && matchMd5.MatchString(md5sumHex) {
hashBytes, err := hex.DecodeString(md5sumHex)
if err == nil {
md5sum = base64.StdEncoding.EncodeToString(hashBytes)
md5sumBase64 = base64.StdEncoding.EncodeToString(hashBytes)
if (multipart || o.fs.etagIsNotMD5) && !o.fs.opt.DisableChecksum {
// Set the md5sum as metadata on the object if
// - a multipart upload
// - the Etag is not an MD5, eg when using SSE/SSE-C
// provided checksums aren't disabled
metadata[metaMD5Hash] = &md5sum
metadata[metaMD5Hash] = &md5sumBase64
}
}
}
@@ -3678,8 +4025,8 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
if md5sumBase64 != "" {
req.ContentMD5 = &md5sumBase64
}
if o.fs.opt.RequesterPays {
req.RequestPayer = aws.String(s3.RequestPayerRequester)
@@ -3733,8 +4080,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
var resp *http.Response // response from PUT
var wantETag string // Multipart upload Etag to check
if multipart {
err = o.uploadMultipart(ctx, &req, size, in)
wantETag, err = o.uploadMultipart(ctx, &req, size, in)
if err != nil {
return err
}
@@ -3796,7 +4144,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// so make up the object as best we can assuming it got
// uploaded properly. If size < 0 then we need to do the HEAD.
if o.fs.opt.NoHead && size >= 0 {
o.md5 = md5sum
o.md5 = md5sumHex
o.bytes = size
o.lastModified = time.Now()
o.meta = req.Metadata
@@ -3814,7 +4162,18 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Read the metadata from the newly created object
o.meta = nil // wipe old metadata
err = o.readMetaData(ctx)
head, err := o.headObject(ctx)
if err != nil {
return err
}
o.setMetaData(head.ETag, head.ContentLength, head.LastModified, head.Metadata, head.ContentType, head.StorageClass)
if o.fs.opt.UseMultipartEtag.Value && !o.fs.etagIsNotMD5 && wantETag != "" && head.ETag != nil && *head.ETag != "" {
gotETag := strings.Trim(strings.ToLower(*head.ETag), `"`)
if wantETag != gotETag {
return fmt.Errorf("multipart upload corrupted: Etags differ: expecting %s but got %s", wantETag, gotETag)
}
fs.Debugf(o, "Multipart upload Etag: %s OK", wantETag)
}
return err
}
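For reference, the ETag checked above follows the common S3 multipart convention that the verification relies on: the hex MD5 of the concatenated binary MD5s of each part, followed by a dash and the part count. A self-contained sketch (hypothetical helper, not backend code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// multipartETag computes md5(md5(part1) + ... + md5(partN)) as hex, plus "-N".
func multipartETag(parts [][]byte) string {
	var md5s []byte
	for _, part := range parts {
		sum := md5.Sum(part)
		md5s = append(md5s, sum[:]...)
	}
	hashOfHashes := md5.Sum(md5s)
	return fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(parts))
}

func main() {
	parts := [][]byte{[]byte("first part"), []byte("second part")}
	// Comparable (case-insensitively, without quotes) to the ETag S3 returns
	// after CompleteMultipartUpload.
	fmt.Println(multipartETag(parts))
}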

View File

@@ -42,7 +42,8 @@ const (
hashCommandNotSupported = "none"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
decayConstant = 2 // bigger for slower decay, exponential
keepAliveInterval = time.Minute // send keepalives every this long while running commands
)
var (
@@ -59,11 +60,13 @@ func init() {
Help: "SSH host to connect to.\n\nE.g. \"example.com\".",
Required: true,
}, {
Name: "user",
Help: "SSH username, leave blank for current username, " + currentUser + ".",
Name: "user",
Help: "SSH username.",
Default: currentUser,
}, {
Name: "port",
Help: "SSH port, leave blank to use default (22).",
Name: "port",
Help: "SSH port number.",
Default: 22,
}, {
Name: "pass",
Help: "SSH password, leave blank to use ssh-agent.",
@@ -339,6 +342,32 @@ func (c *conn) wait() {
c.err <- c.sshClient.Conn.Wait()
}
// Send a keepalive over the ssh connection
func (c *conn) sendKeepAlive() {
_, _, err := c.sshClient.SendRequest("keepalive@openssh.com", true, nil)
if err != nil {
fs.Debugf(nil, "Failed to send keep alive: %v", err)
}
}
// Send keepalives every interval over the ssh connection until done is closed
func (c *conn) sendKeepAlives(interval time.Duration) (done chan struct{}) {
done = make(chan struct{})
go func() {
t := time.NewTicker(interval)
defer t.Stop()
for {
select {
case <-t.C:
c.sendKeepAlive()
case <-done:
return
}
}
}()
return done
}
// Closes the connection
func (c *conn) close() error {
sftpErr := c.sftpClient.Close()
@@ -1098,6 +1127,9 @@ func (f *Fs) run(ctx context.Context, cmd string) ([]byte, error) {
}
defer f.putSftpConnection(&c, err)
// Send keepalives while the connection is open
defer close(c.sendKeepAlives(keepAliveInterval))
session, err := c.sshClient.NewSession()
if err != nil {
return nil, fmt.Errorf("run: get SFTP session: %w", err)
@@ -1110,10 +1142,12 @@ func (f *Fs) run(ctx context.Context, cmd string) ([]byte, error) {
session.Stdout = &stdout
session.Stderr = &stderr
fs.Debugf(f, "Running remote command: %s", cmd)
err = session.Run(cmd)
if err != nil {
return nil, fmt.Errorf("failed to run %q: %s: %w", cmd, stderr.Bytes(), err)
return nil, fmt.Errorf("failed to run %q: %s: %w", cmd, bytes.TrimSpace(stderr.Bytes()), err)
}
fs.Debugf(f, "Remote command result: %s", bytes.TrimSpace(stdout.Bytes()))
return stdout.Bytes(), nil
}
@@ -1230,8 +1264,6 @@ func (o *Object) Remote() string {
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
if o.fs.opt.DisableHashCheck {
return "", nil
}
@@ -1255,36 +1287,16 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
c, err := o.fs.getSftpConnection(ctx)
if err != nil {
return "", fmt.Errorf("Hash get SFTP connection: %w", err)
}
session, err := c.sshClient.NewSession()
o.fs.putSftpConnection(&c, err)
if err != nil {
return "", fmt.Errorf("Hash put SFTP connection: %w", err)
}
var stdout, stderr bytes.Buffer
session.Stdout = &stdout
session.Stderr = &stderr
escapedPath := shellEscape(o.path())
if o.fs.opt.PathOverride != "" {
escapedPath = shellEscape(path.Join(o.fs.opt.PathOverride, o.remote))
}
err = session.Run(hashCmd + " " + escapedPath)
fs.Debugf(nil, "sftp cmd = %s", escapedPath)
b, err := o.fs.run(ctx, hashCmd+" "+escapedPath)
if err != nil {
_ = session.Close()
fs.Debugf(o, "Failed to calculate %v hash: %v (%s)", r, err, bytes.TrimSpace(stderr.Bytes()))
return "", nil
return "", fmt.Errorf("failed to calculate %v hash: %w", r, err)
}
_ = session.Close()
b := stdout.Bytes()
fs.Debugf(nil, "sftp output = %q", b)
str := parseHash(b)
fs.Debugf(nil, "sftp hash = %q", str)
if r == hash.MD5 {
o.md5sum = &str
} else if r == hash.SHA1 {

View File

@@ -1,8 +1,8 @@
//go:build !plan9
// +build !plan9
// Package tardigrade provides an interface to Tardigrade decentralized object storage.
package tardigrade
// Package storj provides an interface to Storj decentralized object storage.
package storj
import (
"context"
@@ -31,16 +31,17 @@ const (
)
var satMap = map[string]string{
"us-central-1.tardigrade.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
"europe-west-1.tardigrade.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777",
"asia-east-1.tardigrade.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777",
"us-central-1.storj.io": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S@us-central-1.tardigrade.io:7777",
"europe-west-1.storj.io": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs@europe-west-1.tardigrade.io:7777",
"asia-east-1.storj.io": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6@asia-east-1.tardigrade.io:7777",
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "tardigrade",
Description: "Tardigrade Decentralized Cloud Storage",
Name: "storj",
Description: "Storj Decentralized Cloud Storage",
Aliases: []string{"tardigrade"},
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, configIn fs.ConfigIn) (*fs.ConfigOut, error) {
provider, _ := m.Get(fs.ConfigProvider)
@@ -84,10 +85,9 @@ func init() {
},
Options: []fs.Option{
{
Name: fs.ConfigProvider,
Help: "Choose an authentication method.",
Required: true,
Default: existingProvider,
Name: fs.ConfigProvider,
Help: "Choose an authentication method.",
Default: existingProvider,
Examples: []fs.OptionExample{{
Value: "existing",
Help: "Use an existing access grant.",
@@ -99,23 +99,21 @@ func init() {
{
Name: "access_grant",
Help: "Access grant.",
Required: false,
Provider: "existing",
},
{
Name: "satellite_address",
Help: "Satellite address.\n\nCustom satellite address should match the format: `<nodeid>@<address>:<port>`.",
Required: false,
Provider: newProvider,
Default: "us-central-1.tardigrade.io",
Default: "us-central-1.storj.io",
Examples: []fs.OptionExample{{
Value: "us-central-1.tardigrade.io",
Value: "us-central-1.storj.io",
Help: "US Central 1",
}, {
Value: "europe-west-1.tardigrade.io",
Value: "europe-west-1.storj.io",
Help: "Europe West 1",
}, {
Value: "asia-east-1.tardigrade.io",
Value: "asia-east-1.storj.io",
Help: "Asia East 1",
},
},
@@ -123,13 +121,11 @@ func init() {
{
Name: "api_key",
Help: "API key.",
Required: false,
Provider: newProvider,
},
{
Name: "passphrase",
Help: "Encryption passphrase.\n\nTo access existing objects enter passphrase used for uploading.",
Required: false,
Provider: newProvider,
},
},
@@ -145,7 +141,7 @@ type Options struct {
Passphrase string `config:"passphrase"`
}
// Fs represents a remote to Tardigrade
// Fs represents a remote to Storj
type Fs struct {
name string // the name of the remote
root string // root of the filesystem
@@ -163,11 +159,12 @@ var (
_ fs.Fs = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Mover = &Fs{}
)
// NewFs creates a filesystem backed by Tardigrade.
// NewFs creates a filesystem backed by Storj.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs, err error) {
// Setup filesystem and connection to Tardigrade
// Setup filesystem and connection to Storj
root = norm.NFC.String(root)
root = strings.Trim(root, "/")
@@ -188,24 +185,24 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
if f.opts.Access != "" {
access, err = uplink.ParseAccess(f.opts.Access)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
}
if access == nil && f.opts.SatelliteAddress != "" && f.opts.APIKey != "" && f.opts.Passphrase != "" {
access, err = uplink.RequestAccessWithPassphrase(ctx, f.opts.SatelliteAddress, f.opts.APIKey, f.opts.Passphrase)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
serializedAccess, err := access.Serialize()
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
err = config.SetValueAndSave(f.name, "access_grant", serializedAccess)
if err != nil {
return nil, fmt.Errorf("tardigrade: access: %w", err)
return nil, fmt.Errorf("storj: access: %w", err)
}
}
@@ -237,7 +234,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
if bucketName != "" && bucketPath != "" {
_, err = project.StatBucket(ctx, bucketName)
if err != nil {
return f, fmt.Errorf("tardigrade: bucket: %w", err)
return f, fmt.Errorf("storj: bucket: %w", err)
}
object, err := project.StatObject(ctx, bucketName, bucketPath)
@@ -263,7 +260,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (_ fs.Fs,
return f, nil
}
// connect opens a connection to Tardigrade.
// connect opens a connection to Storj.
func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
fs.Debugf(f, "connecting...")
defer fs.Debugf(f, "connected: %+v", err)
@@ -274,7 +271,7 @@ func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
project, err = cfg.OpenProject(ctx, f.access)
if err != nil {
return nil, fmt.Errorf("tardigrade: project: %w", err)
return nil, fmt.Errorf("storj: project: %w", err)
}
return
@@ -683,3 +680,43 @@ func newPrefix(prefix string) string {
return prefix + "/"
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given
//
// It returns the destination Object and a possible error
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Move parameters
srcBucket, srcKey := bucket.Split(srcObj.absolute)
dstBucket, dstKey := f.absolute(remote)
options := uplink.MoveObjectOptions{}
// Do the move
err := f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
// Make sure destination bucket exists
_, err := f.project.EnsureBucket(ctx, dstBucket)
if err != nil {
return nil, fmt.Errorf("rename object failed to create destination bucket: %w", err)
}
// And try again
err = f.project.MoveObject(ctx, srcBucket, srcKey, dstBucket, dstKey, &options)
if err != nil {
return nil, fmt.Errorf("rename object failed: %w", err)
}
}
// Read the new object
return f.NewObject(ctx, remote)
}

View File

@@ -1,7 +1,7 @@
//go:build !plan9
// +build !plan9
package tardigrade
package storj
import (
"context"
@@ -18,7 +18,7 @@ import (
"storj.io/uplink"
)
// Object describes a Tardigrade object
// Object describes a Storj object
type Object struct {
fs *Fs
@@ -32,7 +32,7 @@ type Object struct {
// Check the interfaces are satisfied.
var _ fs.Object = &Object{}
// newObjectFromUplink creates a new object from a Tardigrade uplink object.
// newObjectFromUplink creates a new object from a Storj uplink object.
func newObjectFromUplink(f *Fs, relative string, object *uplink.Object) *Object {
// Attempt to use the modified time from the metadata. Otherwise
// fallback to the server time.

View File

@@ -1,20 +1,20 @@
//go:build !plan9
// +build !plan9
// Test Tardigrade filesystem interface
package tardigrade_test
// Test Storj filesystem interface
package storj_test
import (
"testing"
"github.com/rclone/rclone/backend/tardigrade"
"github.com/rclone/rclone/backend/storj"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestTardigrade:",
NilObject: (*tardigrade.Object)(nil),
RemoteName: "TestStorj:",
NilObject: (*storj.Object)(nil),
})
}

View File

@@ -1,4 +1,4 @@
//go:build plan9
// +build plan9
package tardigrade
package storj

View File

@@ -754,22 +754,34 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var containers []swift.Container
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container listing failed: %w", err)
}
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var total, objects int64
for _, c := range containers {
total += c.Bytes
objects += c.Count
if f.rootContainer != "" {
var container swift.Container
err = f.pacer.Call(func() (bool, error) {
container, _, err = f.c.Container(ctx, f.rootContainer)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container info failed: %w", err)
}
total = container.Bytes
objects = container.Count
} else {
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("container listing failed: %w", err)
}
for _, c := range containers {
total += c.Bytes
objects += c.Count
}
}
usage := &fs.Usage{
usage = &fs.Usage{
Used: fs.NewUsageValue(total), // bytes in use
Objects: fs.NewUsageValue(objects), // objects in use
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"io"
"io/ioutil"
"sync"
"time"
@@ -84,6 +85,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err := o.Update(ctx, readers[i], src, options...)
if err != nil {
errs[i] = fmt.Errorf("%s: %w", o.UpstreamFs().Name(), err)
if len(entries) > 1 {
// Drain the input buffer to allow other uploads to continue
_, _ = io.Copy(ioutil.Discard, readers[i])
}
}
} else {
errs[i] = fs.ErrorNotAFile

View File

@@ -6,6 +6,7 @@ import (
"errors"
"fmt"
"io"
"io/ioutil"
"path"
"path/filepath"
"strings"
@@ -33,25 +34,21 @@ func init() {
Help: "List of space separated upstreams.\n\nCan be 'upstreama:test/dir upstreamb:', '\"upstreama:test/space:ro dir\" upstreamb:', etc.",
Required: true,
}, {
Name: "action_policy",
Help: "Policy to choose upstream on ACTION category.",
Required: true,
Default: "epall",
Name: "action_policy",
Help: "Policy to choose upstream on ACTION category.",
Default: "epall",
}, {
Name: "create_policy",
Help: "Policy to choose upstream on CREATE category.",
Required: true,
Default: "epmfs",
Name: "create_policy",
Help: "Policy to choose upstream on CREATE category.",
Default: "epmfs",
}, {
Name: "search_policy",
Help: "Policy to choose upstream on SEARCH category.",
Required: true,
Default: "ff",
Name: "search_policy",
Help: "Policy to choose upstream on SEARCH category.",
Default: "ff",
}, {
Name: "cache_time",
Help: "Cache time of usage and free space (in seconds).\n\nThis option is only useful when a path preserving policy is used.",
Required: true,
Default: 120,
Name: "cache_time",
Help: "Cache time of usage and free space (in seconds).\n\nThis option is only useful when a path preserving policy is used.",
Default: 120,
}},
}
fs.Register(fsi)
@@ -490,6 +487,10 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bo
}
if err != nil {
errs[i] = fmt.Errorf("%s: %w", u.Name(), err)
if len(upstreams) > 1 {
// Drain the input buffer to allow other uploads to continue
_, _ = io.Copy(ioutil.Discard, readers[i])
}
return
}
objs[i] = u.WrapObject(o)
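
The drain calls above exist because the union backend fans a single source reader out to several upstream uploads; if one upload errors out and stops reading, the shared writer blocks and stalls the remaining uploads. A minimal standalone sketch of the idea follows — the pipe/MultiWriter fan-out here is an assumption for illustration, not the backend's actual plumbing.

```go
package main

import (
	"io"
	"io/ioutil"
	"strings"
	"sync"
)

func main() {
	src := strings.NewReader(strings.Repeat("x", 1<<20))

	r1, w1 := io.Pipe()
	r2, w2 := io.Pipe()

	var wg sync.WaitGroup
	wg.Add(2)

	// Upload 1 "fails" immediately, but still drains its reader so the
	// MultiWriter below is not blocked waiting for it.
	go func() {
		defer wg.Done()
		_, _ = io.Copy(ioutil.Discard, r1)
	}()

	// Upload 2 consumes everything as normal.
	go func() {
		defer wg.Done()
		_, _ = io.Copy(ioutil.Discard, r2)
	}()

	// Copy the source once into both pipes.
	_, _ = io.Copy(io.MultiWriter(w1, w2), src)
	_ = w1.Close()
	_ = w2.Close()
	wg.Wait()
}
```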

View File

@@ -4,8 +4,6 @@ import (
"bytes"
"context"
"fmt"
"io/ioutil"
"os"
"testing"
"time"
@@ -20,19 +18,12 @@ import (
)
// MakeTestDirs makes directories in /tmp for testing
func MakeTestDirs(t *testing.T, n int) (dirs []string, clean func()) {
func MakeTestDirs(t *testing.T, n int) (dirs []string) {
for i := 1; i <= n; i++ {
dir, err := ioutil.TempDir("", fmt.Sprintf("rclone-union-test-%d", n))
require.NoError(t, err)
dir := t.TempDir()
dirs = append(dirs, dir)
}
clean = func() {
for _, dir := range dirs {
err := os.RemoveAll(dir)
assert.NoError(t, err)
}
}
return dirs, clean
return dirs
}
func (f *Fs) TestInternalReadOnly(t *testing.T) {
@@ -95,8 +86,7 @@ func TestMoveCopy(t *testing.T) {
t.Skip("Skipping as -remote set")
}
ctx := context.Background()
dirs, clean := MakeTestDirs(t, 1)
defer clean()
dirs := MakeTestDirs(t, 1)
fsString := fmt.Sprintf(":union,upstreams='%s :memory:bucket':", dirs[0])
f, err := fs.NewFs(ctx, fsString)
require.NoError(t, err)

View File

@@ -27,8 +27,7 @@ func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnion"
fstests.Run(t, &fstests.Opt{
@@ -49,8 +48,7 @@ func TestRO(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + ":ro " + dirs[2] + ":ro"
name := "TestUnionRO"
fstests.Run(t, &fstests.Opt{
@@ -71,8 +69,7 @@ func TestNC(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + ":nc " + dirs[2] + ":nc"
name := "TestUnionNC"
fstests.Run(t, &fstests.Opt{
@@ -93,8 +90,7 @@ func TestPolicy1(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy1"
fstests.Run(t, &fstests.Opt{
@@ -115,8 +111,7 @@ func TestPolicy2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy2"
fstests.Run(t, &fstests.Opt{
@@ -137,8 +132,7 @@ func TestPolicy3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs, clean := union.MakeTestDirs(t, 3)
defer clean()
dirs := union.MakeTestDirs(t, 3)
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy3"
fstests.Run(t, &fstests.Opt{

View File

@@ -6,8 +6,6 @@ import (
"fmt"
"io"
"math"
"path"
"path/filepath"
"strings"
"sync"
"sync/atomic"
@@ -91,7 +89,7 @@ func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs
return nil, err
}
f.RootFs = rFs
rootString := path.Join(remote, filepath.ToSlash(root))
rootString := fspath.JoinRootPath(remote, root)
myFs, err := cache.Get(ctx, rootString)
if err != nil && err != fs.ErrorIsFile {
return nil, err

View File

@@ -454,7 +454,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
f.srv.SetHeader("Referer", u.String())
if !f.findHeader(opt.Headers, "Referer") {
f.srv.SetHeader("Referer", u.String())
}
if root != "" && !rootIsDir {
// Check to see if the root actually an existing file
@@ -517,6 +519,17 @@ func (f *Fs) addHeaders(headers fs.CommaSepList) {
}
}
// Returns true if the header was configured
func (f *Fs) findHeader(headers fs.CommaSepList, find string) bool {
for i := 0; i < len(headers); i += 2 {
key := f.opt.Headers[i]
if strings.EqualFold(key, find) {
return true
}
}
return false
}
// fetch the bearer token and set it if successful
func (f *Fs) fetchAndSetBearerToken() error {
if f.opt.BearerTokenCommand == "" {
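
The findHeader helper above lets the default Referer be applied only when the user has not already configured one. A standalone sketch of that case-insensitive lookup over a flat key,value,key,value,... list is shown below; `hasHeader` is a hypothetical name used for illustration, not the backend's function.

```go
package main

import (
	"fmt"
	"strings"
)

// hasHeader reports whether a header key is present in a flat
// key,value,key,value,... list, comparing keys case-insensitively.
func hasHeader(headers []string, find string) bool {
	for i := 0; i < len(headers)-1; i += 2 {
		if strings.EqualFold(headers[i], find) {
			return true
		}
	}
	return false
}

func main() {
	headers := []string{"Referer", "https://example.com/", "X-Token", "abc"}
	fmt.Println(hasHeader(headers, "referer")) // true: match is case-insensitive
	fmt.Println(hasHeader(headers, "Cookie"))  // false: not configured
}
```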

View File

@@ -66,6 +66,11 @@ func init() {
})
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "hard_delete",
Help: "Delete files permanently rather than putting them into the trash.",
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
@@ -79,8 +84,9 @@ func init() {
// Options defines the configuration for this backend
type Options struct {
Token string `config:"token"`
Enc encoder.MultiEncoder `config:"encoding"`
Token string `config:"token"`
HardDelete bool `config:"hard_delete"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote yandex
@@ -630,7 +636,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
}
}
//delete directory
return f.delete(ctx, root, false)
return f.delete(ctx, root, f.opt.HardDelete)
}
// Rmdir deletes the container
@@ -1141,7 +1147,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.delete(ctx, o.filePath(), false)
return o.fs.delete(ctx, o.filePath(), o.fs.opt.HardDelete)
}
// MimeType of an Object if known, "" otherwise

View File

@@ -1,5 +1,5 @@
#!/bin/bash
set -e
docker build -t rclone/xgo-cgofuse https://github.com/billziss-gh/cgofuse.git
docker build -t rclone/xgo-cgofuse https://github.com/winfsp/cgofuse.git
docker images
docker push rclone/xgo-cgofuse

View File

@@ -52,6 +52,7 @@ var (
var osarches = []string{
"windows/386",
"windows/amd64",
"windows/arm64",
"darwin/amd64",
"darwin/arm64",
"linux/386",
@@ -85,6 +86,13 @@ var archFlags = map[string][]string{
"arm-v7": {"GOARM=7"},
}
// Map Go architectures to NFPM architectures
// Any missing are passed straight through
var goarchToNfpm = map[string]string{
"arm": "arm6",
"arm-v7": "arm7",
}
// runEnv - run a shell command with env
func runEnv(args, env []string) error {
if *debug {
@@ -167,11 +175,15 @@ func buildDebAndRpm(dir, version, goarch string) []string {
pkgVersion := version[1:]
pkgVersion = strings.Replace(pkgVersion, "β", "-beta", -1)
pkgVersion = strings.Replace(pkgVersion, "-", ".", -1)
nfpmArch, ok := goarchToNfpm[goarch]
if !ok {
nfpmArch = goarch
}
// Make nfpm.yaml from the template
substitute("../bin/nfpm.yaml", path.Join(dir, "nfpm.yaml"), map[string]string{
"Version": pkgVersion,
"Arch": goarch,
"Arch": nfpmArch,
})
// build them
@@ -377,7 +389,7 @@ func compileArch(version, goos, goarch, dir string) bool {
artifacts := []string{buildZip(dir)}
// build a .deb and .rpm if appropriate
if goos == "linux" {
artifacts = append(artifacts, buildDebAndRpm(dir, version, stripVersion(goarch))...)
artifacts = append(artifacts, buildDebAndRpm(dir, version, goarch)...)
}
if *copyAs != "" {
for _, artifact := range artifacts {

View File

@@ -24,6 +24,7 @@ docs = [
"overview.md",
"flags.md",
"docker.md",
"bisync.md",
# Keep these alphabetical by full name
"fichier.md",
@@ -47,11 +48,13 @@ docs = [
"hdfs.md",
"http.md",
"hubic.md",
"internetarchive.md",
"jottacloud.md",
"koofr.md",
"mailru.md",
"mega.md",
"memory.md",
"netstorage.md",
"azureblob.md",
"onedrive.md",
"opendrive.md",
@@ -63,8 +66,9 @@ docs = [
"putio.md",
"seafile.md",
"sftp.md",
"storj.md",
"sugarsync.md",
"tardigrade.md",
"tardigrade.md", # stub only to redirect to storj.md
"uptobox.md",
"union.md",
"webdav.md",

View File

@@ -41,7 +41,7 @@ You can discover what commands a backend implements by using
rclone backend help <backendname>
You can also discover information about the backend using (see
[operations/fsinfo](/rc/#operations/fsinfo) in the remote control docs
[operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
for more info).
rclone backend features remote:
@@ -55,7 +55,7 @@ Pass arguments to the backend by placing them on the end of the line
rclone backend cleanup remote:path file1 file2 file3
Note to run these commands on a running backend then see
[backend/command](/rc/#backend/command) in the rc docs.
[backend/command](/rc/#backend-command) in the rc docs.
`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(2, 1e6, command, args)
@@ -149,7 +149,7 @@ See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).
[backend/command](/rc/#backend-command).
`, name)
for _, cmd := range cmds {

View File

@@ -13,7 +13,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"

View File

@@ -18,7 +18,7 @@ import (
"sync/atomic"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/winfsp/cgofuse/fuse"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
@@ -168,7 +168,7 @@ func mount(VFS *vfs.VFS, mountPath string, opt *mountlib.Options) (<-chan error,
host.SetCapCaseInsensitive(f.Features().CaseInsensitive)
// Create options
options := mountOptions(VFS, f.Name()+":"+f.Root(), mountpoint, opt)
options := mountOptions(VFS, opt.DeviceName, mountpoint, opt)
fs.Debugf(f, "Mounting with options: %q", options)
// Serve the mount point in the background returning error to errChan

View File

@@ -10,11 +10,17 @@
package cmount
import (
"runtime"
"testing"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/vfs/vfstest"
)
func TestMount(t *testing.T) {
// Disable tests under macOS and the CI since they are locking up
if runtime.GOOS == "darwin" {
testy.SkipUnreliable(t)
}
vfstest.RunTests(t, false, mount)
}

View File

@@ -165,7 +165,7 @@ func runRoot(cmd *cobra.Command, args []string) {
// setupRootCommand sets default usage, help, and error handling for
// the root command.
//
// Helpful example: http://rtfcode.com/xref/moby-17.03.2-ce/cli/cobra.go
// Helpful example: https://github.com/moby/moby/blob/master/cli/cobra.go
func setupRootCommand(rootCmd *cobra.Command) {
ci := fs.GetConfig(context.Background())
// Add global flags
@@ -329,12 +329,29 @@ func showBackend(name string) {
if opt.IsPassword {
fmt.Printf("**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).\n\n")
}
fmt.Printf("Properties:\n\n")
fmt.Printf("- Config: %s\n", opt.Name)
fmt.Printf("- Env Var: %s\n", opt.EnvVarName(backend.Prefix))
if opt.Provider != "" {
fmt.Printf("- Provider: %s\n", opt.Provider)
}
fmt.Printf("- Type: %s\n", opt.Type())
fmt.Printf("- Default: %s\n", quoteString(opt.GetValue()))
defaultValue := opt.GetValue()
// Default value and Required are related: Required means option must
// have a value, but if there is a default then a value does not have
// to be explicitly set, and then Required makes no difference.
if defaultValue != "" {
fmt.Printf("- Default: %s\n", quoteString(defaultValue))
} else {
fmt.Printf("- Required: %v\n", opt.Required)
}
// List examples / possible choices
if len(opt.Examples) > 0 {
fmt.Printf("- Examples:\n")
if opt.Exclusive {
fmt.Printf("- Choices:\n")
} else {
fmt.Printf("- Examples:\n")
}
for _, ex := range opt.Examples {
fmt.Printf(" - %s\n", quoteString(ex.Value))
for _, line := range strings.Split(ex.Help, "\n") {

View File

@@ -86,7 +86,7 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, f.Name()+":"+f.Root(), opt)...)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, opt.DeviceName, opt)...)
if err != nil {
return nil, nil, err
}

View File

@@ -25,11 +25,10 @@ func init() {
// mountOptions configures the options from the command line flags
//
// man mount.fuse for more info and note the -o flag for other options
func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
device := f.Name() + ":" + f.Root()
func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.MountOptions) {
mountOpts = &fuse.MountOptions{
AllowOther: fsys.opt.AllowOther,
FsName: device,
FsName: opt.DeviceName,
Name: "rclone",
DisableXAttrs: true,
Debug: fsys.opt.DebugFUSE,
@@ -120,7 +119,7 @@ func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
if runtime.GOOS == "darwin" {
opts = append(opts,
// VolumeName sets the volume name shown in Finder.
fmt.Sprintf("volname=%s", device),
fmt.Sprintf("volname=%s", opt.VolumeName),
// NoAppleXattr makes OSXFUSE disallow extended attributes with the
// prefix "com.apple.". This disables persistent Finder state and
@@ -167,7 +166,7 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
//mOpts.Debug = mountlib.DebugFUSE
//conn := fusefs.NewFileSystemConnector(nodeFs.Root(), mOpts)
mountOpts := mountOptions(fsys, f)
mountOpts := mountOptions(fsys, f, opt)
// FIXME fill out
opts := fusefs.Options{

View File

@@ -65,10 +65,10 @@ at all, then 1 PiB is set as both the total and the free size.
To run rclone @ on Windows, you will need to
download and install [WinFsp](http://www.secfs.net/winfsp/).
[WinFsp](https://github.com/billziss-gh/winfsp) is an open-source
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
systems for Windows. It provides a FUSE emulation layer which rclone
uses in combination with [cgofuse](https://github.com/billziss-gh/cgofuse).
uses in combination with [cgofuse](https://github.com/winfsp/cgofuse).
Both of these packages are by Bill Zissimopoulos who was very helpful
during the implementation of rclone @ for Windows.
@@ -218,7 +218,7 @@ from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)).
[WinFsp.Launcher infrastructure](https://github.com/winfsp/winfsp/wiki/WinFsp-Service-Architecture)).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.

View File

@@ -40,6 +40,7 @@ type Options struct {
ExtraOptions []string
ExtraFlags []string
AttrTimeout time.Duration // how long the kernel caches attribute for
DeviceName string
VolumeName string
NoAppleDouble bool
NoAppleXattr bool
@@ -77,6 +78,17 @@ type MountPoint struct {
ErrChan <-chan error
}
// NewMountPoint makes a new mounting structure
func NewMountPoint(mount MountFn, mountPoint string, f fs.Fs, mountOpt *Options, vfsOpt *vfscommon.Options) *MountPoint {
return &MountPoint{
MountFn: mount,
MountPoint: mountPoint,
Fs: f,
MountOpt: *mountOpt,
VFSOpt: *vfsOpt,
}
}
// Global constants
const (
MaxLeafSize = 1024 // don't pass file names longer than this
@@ -125,6 +137,7 @@ func AddFlags(flagSet *pflag.FlagSet) {
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads (not supported on Windows)")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads (not supported on Windows)")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)")
flags.StringVarP(flagSet, &Opt.DeviceName, "devname", "", Opt.DeviceName, "Set the device name - default is remote:path")
// Windows and OSX
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (supported on Windows and OSX only)")
// OSX only
@@ -165,14 +178,7 @@ func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Comm
defer cmd.StartStats()()
}
mnt := &MountPoint{
MountFn: mount,
MountPoint: args[1],
Fs: cmd.NewFsDir(args),
MountOpt: Opt,
VFSOpt: vfsflags.Opt,
}
mnt := NewMountPoint(mount, args[1], cmd.NewFsDir(args), &Opt, &vfsflags.Opt)
daemon, err := mnt.Mount()
// Wait for foreground mount, if any...
@@ -235,6 +241,7 @@ func (m *MountPoint) Mount() (daemon *os.Process, err error) {
return nil, err
}
m.SetVolumeName(m.MountOpt.VolumeName)
m.SetDeviceName(m.MountOpt.DeviceName)
// Start background task if --daemon is specified
if m.MountOpt.Daemon {
@@ -250,6 +257,7 @@ func (m *MountPoint) Mount() (daemon *os.Process, err error) {
if err != nil {
return nil, fmt.Errorf("failed to mount FUSE fs: %w", err)
}
m.MountedOn = time.Now()
return nil, nil
}

View File

@@ -10,7 +10,6 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
@@ -117,23 +116,15 @@ func mountRc(ctx context.Context, in rc.Params) (out rc.Params, err error) {
return nil, err
}
VFS := vfs.New(fdst, &vfsOpt)
_, unmountFn, err := mountFn(VFS, mountPoint, &mountOpt)
mnt := NewMountPoint(mountFn, mountPoint, fdst, &mountOpt, &vfsOpt)
_, err = mnt.Mount()
if err != nil {
log.Printf("mount FAILED: %v", err)
return nil, err
}
// Add mount to list if mount point was successfully created
liveMounts[mountPoint] = &MountPoint{
MountPoint: mountPoint,
MountedOn: time.Now(),
MountFn: mountFn,
UnmountFn: unmountFn,
MountOpt: mountOpt,
VFSOpt: vfsOpt,
Fs: fdst,
}
liveMounts[mountPoint] = mnt
fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType)
return nil, nil

View File

@@ -16,11 +16,16 @@ import (
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRc(t *testing.T) {
// Disable tests under macOS and the CI since they are locking up
if runtime.GOOS == "darwin" {
testy.SkipUnreliable(t)
}
ctx := context.Background()
configfile.Install()
mount := rc.Calls.Get("mount/mount")
@@ -30,19 +35,14 @@ func TestRc(t *testing.T) {
getMountTypes := rc.Calls.Get("mount/types")
assert.NotNil(t, getMountTypes)
localDir, err := ioutil.TempDir("", "rclone-mountlib-localDir")
require.NoError(t, err)
defer func() { _ = os.RemoveAll(localDir) }()
err = ioutil.WriteFile(filepath.Join(localDir, "file.txt"), []byte("hello"), 0666)
localDir := t.TempDir()
err := ioutil.WriteFile(filepath.Join(localDir, "file.txt"), []byte("hello"), 0666)
require.NoError(t, err)
mountPoint, err := ioutil.TempDir("", "rclone-mountlib-mountPoint")
require.NoError(t, err)
mountPoint := t.TempDir()
if runtime.GOOS == "windows" {
// Windows requires the mount point not to exist
require.NoError(t, os.RemoveAll(mountPoint))
} else {
defer func() { _ = os.RemoveAll(mountPoint) }()
}
out, err := getMountTypes.Fn(ctx, nil)

View File

@@ -87,7 +87,7 @@ func (m *MountPoint) CheckAllowings() error {
// SetVolumeName with sensible default
func (m *MountPoint) SetVolumeName(vol string) {
if vol == "" {
vol = m.Fs.Name() + ":" + m.Fs.Root()
vol = fs.ConfigString(m.Fs)
}
m.MountOpt.SetVolumeName(vol)
}
@@ -102,3 +102,11 @@ func (o *Options) SetVolumeName(vol string) {
}
o.VolumeName = vol
}
// SetDeviceName with sensible default
func (m *MountPoint) SetDeviceName(dev string) {
if dev == "" {
dev = fs.ConfigString(m.Fs)
}
m.MountOpt.DeviceName = dev
}

View File

@@ -42,15 +42,31 @@ builds an in memory representation. rclone ncdu can be used during
this scanning phase and you will see it building up the directory
structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
You can interact with the user interface using key presses,
press '?' to toggle the help on and off. The supported keys are:
` + strings.Join(helpText()[1:], "\n ") + `
Listed files/directories may be prefixed by a one-character flag,
some of them combined with a description in brackets at the end of the line.
These flags have the following meaning:
e means this is an empty directory, i.e. contains no files (but
may contain empty subdirectories)
~ means this is a directory where some of the files (possibly in
subdirectories) have unknown size, and therefore the directory
size may be underestimated (and average size inaccurate, as it
is the average of the files with known sizes).
. means an error occurred while reading a subdirectory, and
therefore the directory size may be underestimated (and average
size inaccurate)
! means an error occurred while reading this directory
This is an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for
rclone remotes. It is missing lots of features at the moment
but is useful as it stands.
Note that it might take some time to delete big files/folders. The
Note that it might take some time to delete big files/directories. The
UI won't respond in the meantime since the deletion is done synchronously.
`,
Run: func(command *cobra.Command, args []string) {
@@ -283,9 +299,9 @@ func (u *UI) biggestEntry() (biggest int64) {
return
}
for i := range u.entries {
size, _, _, _, _, _ := u.d.AttrI(u.sortPerm[i])
if size > biggest {
biggest = size
attrs, _ := u.d.AttrI(u.sortPerm[i])
if attrs.Size > biggest {
biggest = attrs.Size
}
}
return
@@ -297,8 +313,8 @@ func (u *UI) hasEmptyDir() bool {
return false
}
for i := range u.entries {
_, count, isDir, _, _, _ := u.d.AttrI(u.sortPerm[i])
if isDir && count == 0 {
attrs, _ := u.d.AttrI(u.sortPerm[i])
if attrs.IsDir && attrs.Count == 0 {
return true
}
}
@@ -343,9 +359,9 @@ func (u *UI) Draw() error {
if y >= h-1 {
break
}
size, count, isDir, readable, entriesHaveErrors, err := u.d.AttrI(u.sortPerm[n])
attrs, err := u.d.AttrI(u.sortPerm[n])
fg := termbox.ColorWhite
if entriesHaveErrors {
if attrs.EntriesHaveErrors {
fg = termbox.ColorYellow
}
if err != nil {
@@ -356,15 +372,19 @@ func (u *UI) Draw() error {
fg, bg = bg, fg
}
mark := ' '
if isDir {
if attrs.IsDir {
mark = '/'
}
fileFlag := ' '
message := ""
if !readable {
if !attrs.Readable {
message = " [not read yet]"
}
if entriesHaveErrors {
if attrs.CountUnknownSize > 0 {
message = fmt.Sprintf(" [%d of %d files have unknown size, size may be underestimated]", attrs.CountUnknownSize, attrs.Count)
fileFlag = '~'
}
if attrs.EntriesHaveErrors {
message = " [some subdirectories could not be read, size may be underestimated]"
fileFlag = '.'
}
@@ -374,32 +394,29 @@ func (u *UI) Draw() error {
}
extras := ""
if u.showCounts {
ss := operations.CountStringField(count, u.humanReadable, 9) + " "
if count > 0 {
ss := operations.CountStringField(attrs.Count, u.humanReadable, 9) + " "
if attrs.Count > 0 {
extras += ss
} else {
extras += strings.Repeat(" ", len(ss))
}
}
var averageSize float64
if count > 0 {
averageSize = float64(size) / float64(count)
}
if u.showDirAverageSize {
ss := operations.SizeStringField(int64(averageSize), u.humanReadable, 9) + " "
if averageSize > 0 {
avg := attrs.AverageSize()
ss := operations.SizeStringField(int64(avg), u.humanReadable, 9) + " "
if avg > 0 {
extras += ss
} else {
extras += strings.Repeat(" ", len(ss))
}
}
if showEmptyDir {
if isDir && count == 0 && fileFlag == ' ' {
if attrs.IsDir && attrs.Count == 0 && fileFlag == ' ' {
fileFlag = 'e'
}
}
if u.showGraph {
bars := (size + perBar/2 - 1) / perBar
bars := (attrs.Size + perBar/2 - 1) / perBar
// clip if necessary - only happens during startup
if bars > 10 {
bars = 10
@@ -408,7 +425,7 @@ func (u *UI) Draw() error {
}
extras += "[" + graph[graphBars-bars:2*graphBars-bars] + "] "
}
Linef(0, y, w, fg, bg, ' ', "%c %s %s%c%s%s", fileFlag, operations.SizeStringField(size, u.humanReadable, 12), extras, mark, path.Base(entry.Remote()), message)
Linef(0, y, w, fg, bg, ' ', "%c %s %s%c%s%s", fileFlag, operations.SizeStringField(attrs.Size, u.humanReadable, 12), extras, mark, path.Base(entry.Remote()), message)
y++
}
}
@@ -559,14 +576,14 @@ type ncduSort struct {
// Less is part of sort.Interface.
func (ds *ncduSort) Less(i, j int) bool {
var iAvgSize, jAvgSize float64
isize, icount, _, _, _, _ := ds.d.AttrI(ds.sortPerm[i])
jsize, jcount, _, _, _, _ := ds.d.AttrI(ds.sortPerm[j])
iattrs, _ := ds.d.AttrI(ds.sortPerm[i])
jattrs, _ := ds.d.AttrI(ds.sortPerm[j])
iname, jname := ds.entries[ds.sortPerm[i]].Remote(), ds.entries[ds.sortPerm[j]].Remote()
if icount > 0 {
iAvgSize = float64(isize / icount)
if iattrs.Count > 0 {
iAvgSize = iattrs.AverageSize()
}
if jcount > 0 {
jAvgSize = float64(jsize / jcount)
if jattrs.Count > 0 {
jAvgSize = jattrs.AverageSize()
}
switch {
@@ -575,33 +592,33 @@ func (ds *ncduSort) Less(i, j int) bool {
case ds.u.sortByName > 0:
break
case ds.u.sortBySize < 0:
if isize != jsize {
return isize < jsize
if iattrs.Size != jattrs.Size {
return iattrs.Size < jattrs.Size
}
case ds.u.sortBySize > 0:
if isize != jsize {
return isize > jsize
if iattrs.Size != jattrs.Size {
return iattrs.Size > jattrs.Size
}
case ds.u.sortByCount < 0:
if icount != jcount {
return icount < jcount
if iattrs.Count != jattrs.Count {
return iattrs.Count < jattrs.Count
}
case ds.u.sortByCount > 0:
if icount != jcount {
return icount > jcount
if iattrs.Count != jattrs.Count {
return iattrs.Count > jattrs.Count
}
case ds.u.sortByAverageSize < 0:
if iAvgSize != jAvgSize {
return iAvgSize < jAvgSize
}
// if avgSize is equal, sort by size
return isize < jsize
return iattrs.Size < jattrs.Size
case ds.u.sortByAverageSize > 0:
if iAvgSize != jAvgSize {
return iAvgSize > jAvgSize
}
// if avgSize is equal, sort by size
return isize > jsize
return iattrs.Size > jattrs.Size
}
// if everything equal, sort by name
return iname < jname

View File

@@ -16,14 +16,42 @@ type Dir struct {
parent *Dir
path string
mu sync.Mutex
count int64
size int64
count int64
countUnknownSize int64
entries fs.DirEntries
dirs map[string]*Dir
readError error
entriesHaveErrors bool
}
// Attrs contains accumulated properties for a directory entry
//
// Files with unknown size are counted separately but also included
// in the total count. They are not included in the size, i.e. treated
// as empty files, which means the size may be underestimated.
type Attrs struct {
Size int64
Count int64
CountUnknownSize int64
IsDir bool
Readable bool
EntriesHaveErrors bool
}
// AverageSize calculates average size of files in directory
//
// If there are files with unknown size, this returns the average over
// files with known sizes, which means it may be under- or
// overestimated.
func (a *Attrs) AverageSize() float64 {
countKnownSize := a.Count - a.CountUnknownSize
if countKnownSize > 0 {
return float64(a.Size) / float64(countKnownSize)
}
return 0
}
// Parent returns the directory above this one
func (d *Dir) Parent() *Dir {
// no locking needed since these are write once in newDir()
@@ -49,7 +77,13 @@ func newDir(parent *Dir, dirPath string, entries fs.DirEntries, err error) *Dir
for _, entry := range entries {
if o, ok := entry.(fs.Object); ok {
d.count++
d.size += o.Size()
size := o.Size()
if size < 0 {
// Some backends may return -1 because size of object is not known
d.countUnknownSize++
} else {
d.size += size
}
}
}
// Set my directory entry in parent
@@ -62,8 +96,9 @@ func newDir(parent *Dir, dirPath string, entries fs.DirEntries, err error) *Dir
// Accumulate counts in parents
for ; parent != nil; parent = parent.parent {
parent.mu.Lock()
parent.count += d.count
parent.size += d.size
parent.count += d.count
parent.countUnknownSize += d.countUnknownSize
if d.readError != nil {
parent.entriesHaveErrors = true
}
@@ -91,17 +126,24 @@ func (d *Dir) Remove(i int) {
// Call with d.mu held
func (d *Dir) remove(i int) {
size := d.entries[i].Size()
countUnknownSize := int64(0)
if size < 0 {
size = 0
countUnknownSize = 1
}
count := int64(1)
subDir, ok := d.getDir(i)
if ok {
size = subDir.size
count = subDir.count
countUnknownSize = subDir.countUnknownSize
delete(d.dirs, path.Base(subDir.path))
}
d.size -= size
d.count -= count
d.countUnknownSize -= countUnknownSize
d.entries = append(d.entries[:i], d.entries[i+1:]...)
dir := d
@@ -111,6 +153,7 @@ func (d *Dir) remove(i int) {
parent.dirs[path.Base(dir.path)] = dir
parent.size -= size
parent.count -= count
parent.countUnknownSize -= countUnknownSize
dir = parent
parent.mu.Unlock()
}
@@ -151,19 +194,19 @@ func (d *Dir) Attr() (size int64, count int64) {
}
// AttrI returns the size, count and flags for the i-th directory entry
func (d *Dir) AttrI(i int) (size int64, count int64, isDir bool, readable bool, entriesHaveErrors bool, err error) {
func (d *Dir) AttrI(i int) (attrs Attrs, err error) {
d.mu.Lock()
defer d.mu.Unlock()
subDir, isDir := d.getDir(i)
if !isDir {
return d.entries[i].Size(), 0, false, true, d.entriesHaveErrors, d.readError
return Attrs{d.entries[i].Size(), 0, 0, false, true, d.entriesHaveErrors}, d.readError
}
if subDir == nil {
return 0, 0, true, false, false, nil
return Attrs{0, 0, 0, true, false, false}, nil
}
size, count = subDir.Attr()
return size, count, true, true, subDir.entriesHaveErrors, subDir.readError
size, count := subDir.Attr()
return Attrs{size, count, subDir.countUnknownSize, true, true, subDir.entriesHaveErrors}, subDir.readError
}
// Scan the Fs passed in, returning a root directory channel and an
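
As the comments above describe, files of unknown size are counted but excluded from the size total, so the average is taken over the files with known sizes only. A small self-contained illustration of that arithmetic follows; the `Attrs` type here is trimmed to the relevant fields for the example.

```go
package main

import "fmt"

// Attrs is trimmed to the fields AverageSize needs for this illustration.
type Attrs struct {
	Size             int64 // total bytes of files with known size
	Count            int64 // all files, including those of unknown size
	CountUnknownSize int64 // files whose size the backend did not report
}

func (a *Attrs) AverageSize() float64 {
	if known := a.Count - a.CountUnknownSize; known > 0 {
		return float64(a.Size) / float64(known)
	}
	return 0
}

func main() {
	a := Attrs{Size: 100, Count: 5, CountUnknownSize: 1}
	fmt.Println(a.AverageSize()) // 25: averaged over the 4 files of known size
}
```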

Binary file not shown (image, 20 KiB before, 1.4 KiB after).

Binary file not shown (image, 5.6 KiB before, 724 B after).

View File

@@ -23,6 +23,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/file"
"github.com/stretchr/testify/assert"
@@ -303,6 +304,10 @@ func (a *APIClient) request(path string, in, out interface{}, wantErr bool) {
}
func testMountAPI(t *testing.T, sockAddr string) {
// Disable tests under macOS and linux in the CI since they are locking up
if runtime.GOOS == "darwin" || runtime.GOOS == "linux" {
testy.SkipUnreliable(t)
}
if _, mountFn := mountlib.ResolveMountMethod(""); mountFn == nil {
t.Skip("Test requires working mount command")
}

View File

@@ -274,7 +274,6 @@ func (vol *Volume) mount(id string) error {
if _, err := vol.mnt.Mount(); err != nil {
return err
}
vol.mnt.MountedOn = time.Now()
vol.mountReqs[id] = nil
vol.drv.monChan <- false // ask monitor to refresh channels
return nil

View File

@@ -16,7 +16,10 @@ import (
)
// Help describes the options for the serve package
var Help = `--template allows a user to specify a custom markup template for http
var Help = `
#### Template
--template allows a user to specify a custom markup template for http
and webdav serve functions. The server exports the following markup
to be used within the template to serve pages:

View File

@@ -23,7 +23,7 @@ func AddFlagsPrefix(flagSet *pflag.FlagSet, prefix string, Opt *httplib.Options)
flags.StringVarP(flagSet, &Opt.SslKey, prefix+"key", "", Opt.SslKey, "SSL PEM Private key")
flags.StringVarP(flagSet, &Opt.ClientCA, prefix+"client-ca", "", Opt.ClientCA, "Client certificate authority to verify clients with")
flags.StringVarP(flagSet, &Opt.HtPasswd, prefix+"htpasswd", "", Opt.HtPasswd, "htpasswd file - if not provided no authentication is done")
flags.StringVarP(flagSet, &Opt.Realm, prefix+"realm", "", Opt.Realm, "realm for authentication")
flags.StringVarP(flagSet, &Opt.Realm, prefix+"realm", "", Opt.Realm, "Realm for authentication")
flags.StringVarP(flagSet, &Opt.BasicUser, prefix+"user", "", Opt.BasicUser, "User name for authentication")
flags.StringVarP(flagSet, &Opt.BasicPass, prefix+"pass", "", Opt.BasicPass, "Password for authentication")
flags.StringVarP(flagSet, &Opt.BaseURL, prefix+"baseurl", "", Opt.BaseURL, "Prefix for URLs - leave blank for root")

View File

@@ -16,6 +16,7 @@ TestFichier:
TestFTP:
TestGoogleCloudStorage:
TestHubic:
TestNetStorage:
TestOneDrive:
TestPcloud:
TestQingStor:

View File

@@ -7,9 +7,7 @@ import (
"crypto/rand"
"encoding/hex"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
"testing"
@@ -113,14 +111,7 @@ func TestResticHandler(t *testing.T) {
}
// setup rclone with a local backend in a temporary directory
tempdir, err := ioutil.TempDir("", "rclone-restic-test-")
require.NoError(t, err)
// make sure the tempdir is properly removed
defer func() {
err := os.RemoveAll(tempdir)
require.NoError(t, err)
}()
tempdir := t.TempDir()
// globally set append-only mode
prev := appendOnly

View File

@@ -7,9 +7,7 @@ import (
"context"
"crypto/rand"
"io"
"io/ioutil"
"net/http"
"os"
"strings"
"testing"
@@ -35,14 +33,7 @@ func TestResticPrivateRepositories(t *testing.T) {
require.NoError(t, err)
// setup rclone with a local backend in a temporary directory
tempdir, err := ioutil.TempDir("", "rclone-restic-test-")
require.NoError(t, err)
// make sure the tempdir is properly removed
defer func() {
err := os.RemoveAll(tempdir)
require.NoError(t, err)
}()
tempdir := t.TempDir()
// globally set private-repos mode & test user
prev := privateRepos

View File

@@ -24,26 +24,51 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "size remote:path",
Short: `Prints the total size and number of objects in remote:path.`,
Long: `
Counts objects in the path and calculates the total size. Prints the
result to standard output.
By default the output shows values in both human-readable format and
as raw numbers (the global option
` + "`--human-readable`" + ` is not considered). Use option ` + "`--json`" + `
to format output as JSON instead.
Recurses by default, use ` + "`--max-depth 1`" + ` to stop the
recursion.
Some backends do not always provide file sizes, see for example
[Google Photos](/googlephotos/#size) and
[Google Drive](/drive/#limitations-of-google-docs).
Rclone will then show a notice in the log indicating how many such
files were encountered, and count them in as empty files in the output
of the size command.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
fsrc := cmd.NewFsSrc(args)
cmd.Run(false, false, command, func() error {
var err error
var results struct {
Count int64 `json:"count"`
Bytes int64 `json:"bytes"`
Count int64 `json:"count"`
Bytes int64 `json:"bytes"`
Sizeless int64 `json:"sizeless"`
}
results.Count, results.Bytes, err = operations.Count(context.Background(), fsrc)
results.Count, results.Bytes, results.Sizeless, err = operations.Count(context.Background(), fsrc)
if err != nil {
return err
}
if results.Sizeless > 0 {
fs.Logf(fsrc, "Size may be underestimated due to %d objects with unknown size", results.Sizeless)
}
if jsonOutput {
return json.NewEncoder(os.Stdout).Encode(results)
}
fmt.Printf("Total objects: %s (%d)\n", fs.CountSuffix(results.Count), results.Count)
fmt.Printf("Total size: %s (%d Byte)\n", fs.SizeSuffix(results.Bytes).ByteUnit(), results.Bytes)
if results.Sizeless > 0 {
fmt.Printf("Total objects with unknown size: %s (%d)\n", fs.CountSuffix(results.Sizeless), results.Sizeless)
}
return nil
})
},

View File

@@ -8,6 +8,7 @@ exec rclone --check-normalization=true --check-control=true --check-length=true
TestDrive:testInfo \
TestDropbox:testInfo \
TestGoogleCloudStorage:rclone-testinfo \
TestNetStorage:testInfo \
TestOneDrive:testInfo \
TestS3:rclone-testinfo \
TestSftp:testInfo \

View File

@@ -5,6 +5,7 @@ package makefiles
import (
"io"
"log"
"math"
"math/rand"
"os"
"path/filepath"
@@ -16,7 +17,9 @@ import (
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
var (
@@ -29,37 +32,51 @@ var (
minFileNameLength = 4
maxFileNameLength = 12
seed = int64(1)
zero = false
sparse = false
ascii = false
pattern = false
chargen = false
// Globals
randSource *rand.Rand
source io.Reader
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
)
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.IntVarP(cmdFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(cmdFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(cmdFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(cmdFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
flags.Int64VarP(cmdFlags, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
test.Command.AddCommand(makefilesCmd)
makefilesFlags := makefilesCmd.Flags()
flags.IntVarP(makefilesFlags, &numberOfFiles, "files", "", numberOfFiles, "Number of files to create")
flags.IntVarP(makefilesFlags, &averageFilesPerDirectory, "files-per-directory", "", averageFilesPerDirectory, "Average number of files per directory")
flags.IntVarP(makefilesFlags, &maxDepth, "max-depth", "", maxDepth, "Maximum depth of directory hierarchy")
flags.FVarP(makefilesFlags, &minFileSize, "min-file-size", "", "Minimum size of file to create")
flags.FVarP(makefilesFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(makefilesFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(makefilesFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
test.Command.AddCommand(makefileCmd)
makefileFlags := makefileCmd.Flags()
// Common flags to makefiles and makefile
for _, f := range []*pflag.FlagSet{makefilesFlags, makefileFlags} {
flags.Int64VarP(f, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
flags.BoolVarP(f, &zero, "zero", "", zero, "Fill files with ASCII 0x00")
flags.BoolVarP(f, &sparse, "sparse", "", sparse, "Make the files sparse (appear to be filled with ASCII 0x00)")
flags.BoolVarP(f, &ascii, "ascii", "", ascii, "Fill files with random ASCII printable bytes only")
flags.BoolVarP(f, &pattern, "pattern", "", pattern, "Fill files with a periodic pattern")
flags.BoolVarP(f, &chargen, "chargen", "", chargen, "Fill files with an ASCII chargen pattern")
}
}
var commandDefinition = &cobra.Command{
var makefilesCmd = &cobra.Command{
Use: "makefiles <dir>",
Short: `Make a random file hierarchy in a directory`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
commonInit()
outputDirectory := args[0]
directoriesToCreate = numberOfFiles / averageFilesPerDirectory
averageSize := (minFileSize + maxFileSize) / 2
@@ -73,13 +90,130 @@ var commandDefinition = &cobra.Command{
totalBytes := int64(0)
for i := 0; i < numberOfFiles; i++ {
dir := dirs[randSource.Intn(len(dirs))]
totalBytes += writeFile(dir, fileName())
size := int64(minFileSize)
if maxFileSize > minFileSize {
size += randSource.Int63n(int64(maxFileSize - minFileSize))
}
writeFile(dir, fileName(), size)
totalBytes += size
}
dt := time.Since(start)
fs.Logf(nil, "Written %viB in %v at %viB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
var makefileCmd = &cobra.Command{
Use: "makefile <size> [<file>]+ [flags]",
Short: `Make files with random contents of the size given`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1e6, command, args)
commonInit()
var size fs.SizeSuffix
err := size.Set(args[0])
if err != nil {
log.Fatalf("Failed to parse size %q: %v", args[0], err)
}
start := time.Now()
fs.Logf(nil, "Creating %d files of size %v.", len(args[1:]), size)
totalBytes := int64(0)
for _, filePath := range args[1:] {
dir := filepath.Dir(filePath)
name := filepath.Base(filePath)
writeFile(dir, name, int64(size))
totalBytes += int64(size)
}
dt := time.Since(start)
fs.Logf(nil, "Written %vB in %v at %vB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
func bool2int(b bool) int {
if b {
return 1
}
return 0
}
// common initialisation for makefiles and makefile
func commonInit() {
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
if bool2int(zero)+bool2int(sparse)+bool2int(ascii)+bool2int(pattern)+bool2int(chargen) > 1 {
log.Fatal("Can only supply one of --zero, --sparse, --ascii, --pattern or --chargen")
}
switch {
case zero, sparse:
source = zeroReader{}
case ascii:
source = asciiReader{}
case pattern:
source = readers.NewPatternReader(math.MaxInt64)
case chargen:
source = &chargenReader{}
default:
source = randSource
}
if minFileSize > maxFileSize {
maxFileSize = minFileSize
}
}
type zeroReader struct{}
// Read a chunk of zeroes
func (zeroReader) Read(p []byte) (n int, err error) {
for i := range p {
p[i] = 0
}
return len(p), nil
}
type asciiReader struct{}
// Read a chunk of printable ASCII characters
func (asciiReader) Read(p []byte) (n int, err error) {
n, err = randSource.Read(p)
for i := range p[:n] {
p[i] = (p[i] % (0x7F - 0x20)) + 0x20
}
return n, err
}
type chargenReader struct {
start byte // offset from startChar to start line with
written byte // chars in line so far
}
// Read a chunk of printable ASCII characters in chargen format
func (r *chargenReader) Read(p []byte) (n int, err error) {
const (
startChar = 0x20 // ' '
endChar = 0x7E // '~' inclusive
charsPerLine = 72
)
for i := range p {
if r.written >= charsPerLine {
r.start++
if r.start > endChar-startChar {
r.start = 0
}
p[i] = '\n'
r.written = 0
} else {
c := r.start + r.written + startChar
if c > endChar {
c -= endChar - startChar + 1
}
p[i] = c
r.written++
}
}
return len(p), err
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
@@ -134,7 +268,7 @@ func (d *dir) list(path string, output []string) []string {
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) int64 {
func writeFile(dir, name string, size int64) {
err := file.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
@@ -144,8 +278,11 @@ func writeFile(dir, name string) int64 {
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := randSource.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, randSource, size)
if sparse {
err = fd.Truncate(size)
} else {
_, err = io.CopyN(fd, source, size)
}
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
@@ -154,5 +291,4 @@ func writeFile(dir, name string) int64 {
log.Fatalf("Failed to close file %q: %v", path, err)
}
fs.Infof(path, "Written file size %v", fs.SizeSuffix(size))
return size
}
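
The --sparse branch above relies on the fact that extending a file with Truncate gives it apparent size without writing any data blocks. A minimal standalone sketch of that technique is shown below; the file name and size are arbitrary choices for the example.

```go
package main

import (
	"log"
	"os"
)

func main() {
	fd, err := os.Create("sparse.bin")
	if err != nil {
		log.Fatal(err)
	}
	// Extend to 1 GiB of apparent size; on filesystems that support sparse
	// files this allocates (almost) no space on disk.
	if err := fd.Truncate(1 << 30); err != nil {
		log.Fatal(err)
	}
	if err := fd.Close(); err != nil {
		log.Fatal(err)
	}
}
```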

View File

@@ -25,7 +25,7 @@ var commandDefinition = &cobra.Command{
cmd.Run(false, false, command, func() error {
ctx := context.Background()
ci := fs.GetConfig(context.Background())
objects, _, err := operations.Count(ctx, fsrc)
objects, _, _, err := operations.Count(ctx, fsrc)
if err != nil {
return err
}

View File

@@ -5,11 +5,13 @@ import (
"context"
"errors"
"fmt"
"log"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
@@ -63,13 +65,32 @@ then add the ` + "`--localtime`" + ` flag.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
f, fileName := cmd.NewFsFile(args[0])
f, remote := newFsDst(args)
cmd.Run(true, false, command, func() error {
return Touch(context.Background(), f, fileName)
return Touch(context.Background(), f, remote)
})
},
}
// newFsDst creates a new dst fs from the arguments.
//
// The returned fs will never point to a file. It will point to the
// parent directory of the specified path, and is returned together with
// the basename of the file or directory, except if the argument is only a
// remote name. Similar to cmd.NewFsDstFile, but without raising a fatal
// error when the name of the file or directory is empty (e.g. "remote:" or "remote:path/").
func newFsDst(args []string) (f fs.Fs, remote string) {
root, remote, err := fspath.Split(args[0])
if err != nil {
log.Fatalf("Parsing %q failed: %v", args[0], err)
}
if root == "" {
root = "."
}
f = cmd.NewFsDir([]string{root})
return f, remote
}
// parseTimeArgument parses a timestamp string according to specific layouts
func parseTimeArgument(timeString string) (time.Time, error) {
layout := defaultLayout
@@ -107,47 +128,51 @@ func createEmptyObject(ctx context.Context, remote string, modTime time.Time, f
}
// Touch create new file or change file modification time.
func Touch(ctx context.Context, f fs.Fs, fileName string) error {
func Touch(ctx context.Context, f fs.Fs, remote string) error {
t, err := timeOfTouch()
if err != nil {
return err
}
fs.Debugf(nil, "Touch time %v", t)
file, err := f.NewObject(ctx, fileName)
file, err := f.NewObject(ctx, remote)
if err != nil {
if errors.Is(err, fs.ErrorObjectNotFound) {
// Touch single non-existent file
// Touching non-existent path, possibly creating it as a new file
if remote == "" {
fs.Logf(f, "Not touching empty directory")
return nil
}
if notCreateNewFile {
fs.Logf(f, "Not touching non-existent file due to --no-create")
return nil
}
if recursive {
// For consistency, --recursive never creates new files.
fs.Logf(f, "Not touching non-existent file due to --recursive")
return nil
}
if operations.SkipDestructive(ctx, f, "touch (create)") {
return nil
}
fs.Debugf(f, "Touching (creating)")
if err = createEmptyObject(ctx, fileName, t, f); err != nil {
fs.Debugf(f, "Touching (creating) %q", remote)
if err = createEmptyObject(ctx, remote, t, f); err != nil {
return fmt.Errorf("failed to touch (create): %w", err)
}
}
if errors.Is(err, fs.ErrorIsDir) {
// Touching existing directory
if recursive {
// Touch existing directory, recursive
fs.Debugf(nil, "Touching files in directory recursively")
return operations.TouchDir(ctx, f, t, true)
fs.Debugf(f, "Touching recursively files in directory %q", remote)
return operations.TouchDir(ctx, f, remote, t, true)
}
// Touch existing directory without recursing
fs.Debugf(nil, "Touching files in directory non-recursively")
return operations.TouchDir(ctx, f, t, false)
fs.Debugf(f, "Touching non-recursively files in directory %q", remote)
return operations.TouchDir(ctx, f, remote, t, false)
}
return err
}
// Touch single existing file
if !operations.SkipDestructive(ctx, fileName, "touch") {
fs.Debugf(f, "Touching %q", fileName)
if !operations.SkipDestructive(ctx, remote, "touch") {
fs.Debugf(f, "Touching %q", remote)
err = file.SetModTime(ctx, t)
if err != nil {
return fmt.Errorf("failed to touch: %w", err)

View File

@@ -113,6 +113,15 @@ func TestTouchCreateMultipleDirAndFile(t *testing.T) {
fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{file1}, []string{"a", "a/b"}, fs.ModTimeNotSupported)
}
func TestTouchEmptyName(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()
err := Touch(context.Background(), r.Fremote, "")
require.NoError(t, err)
fstest.CheckListingWithPrecision(t, r.Fremote, []fstest.Item{}, []string{}, fs.ModTimeNotSupported)
}
func TestTouchEmptyDir(t *testing.T) {
r := fstest.NewRun(t)
defer r.Finalise()

View File

@@ -43,7 +43,6 @@ func init() {
flags.StringVarP(cmdFlags, &outFileName, "output", "o", "", "Output to file instead of stdout")
// Files
flags.BoolVarP(cmdFlags, &opts.ByteSize, "size", "s", false, "Print the size in bytes of each file.")
flags.BoolVarP(cmdFlags, &opts.UnitSize, "human", "", false, "Print the size in a more human readable way.")
flags.BoolVarP(cmdFlags, &opts.FileMode, "protections", "p", false, "Print the protections for each file.")
// flags.BoolVarP(cmdFlags, &opts.ShowUid, "uid", "", false, "Displays file owner or UID number.")
// flags.BoolVarP(cmdFlags, &opts.ShowGid, "gid", "", false, "Displays file group owner or GID number.")

View File

@@ -102,8 +102,7 @@ var envInitial []string
// sets testConfig to testFolder/rclone.config.
func createTestEnvironment(t *testing.T) {
//Set temporary folder for config and test data
tempFolder, err := ioutil.TempDir("", "rclone_cmdtest_")
require.NoError(t, err)
tempFolder := t.TempDir()
testFolder = filepath.ToSlash(tempFolder)
// Set path to temporary config file

View File

@@ -105,15 +105,19 @@ WebDAV or S3, that work out of the box.)
{{< provider_list >}}
{{< provider name="1Fichier" home="https://1fichier.com/" config="/fichier/" start="true">}}
{{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Amazon Drive" home="https://www.amazon.com/clouddrive" config="/amazonclouddrive/" note="#status">}}
{{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
{{< provider name="Backblaze B2" home="https://www.backblaze.com/b2/cloud-storage.html" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
{{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/sftp/#c14" >}}
{{< provider name="C14" home="https://www.online.net/en/storage/c14-cold-storage" config="/s3/#scaleway" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
@@ -124,6 +128,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="HDFS" home="https://hadoop.apache.org/" config="/hdfs/" >}}
{{< provider name="HTTP" home="https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol" config="/http/" >}}
{{< provider name="Hubic" home="https://hubic.com/" config="/hubic/" >}}
{{< provider name="Internet Archive" home="https://archive.org/" config="/internetarchive/" >}}
{{< provider name="Jottacloud" home="https://www.jottacloud.com/en/" config="/jottacloud/" >}}
{{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
{{< provider name="Koofr" home="https://koofr.eu/" config="/koofr/" >}}
@@ -148,12 +153,13 @@ WebDAV or S3, that work out of the box.)
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net" >}}
{{< provider name="Scaleway" home="https://www.scaleway.com/object-storage/" config="/s3/#scaleway" >}}
{{< provider name="Seafile" home="https://www.seafile.com/" config="/seafile/" >}}
{{< provider name="Seagate Lyve Cloud" home="https://www.seagate.com/gb/en/services/cloud/storage/" config="/s3/#lyve" >}}
{{< provider name="SeaweedFS" home="https://github.com/chrislusf/seaweedfs/" config="/s3/#seaweedfs" >}}
{{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}}
{{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}}
{{< provider name="StackPath" home="https://www.stackpath.com/products/object-storage/" config="/s3/#stackpath" >}}
{{< provider name="Storj" home="https://storj.io/" config="/storj/" >}}
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}

View File

@@ -33,7 +33,7 @@ First run:
This will guide you through an interactive setup process:
```
No remotes found - make a new one
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -99,9 +99,11 @@ Remote or path to alias.
Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
Properties:
- Config: remote
- Env Var: RCLONE_ALIAS_REMOTE
- Type: string
- Default: ""
- Required: true
{{< rem autogenerated options stop >}}

View File

@@ -53,7 +53,7 @@ Here is an example of how to make a remote called `remote`. First run:
This will guide you through an interactive setup process:
```
No remotes found - make a new one
No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -168,10 +168,12 @@ OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_ACD_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --acd-client-secret
@@ -179,10 +181,12 @@ OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_ACD_CLIENT_SECRET
- Type: string
- Default: ""
- Required: false
### Advanced options
@@ -192,10 +196,12 @@ Here are the advanced options specific to amazon cloud drive (Amazon Drive).
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_ACD_TOKEN
- Type: string
- Default: ""
- Required: false
#### --acd-auth-url
@@ -203,10 +209,12 @@ Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_ACD_AUTH_URL
- Type: string
- Default: ""
- Required: false
#### --acd-token-url
@@ -214,19 +222,23 @@ Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_ACD_TOKEN_URL
- Type: string
- Default: ""
- Required: false
#### --acd-checkpoint
Checkpoint for internal polling (debug).
Properties:
- Config: checkpoint
- Env Var: RCLONE_ACD_CHECKPOINT
- Type: string
- Default: ""
- Required: false
#### --acd-upload-wait-per-gb
@@ -252,6 +264,8 @@ of big files for a range of file sizes.
Upload with the "-v" flag to see more info about what rclone is doing
in this situation.
Properties:
- Config: upload_wait_per_gb
- Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB
- Type: Duration
@@ -270,6 +284,8 @@ To download files above this threshold, rclone requests a "tempLink"
which downloads the file through a temporary URL directly from the
underlying S3 storage.
Properties:
- Config: templink_threshold
- Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD
- Type: SizeSuffix
@@ -277,10 +293,12 @@ underlying S3 storage.
#### --acd-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
- Type: MultiEncoder


@@ -550,3 +550,36 @@ put them back in again.` >}}
* Fredric Arklid <fredric.arklid@consid.se>
* Andy Jackson <Andrew.Jackson@bl.uk>
* Sinan Tan <i@tinytangent.com>
* deinferno <14363193+deinferno@users.noreply.github.com>
* rsapkf <rsapkfff@pm.me>
* Will Holtz <wholtz@gmail.com>
* GGG KILLER <gggkiller2@gmail.com>
* Logeshwaran Murugesan <logeshwaran@testpress.in>
* Lu Wang <coolwanglu@gmail.com>
* Bumsu Hyeon <ksitht@gmail.com>
* Shmz Ozggrn <98463324+ShmzOzggrn@users.noreply.github.com>
* Kim <kim@jotta.no>
* Niels van de Weem <n.van.de.weem@smile.nl>
* Koopa <codingkoopa@gmail.com>
* Yunhai Luo <yunhai-luo@hotmail.com>
* Charlie Jiang <w@chariri.moe>
* Alain Nussbaumer <alain.nussbaumer@alleluia.ch>
* Vanessasaurus <814322+vsoch@users.noreply.github.com>
* Isaac Levy <isaac.r.levy@gmail.com>
* Gourav T <workflowautomation@protonmail.com>
* Paulo Martins <paulo.pontes.m@gmail.com>
* viveknathani <viveknathani2402@gmail.com>
* Eng Zer Jun <engzerjun@gmail.com>
* Abhiraj <abhiraj.official15@gmail.com>
* Márton Elek <elek@apache.org> <elek@users.noreply.github.com>
* Vincent Murphy <vdm@vdm.ie>
* ctrl-q <34975747+ctrl-q@users.noreply.github.com>
* Nil Alexandrov <nalexand@akamai.com>
* GuoXingbin <101376330+guoxingbin@users.noreply.github.com>
* Berkan Teber <berkan@berkanteber.com>
* Tobias Klauser <tklauser@distanz.ch>
* KARBOWSKI Piotr <piotr.karbowski@gmail.com>
* GH <geeklihui@foxmail.com>
* rafma0 <int.main@gmail.com>
* Adrien Rey-Jarthon <jobs@adrienjarthon.com>
* Nick Gooding <73336146+nickgooding@users.noreply.github.com>


@@ -19,7 +19,7 @@ configuration. For a remote called `remote`. First run:
This will guide you through an interactive setup process:
```
No remotes found - make a new one
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -81,6 +81,14 @@ key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no overhead to using it.
### Performance
When uploading large files, increasing the value of
`--azureblob-upload-concurrency` will increase performance at the cost
of using more memory. The default of 16 is set quite conservatively to
use less memory. It may be necessary to raise it to 64 or higher to
fully utilize a 1 GBit/s link with a single file transfer.
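For example, a single large transfer over a fast link might be run as below; the remote and container names are placeholders, and the concurrency needed to saturate a given link will vary:

```
rclone copy /data/backup.img azblob:container \
  --transfers 1 --azureblob-upload-concurrency 64 -P
```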
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
@@ -158,10 +166,12 @@ Storage Account Name.
Leave blank to use SAS URL or Emulator.
Properties:
- Config: account
- Env Var: RCLONE_AZUREBLOB_ACCOUNT
- Type: string
- Default: ""
- Required: false
#### --azureblob-service-principal-file
@@ -177,10 +187,12 @@ Leave blank normally. Needed only if you want to use a service principal instead
See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
Properties:
- Config: service_principal_file
- Env Var: RCLONE_AZUREBLOB_SERVICE_PRINCIPAL_FILE
- Type: string
- Default: ""
- Required: false
#### --azureblob-key
@@ -188,10 +200,12 @@ Storage Account Key.
Leave blank to use SAS URL or Emulator.
Properties:
- Config: key
- Env Var: RCLONE_AZUREBLOB_KEY
- Type: string
- Default: ""
- Required: false
#### --azureblob-sas-url
@@ -199,10 +213,12 @@ SAS URL for container level access only.
Leave blank if using account/key or Emulator.
Properties:
- Config: sas_url
- Env Var: RCLONE_AZUREBLOB_SAS_URL
- Type: string
- Default: ""
- Required: false
#### --azureblob-use-msi
@@ -217,6 +233,8 @@ the user-assigned identity will be used by default. If the resource has multiple
identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
msi_client_id, or msi_mi_res_id parameters.
Properties:
- Config: use_msi
- Env Var: RCLONE_AZUREBLOB_USE_MSI
- Type: bool
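A minimal sketch of an MSI-based remote in `rclone.conf`, assuming a hypothetical remote name and a resource with several user-assigned identities (so exactly one of the identity parameters is set):

```
[azblob]
type = azureblob
account = mystorageaccount
use_msi = true
msi_client_id = 00000000-0000-0000-0000-000000000000
```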
@@ -228,6 +246,8 @@ Uses local storage emulator if provided as 'true'.
Leave blank if using real azure storage endpoint.
Properties:
- Config: use_emulator
- Env Var: RCLONE_AZUREBLOB_USE_EMULATOR
- Type: bool
@@ -243,10 +263,12 @@ Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_mi_res_id specified.
Properties:
- Config: msi_object_id
- Env Var: RCLONE_AZUREBLOB_MSI_OBJECT_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-msi-client-id
@@ -254,10 +276,12 @@ Object ID of the user-assigned MSI to use, if any.
Leave blank if msi_object_id or msi_mi_res_id specified.
Properties:
- Config: msi_client_id
- Env Var: RCLONE_AZUREBLOB_MSI_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-msi-mi-res-id
@@ -265,10 +289,12 @@ Azure resource ID of the user-assigned MSI to use, if any.
Leave blank if msi_client_id or msi_object_id specified.
Properties:
- Config: msi_mi_res_id
- Env Var: RCLONE_AZUREBLOB_MSI_MI_RES_ID
- Type: string
- Default: ""
- Required: false
#### --azureblob-endpoint
@@ -276,32 +302,65 @@ Endpoint for the service.
Leave blank normally.
Properties:
- Config: endpoint
- Env Var: RCLONE_AZUREBLOB_ENDPOINT
- Type: string
- Default: ""
- Required: false
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256 MiB) (deprecated).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
- Type: string
- Default: ""
- Required: false
#### --azureblob-chunk-size
Upload chunk size (<= 100 MiB).
Upload chunk size.
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.
Properties:
- Config: chunk_size
- Env Var: RCLONE_AZUREBLOB_CHUNK_SIZE
- Type: SizeSuffix
- Default: 4Mi
#### --azureblob-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently.
If you are uploading small numbers of large files over high-speed
links and these uploads do not fully utilize your bandwidth, then
increasing this may help to speed up the transfers.
In tests, upload speed increases almost linearly with upload
concurrency. For example to fill a gigabit pipe it may be necessary to
raise this to 64. Note that this will use more memory.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--azureblob-upload-concurrency" chunks stored at once
in memory.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
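As a rough worked example of the memory note above (a worst-case estimate, assuming every transfer is a multipart upload at the default chunk size):

```
"--transfers" * "--azureblob-upload-concurrency" * "--azureblob-chunk-size"
  = 4 * 16 * 4 MiB ≈ 256 MiB of buffered chunks
```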
#### --azureblob-list-chunk
Size of blob list.
@@ -314,6 +373,8 @@ minutes per megabyte on average, it will time out (
). This can be used to limit the number of blob items to return, to
avoid the time out.
Properties:
- Config: list_chunk
- Env Var: RCLONE_AZUREBLOB_LIST_CHUNK
- Type: int
@@ -334,10 +395,12 @@ If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".
Properties:
- Config: access_tier
- Env Var: RCLONE_AZUREBLOB_ACCESS_TIER
- Type: string
- Default: ""
- Required: false
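For instance, an archived blob can be moved back to a readable tier before it is copied out; the remote, container and path here are placeholders, and note that rehydration from the archive tier is not instantaneous:

```
rclone settier Hot azblob:container/path/to/file
rclone copy azblob:container/path/to/file /local/dir
```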
#### --azureblob-archive-tier-delete
@@ -356,6 +419,8 @@ replacement. This has the potential for data loss if the upload fails
archive tier blobs early may be chargeable.
Properties:
- Config: archive_tier_delete
- Env Var: RCLONE_AZUREBLOB_ARCHIVE_TIER_DELETE
- Type: bool
@@ -370,6 +435,8 @@ uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Properties:
- Config: disable_checksum
- Env Var: RCLONE_AZUREBLOB_DISABLE_CHECKSUM
- Type: bool
@@ -382,6 +449,8 @@ How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_FLUSH_TIME
- Type: Duration
@@ -391,6 +460,8 @@ This option controls how often unused buffers will be removed from the pool.
Whether to use mmap buffers in internal memory pool.
Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_AZUREBLOB_MEMORY_POOL_USE_MMAP
- Type: bool
@@ -398,10 +469,12 @@ Whether to use mmap buffers in internal memory pool.
#### --azureblob-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
- Type: MultiEncoder
@@ -411,10 +484,12 @@ See the [encoding section in the overview](/overview/#encoding) for more info.
Public access level of a container: blob or container.
Properties:
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
- Default: ""
- Required: false
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request.
@@ -428,6 +503,8 @@ Public access level of a container: blob or container.
If set, do not do HEAD before GET when getting objects.
Properties:
- Config: no_head_object
- Env Var: RCLONE_AZUREBLOB_NO_HEAD_OBJECT
- Type: bool


@@ -23,7 +23,7 @@ recommended method. See below for further details on generating and using
an Application Key.
```
No remotes found - make a new one
No remotes found, make a new one?
n) New remote
q) Quit config
n/q> n
@@ -329,24 +329,30 @@ Here are the standard options specific to b2 (Backblaze B2).
Account ID or Application Key ID.
Properties:
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
- Default: ""
- Required: true
#### --b2-key
Application Key.
Properties:
- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
- Default: ""
- Required: true
#### --b2-hard-delete
Permanently delete files on remote removal, otherwise hide files.
Properties:
- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
- Type: bool
@@ -362,10 +368,12 @@ Endpoint for the service.
Leave blank normally.
Properties:
- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
- Default: ""
- Required: false
#### --b2-test-mode
@@ -381,10 +389,12 @@ below will cause b2 to return specific errors:
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).
Properties:
- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
- Default: ""
- Required: false
#### --b2-versions
@@ -393,6 +403,8 @@ Include old versions in directory listings.
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
Properties:
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
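For example, to inspect old versions read-only (the bucket name is a placeholder):

```
rclone ls b2:my-bucket --b2-versions
```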
@@ -406,6 +418,8 @@ Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657 GiB (== 5 GB).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
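The 4.657 GiB figure is just this 5 GB limit restated in binary units:

```
5 GB = 5,000,000,000 bytes
5,000,000,000 / 2^30 ≈ 4.657 GiB
```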
@@ -420,6 +434,8 @@ copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GiB.
Properties:
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
might be a maximum of "--transfers" chunks in progress at once.
5,000,000 Bytes is the minimum size.
Properties:
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
@@ -450,6 +468,8 @@ uploading it so it can add it to metadata on the object. This is great
for data integrity checking but can cause long delays for large files
to start uploading.
Properties:
- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
@@ -466,10 +486,20 @@ If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.
The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".
Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")
Properties:
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""
- Required: false
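A sketch of how the rewritten download URL is used, reusing the placeholder domain from the example above and a hypothetical bucket:

```
rclone copy b2:my-bucket/photos /local/photos \
  --b2-download-url https://mysubdomain.mydomain.tld
# objects are then fetched from
#   https://mysubdomain.mydomain.tld/file/my-bucket/photos/<name>
```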
#### --b2-download-auth-duration
@@ -478,6 +508,8 @@ Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.
Properties:
- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
@@ -489,6 +521,8 @@ How often internal memory buffer pools will be flushed.
Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations.
This option controls how often unused buffers will be removed from the pool.
Properties:
- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
- Type: Duration
@@ -498,6 +532,8 @@ This option controls how often unused buffers will be removed from the pool.
Whether to use mmap buffers in internal memory pool.
Properties:
- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
- Type: bool
@@ -505,10 +541,12 @@ Whether to use mmap buffers in internal memory pool.
#### --b2-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder


@@ -22,7 +22,7 @@ Here is an example of how to make a remote called `remote`. First run:
This will guide you through an interactive setup process:
```
No remotes found - make a new one
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -275,10 +275,12 @@ OAuth Client Id.
Leave blank normally.
Properties:
- Config: client_id
- Env Var: RCLONE_BOX_CLIENT_ID
- Type: string
- Default: ""
- Required: false
#### --box-client-secret
@@ -286,10 +288,12 @@ OAuth Client Secret.
Leave blank normally.
Properties:
- Config: client_secret
- Env Var: RCLONE_BOX_CLIENT_SECRET
- Type: string
- Default: ""
- Required: false
#### --box-box-config-file
@@ -299,10 +303,12 @@ Leave blank normally.
Leading `~` will be expanded in the file name as will environment variables such as `${RCLONE_CONFIG_DIR}`.
Properties:
- Config: box_config_file
- Env Var: RCLONE_BOX_BOX_CONFIG_FILE
- Type: string
- Default: ""
- Required: false
#### --box-access-token
@@ -310,15 +316,19 @@ Box App Primary Access Token
Leave blank normally.
Properties:
- Config: access_token
- Env Var: RCLONE_BOX_ACCESS_TOKEN
- Type: string
- Default: ""
- Required: false
#### --box-box-sub-type
Properties:
- Config: box_sub_type
- Env Var: RCLONE_BOX_BOX_SUB_TYPE
- Type: string
@@ -337,10 +347,12 @@ Here are the advanced options specific to box (Box).
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_BOX_TOKEN
- Type: string
- Default: ""
- Required: false
#### --box-auth-url
@@ -348,10 +360,12 @@ Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_BOX_AUTH_URL
- Type: string
- Default: ""
- Required: false
#### --box-token-url
@@ -359,15 +373,19 @@ Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_BOX_TOKEN_URL
- Type: string
- Default: ""
- Required: false
#### --box-root-folder-id
Fill in for rclone to use a non root folder as its starting point.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_BOX_ROOT_FOLDER_ID
- Type: string
@@ -377,6 +395,8 @@ Fill in for rclone to use a non root folder as its starting point.
Cutoff for switching to multipart upload (>= 50 MiB).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF
- Type: SizeSuffix
@@ -386,6 +406,8 @@ Cutoff for switching to multipart upload (>= 50 MiB).
Max number of times to try committing a multipart file.
Properties:
- Config: commit_retries
- Env Var: RCLONE_BOX_COMMIT_RETRIES
- Type: int
@@ -395,6 +417,8 @@ Max number of times to try committing a multipart file.
Size of listing chunk 1-1000.
Properties:
- Config: list_chunk
- Env Var: RCLONE_BOX_LIST_CHUNK
- Type: int
@@ -404,17 +428,21 @@ Size of listing chunk 1-1000.
Only show items owned by the login (email address) passed in.
Properties:
- Config: owned_by
- Env Var: RCLONE_BOX_OWNED_BY
- Type: string
- Default: ""
- Required: false
#### --box-encoding
This sets the encoding for the backend.
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- Type: MultiEncoder
