mirror of https://github.com/rclone/rclone.git synced 2026-02-01 17:23:39 +00:00

Compare commits


108 Commits

Author SHA1 Message Date
Nick Craig-Wood
1dc58a924c local: add debugging for link problem
See: https://forum.rclone.org/t/problem-with-symlinks-and-links/23840/13
2021-04-29 14:38:39 +01:00
Nick Craig-Wood
34627c5c7e librclone: update docs for merge #4891 2021-04-28 20:42:00 +01:00
Nick Craig-Wood
e33303df94 librclone: add basic Python bindings with tests #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
665eceaec3 librclone: catch panics at the language change boundary #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
ba09ee18bb librclone: factor into gomobile and internal implementation #4891
This was needed because gomobile can't use a main package whereas this
is required to make a normal shared C library.
2021-04-28 16:55:08 +01:00
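A sketch of the constraint described here (all names illustrative, not the real librclone layout): gomobile can only bind ordinary packages, while a C shared library needs -buildmode=c-shared and therefore package main, so a thin main package wraps the shared implementation:

    // Package main exists only because -buildmode=c-shared requires it;
    // gomobile cannot bind a main package, hence the factoring above.
    package main

    import "C"

    // rpc stands in for the shared internal implementation package that
    // both the gomobile wrapper and this C wrapper would call.
    func rpc(method, input string) (output string, status int) {
        return `{"result":"ok"}`, 200
    }

    //export RcloneRPC
    func RcloneRPC(method, input *C.char) *C.char {
        out, _ := rpc(C.GoString(method), C.GoString(input))
        return C.CString(out) // the C caller must free this
    }

    func main() {} // required for c-shared builds, never called
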
Nick Craig-Wood
62bf63d36f librclone: add tests for build and execute them in the actions #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
f38c262471 librclone: change interface for C code and add Mobile interface #4891
This changes the interface for the C code to return a struct on the
stack that is defined in the code rather than one which is defined by
the cgo compiler. This is more future-proof and in line with the
gomobile interface.

This also adds a gomobile interface RcloneMobileRPC which uses generic
go types conforming to the gobind restrictions.

It also fixes up initialisation errors.
2021-04-28 16:55:08 +01:00
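A sketch of the boundary shape this describes, with the struct and field names assumed rather than quoted from the source: declaring the result struct in the cgo preamble, instead of letting cgo generate one for a multi-value return, keeps the C ABI explicit:

    package main

    /*
    struct RcloneRPCResult {
        char* Output; // JSON response, to be freed by the caller
        int   Status; // HTTP-style status code
    };
    */
    import "C"

    //export RcloneRPC
    func RcloneRPC(method, input *C.char) C.struct_RcloneRPCResult {
        // A real implementation would dispatch the RPC here.
        return C.struct_RcloneRPCResult{
            Output: C.CString(`{"result":"ok"}`),
            Status: C.int(200),
        }
    }

    // The gomobile side would expose the same call with plain Go types
    // acceptable to gobind, e.g. (signature assumed):
    //
    //     func RcloneMobileRPC(method, input string) (output string, status int)

    func main() {}
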
Nick Craig-Wood
5db88fed2b librclone: exports, errors, docs and examples #4891
- rename C exports to be namespaced with Rclone prefix
- fix error handling in RcloneRPC
- add more examples
- add more docs
- add README
- simplify ctest Makefile
2021-04-28 16:55:08 +01:00
lewisxy
316e65589b librclone: export the rclone RC as a C library #4891 2021-04-28 16:55:08 +01:00
Tatsuya Noyori
4401d180aa s3: add --s3-no-head-object
This stops rclone doing any HEAD requests on objects.
2021-04-28 11:05:54 +01:00
Nick Craig-Wood
9ccd870267 Move the how to use GitHub info in the bug/issue templates to the end
This is so that we see the text of the bug/issue first rather than
the "How to use GitHub" boilerplate, which is very useful when posting
bug reports to the forum or social media.
2021-04-28 09:40:19 +01:00
Nick Craig-Wood
16d1da2c1e vfs: remove item.metaDirty as it was confusing and not used
See discussion in #5277
2021-04-28 09:33:22 +01:00
Nick Craig-Wood
00a0ee1899 vfs: fix modtime changing when reading file into cache - fixes #5277
Before this change but after:

aea8776a43 vfs: fix modtimes not updating when writing via cache #4763

When a file was opened read-only the modtime was read from the cached
file. However this modtime wasn't correct, leading to an incorrect
result.

This change fixes the definition of `item.IsDirty` to be true only
when the data is dirty. This fixes the problem as a read-only file
isn't considered dirty.
2021-04-28 09:33:22 +01:00
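A sketch of the corrected predicate, with the struct shape assumed from this and the previous commit message: with metaDirty gone, dirtiness tracks the data alone:

    package vfscache // illustrative, not the actual rclone source

    import "sync"

    // Item models a cached file entry; field names are assumptions.
    type Item struct {
        mu   sync.Mutex
        info struct {
            Dirty bool // set only when the cached *data* is modified
        }
    }

    // IsDirty reports whether the cached data needs writing back. A file
    // opened read-only never modifies data, so it is never dirty and its
    // cached modtime is left alone.
    func (item *Item) IsDirty() bool {
        item.mu.Lock()
        defer item.mu.Unlock()
        return item.info.Dirty
    }
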
Nick Craig-Wood
b78c9a65fa backends: remove log.Fatal and replace with error returns #5234
This changes the Config interface so that it returns an error.
2021-04-27 18:18:08 +01:00
Nick Craig-Wood
ef3c350686 box: return errors instead of calling log.Fatal with them #5234 2021-04-27 18:18:08 +01:00
Nick Craig-Wood
742af80972 Add jtagcat to contributors 2021-04-27 18:18:08 +01:00
albertony
08a2df51be Use decimal prefixes for counts
Fixes #5126
2021-04-27 02:25:52 +03:00
albertony
2925e1384c Use binary prefixes for size and rate units
Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with the letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
2021-04-27 02:25:52 +03:00
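To illustrate the suffix rules (assuming rclone's fs.SizeSuffix parser after this change), M, Mi and MiB all parse to the same binary value:

    package main

    import (
        "fmt"

        "github.com/rclone/rclone/fs"
    )

    func main() {
        for _, in := range []string{"1M", "1Mi", "1MiB"} {
            var s fs.SizeSuffix
            if err := s.Set(in); err != nil {
                panic(err)
            }
            fmt.Printf("%-4s = %d bytes\n", in, int64(s)) // all print 1048576
        }
    }
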
albertony
2ec0c8d45f stats: correct spelling of data rate units 2021-04-27 02:25:52 +03:00
albertony
98579608ec docs: cleanup spelling of size and rate units 2021-04-27 02:25:52 +03:00
Caleb Case
a1a41aa0c1 backend/tardigrade: use negative offset
v1.4.6 of uplink allows us to do a negative offset from the end of the
file. This removes a round trip when requesting the last N bytes of a
file.

Prior to v1.4.6 of uplink it wasn't possible to do a negative offset
on download. This meant that to fulfill the semantics of HTTP range
headers it was necessary to first fetch the size of the object via a
stat call and compute the absolute offset and length.
2021-04-27 02:20:08 +03:00
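A sketch of what the negative offset buys (types from uplink; the helper is invented for illustration): the last n bytes can be requested directly, with no preceding stat call:

    package tardigrade // illustrative

    import (
        "context"
        "io"
        "os"

        "storj.io/uplink"
    )

    // lastN streams the final n bytes of an object. A negative Offset
    // counts back from the end of the object, so no stat call is needed
    // to compute an absolute offset first.
    func lastN(ctx context.Context, project *uplink.Project, bucket, key string, n int64) error {
        download, err := project.DownloadObject(ctx, bucket, key, &uplink.DownloadOptions{
            Offset: -n, // negative: offset from the end
            Length: -1, // read to the end
        })
        if err != nil {
            return err
        }
        defer download.Close()
        _, err = io.Copy(os.Stdout, download)
        return err
    }
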
albertony
f8d56bebaf config: delay load config file (#5258)
Restructuring of the config code in v1.55 resulted in the config
file being loaded early at process startup. If the configuration
file is encrypted this means the user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for the password.

Fixes #5236
Fixes #5228
2021-04-26 23:37:49 +02:00
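A minimal sketch of the delay-load pattern this restores, with every name invented for illustration: the (possibly encrypted) file is read on first access rather than at startup:

    package config // illustrative sketch, not the actual rclone code

    import "sync"

    var (
        loadOnce sync.Once
        settings map[string]string
    )

    // Data loads (and if necessary decrypts) the config file on first use
    // only, so commands that never touch the config never prompt for a
    // password, and a daemonizing mount can prompt before detaching.
    func Data() map[string]string {
        loadOnce.Do(func() {
            settings = loadAndDecrypt()
        })
        return settings
    }

    // loadAndDecrypt stands in for reading the file and, if it is
    // encrypted, asking the user for the password.
    func loadAndDecrypt() map[string]string {
        return map[string]string{}
    }
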
jtagcat
5d799431a7 GitHub issue templates: Add GH Etiquette. 2021-04-26 18:12:37 +01:00
Leo Luan
8f23cae1c0 vfs: Add cache reset for --vfs-cache-max-size handling at cache poll interval
The vfs-cache-max-size parameter is probably confusing to many users.
The cache cleaner checks the cache size at the --vfs-cache-poll-interval
(default 60 seconds) and removes cache items in the following order:

(1) cache items that are not in use and with age > vfs-cache-max-age
(2) if the cache space used at this time is still larger than
vfs-cache-max-size, the cleaner continues to remove cache items that are
not in use.

The cache cleaning process does not remove cache items that are currently in use.
If the total space consumed by in-use cache items exceeds vfs-cache-max-size, the
periodic cache cleaner thread does not do anything further and leaves the in-use
cache items alone with a total space larger than vfs-cache-max-size.

A cache reset feature was introduced in 1.53 which resets in-use (but not dirty,
i.e., not being updated) cache items when additional cache data incurs an ENOSPC
error. But this code was not activated in the periodic cache cleaning thread.

This patch adds the cache reset step in the cache cleaner thread during cache
poll to reset cache items until the total size of the remaining cache items is
below vfs-cache-max-size.
2021-04-26 17:55:52 +01:00
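The cleaning order above, as a runnable sketch (types and helpers invented for illustration; step 3 is the one this patch adds):

    package main

    import (
        "fmt"
        "time"
    )

    // item models a VFS cache entry; the fields are assumptions made
    // for this sketch, not the real vfscache types.
    type item struct {
        name  string
        size  int64
        age   time.Duration
        inUse bool
        dirty bool
    }

    // purge sketches the cleaning order: (1) drop unused items older
    // than maxAge, (2) drop further unused items while the total exceeds
    // maxSize, and (3) reset in-use but clean items while still over.
    func purge(items []item, maxAge time.Duration, maxSize int64) (kept []item) {
        var used int64
        for _, it := range items {
            if !it.inUse && it.age > maxAge {
                continue // (1) expired and not in use: removed
            }
            kept = append(kept, it)
            used += it.size
        }
        for i := range kept { // (2) remove unused items while over budget
            if used > maxSize && !kept[i].inUse {
                used -= kept[i].size
                kept[i].size = 0
            }
        }
        for i := range kept { // (3) reset clean (not dirty) in-use items
            if used > maxSize && kept[i].inUse && !kept[i].dirty {
                used -= kept[i].size
                kept[i].size = 0 // data dropped; the open handle stays valid
            }
        }
        return kept
    }

    func main() {
        items := []item{
            {"old", 100, 2 * time.Hour, false, false},
            {"clean", 100, time.Minute, true, false},
            {"dirty", 100, time.Minute, true, true},
        }
        fmt.Println(purge(items, time.Hour, 50))
    }
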
Mathieu Carbou
964088affa build: Only run event-based workflow scripts under rclone repo with manual override
This updates the actions to run event-based workflow scripts only
under the rclone repository and not forks. It also adds the
ability to manually trigger a build from a branch in the rclone repository
and forks.

Fixes #5272
2021-04-26 17:52:03 +01:00
Nick Craig-Wood
f4068d406b Add Jeffrey Tolar to contributors 2021-04-26 16:57:21 +01:00
Jeffrey Tolar
7511b6f4f1 b2: don't include the bucket name in public link file prefixes
Including the bucket name as part of the `fileNamePrefix` passed to
`b2_get_download_authorization` results in a link valid for objects that
have the bucket name as part of the object path; e.g.,

    rclone link :b2:some-bucket/some-file

would result in a public link valid for the object
`some-bucket/some-file` in the `some-bucket` bucket (in rclone-remote
parlance, `:b2:some-bucket/some-bucket/some-file`). This will almost
certainly result in a broken link.

The B2 docs don't explicitly specify this behavior, but the example
given for `fileNamePrefix` provides some clarification.

See https://www.backblaze.com/b2/docs/b2_get_download_authorization.html.
2021-04-26 16:56:41 +01:00
Nick Craig-Wood
e618ea83dd s3: remove WebIdentityRoleProvider to fix crash on auth #5255
This removes the code added in

15d19131bd s3: use aws web identity role provider

This code no longer works because it doesn't initialise the
tokenFetcher - leading to a nil pointer crash.

The proper way to initialise this is with the
NewWebIdentityCredentials but it isn't clear where to get the other
parameters: roleARN, roleSessionName, path.

In the linked issue a user reports rclone working with EKS anyway, so
perhaps this code is no longer needed.

If it is needed, hopefully someone who knows AWS better will come
along and fix it!

See: https://forum.rclone.org/t/add-support-for-aws-sso/23569
2021-04-26 16:55:50 +01:00
Nick Craig-Wood
34dc257c55 Add Kenny Parsons to contributors 2021-04-26 16:55:50 +01:00
Kenny Parsons
4cacf5d30c docs: clarify and add examples for sftp docs
- added clarification of the default remote path if no path is specified
- added examples for mounting a remote path (other than the default home directory) to a local folder.
2021-04-26 16:13:42 +01:00
Nick Craig-Wood
0537791d14 sftp: Fix performance regression by re-enabling concurrent writes #5197
Between rclone v1.54 and v1.55 there was an approx 3x performance
regression when transferring to distant SFTP servers (in particular
rsync.net).

This turned out to be due to the library github.com/pkg/sftp rclone
uses. Concurrent writes used to be enabled in this library by default
(for v1.12.0 as used in rclone v1.54) but they are no longer enabled
(for v1.13.0 as used in rclone v1.55) for safety reasons and it is
necessary to enable them specifically.

The safety concerns are due to the uncertainty as to whether writes
come in order and whether a half completed file might have holes in
it. This isn't a problem for rclone since a) it doesn't restart
uploads and b) it has a post-transfer checksum test.

This change introduces a new flag `--sftp-disable-concurrent-writes`
to control the feature, which defaults to false, meaning that
concurrent writes are enabled as in v1.54.

However this isn't quite enough to fix the problem as the sftp library
needs to be able to sniff the size of the stream from the reader
passed in, so this also adds a `Size` interface to the reader to
enable this. This involved a patch to the library.

The library was reverted to v1.12.0 for v1.55.1 - this patch installs
v1.13.0+master to fix the Size interface problem.

See: https://github.com/pkg/sftp/issues/426
2021-04-26 09:24:28 +01:00
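A sketch of the Size sniffing described above; the exact interface shape the sftp library checks for is assumed from the commit text:

    package main

    import (
        "bytes"
        "fmt"
        "io"
    )

    // sizedReader wraps a stream and exposes the total size up front so
    // a consumer can plan concurrent writes without reading to the end.
    type sizedReader struct {
        io.Reader
        size int64
    }

    func (r sizedReader) Size() int64 { return r.size }

    func main() {
        data := []byte("hello sftp")
        var rd io.Reader = sizedReader{bytes.NewReader(data), int64(len(data))}

        // This is the kind of sniff the sftp library can now perform:
        if s, ok := rd.(interface{ Size() int64 }); ok {
            fmt.Println("stream size known in advance:", s.Size())
        }
    }
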
Nick Craig-Wood
4b1d28550a Changelog updates from Version v1.55.1 2021-04-26 09:22:49 +01:00
Nick Craig-Wood
d27c35ee4a box: use upload preflight check to avoid listings in file uploads
Before this change, rclone checked to see if an object existed before
doing an upload by listing the destination directory. This was very
inefficient, especially with large directories.

After this change rclone uses the pre upload check API call which
checks to see if it is OK to upload an object, and also returns the ID
of an existing object which saves rclone having to do a directory
listing.
2021-04-25 11:45:44 +01:00
Nick Craig-Wood
ffec0d4f03 Add OleFrost to contributors 2021-04-25 11:45:39 +01:00
OleFrost
89daa9efd1 onedrive: Workaround for random "Unable to initialize RPS" errors
OneDrive randomly returns the error message: "InvalidAuthenticationToken: Unable to initialize RPS". These unexpected errors typically caused the entire rclone command to fail.

This workaround recognizes these errors and marks them for a low level retry, which mostly succeeds. This will make rclone commands complete without being noticeably affected.

Fixes: #5270
2021-04-24 23:05:34 +01:00
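A sketch of the workaround (the matching logic is assumed, not quoted from the source): the sporadic auth error is classified as retryable instead of fatal:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // shouldRetry flags the sporadic "Unable to initialize RPS"
    // authentication error for a low-level retry instead of letting it
    // fail the whole command.
    func shouldRetry(err error) bool {
        if err == nil {
            return false
        }
        return strings.Contains(err.Error(), "Unable to initialize RPS")
    }

    func main() {
        err := errors.New("InvalidAuthenticationToken: Unable to initialize RPS")
        fmt.Println(shouldRetry(err)) // true: retried rather than fatal
    }
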
Nick Craig-Wood
ee502a757f ncdu: update termbox-go library to fix crash - fixes #5259 2021-04-24 15:17:14 +01:00
Cnly
386acaa110 oauthutil: fix #5265 old authorize result not recognised 2021-04-23 01:20:52 +08:00
buengese
efdee3a5fe compress: fix compressed name regexp 2021-04-22 18:38:38 +02:00
Nick Craig-Wood
5d85e6bc9c dropbox: fix Unable to decrypt returned paths from changeNotify - fixes #5165
This was caused by incorrect use of strings.TrimLeft where
strings.TrimPrefix was required.
2021-04-21 10:52:05 +01:00
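The difference behind this bug class, shown standalone: strings.TrimLeft treats its second argument as a set of characters to strip repeatedly, while strings.TrimPrefix removes one exact prefix:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // TrimLeft strips *any* leading run of the characters '/', 'a':
        fmt.Println(strings.TrimLeft("/a/ab", "/a")) // "b"

        // TrimPrefix removes the exact prefix once, as intended here:
        fmt.Println(strings.TrimPrefix("/a/ab", "/a")) // "/ab"
    }
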
Nick Craig-Wood
4a9469a3dc test changenotify: add command to help debugging changenotify 2021-04-21 10:52:05 +01:00
Nick Craig-Wood
f8884a7200 build: fix version numbers in android branch builds 2021-04-20 17:40:06 +01:00
Nick Craig-Wood
2a40f00077 vfs: fix a code path which allows dirty data to be removed causing data loss
Before this change the VFS layer could remove a locally cached file
even if it had data which needed to be written back, thus causing data loss.

See: https://forum.rclone.org/t/rclone-1-55-doesnt-save-file-changes-if-the-file-has-been-reopened-during-upload-google-drive-mount/23646
2021-04-20 16:36:38 +01:00
Nick Craig-Wood
9799fdbae2 Add noabody to contributors 2021-04-20 16:36:38 +01:00
Nick Craig-Wood
492504a601 Add new email address for Caleb Case 2021-04-20 16:36:25 +01:00
Nick Craig-Wood
0c03a7fead Add Ansh Mittal to contributors 2021-04-20 16:31:40 +01:00
Nick Craig-Wood
7afb4487ef build: update all dependencies 2021-04-20 00:00:13 +01:00
noabody
b9d0ed4f5c make_manual.py: fix missing comma for doc build after uptobox merge
This fixes a problem introduced in

cd69f9e6e8 uptobox: add docs
2021-04-19 16:18:18 +01:00
Caleb Case
baa4c039a0 backend/tardigrade: Upgrade to uplink v1.4.6
Release notes: https://github.com/storj/uplink/releases/tag/v1.4.6

Follow-up PRs will take advantage of the new bucket error and negative
offset support to remove roundtrips.
2021-04-19 16:14:56 +01:00
Alex Chen
31a8211afa oauthutil: raise fatal error if token expired without refresh token (#5252) 2021-04-18 12:04:13 +08:00
albertony
3544e09e95 config: treat any config file paths with filename notfound as memory-only config (#5235) 2021-04-18 00:09:03 +02:00
Ansh Mittal
b456be4303 drive: don't open browser when service account credentials specified

Fixes #5104
2021-04-17 19:49:53 +01:00
Nick Craig-Wood
3e96752079 dropbox: add missing team_data.member scope for use with --impersonate
See: https://forum.rclone.org/t/dropbox-business-not-accepting-oauth2/23390/32
2021-04-17 17:40:08 +01:00
buengese
4a5cbf2a19 cmd/ncdu: fix out of range panic in delete 2021-04-16 23:20:03 +02:00
Nick Craig-Wood
dcd4edc9f5 dropbox: fix About after scopes changes - rclone config reconnect needed
This adds the missing scope for the About call. To use it, it will be
necessary to refresh the token with `rclone config reconnect`.

See: https://forum.rclone.org/t/dropbox-too-many-requests-or-write-operations-trying-again-in-15-seconds/23316/33
2021-04-16 15:07:03 +01:00
Nick Craig-Wood
7f5e347d94 Add Nazar Mishturak to contributors 2021-04-16 15:07:03 +01:00
Cnly
040677ab5b onedrive: also report root error if unable to cancel multipart upload 2021-04-16 12:41:38 +08:00
albertony
6366d3dfc5 docs: extend description of drive mount access on windows 2021-04-13 22:33:19 +02:00
albertony
60d376c323 docs: add guide to configuring autorun in install documentation 2021-04-13 22:33:19 +02:00
albertony
7b1ca716bf config: add touch command to ensure config exists at configured location (#5226)
A new command `rclone config touch` which calls config.SaveConfig().
Useful when testing configuration file location handling.
It will ensure the config file exists and test that it is writable.
2021-04-13 19:25:09 +03:00
albertony
d8711cf7f9 config: create config file in windows appdata directory by default (#5226)
Use %AppData% as the primary default for the configuration file on Windows,
which is more in line with Windows standards, while the existing default
of using the home directory follows Unix standards - though that made rclone
more consistent across different OSes.

Fixes #4667
2021-04-13 19:25:09 +03:00
buengese
cd69f9e6e8 uptobox: add docs 2021-04-13 17:46:07 +02:00
buengese
a737ff21af uptobox: integration tests 2021-04-13 17:46:07 +02:00
buengese
ad9aa693a3 new backend: uptobox 2021-04-13 17:46:07 +02:00
Nazar Mishturak
964c3e0732 rcat: add --size flag for more efficient uploads of known size - fixes #4403
This allows preallocating space at remote end with RcatSize.
2021-04-13 12:25:47 +01:00
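For example (paths illustrative), piping a stream and passing its length with --size lets the remote preallocate via RcatSize:

    head -c 1048576 big.bin | rclone rcat --size 1048576 remote:path/file
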
Nick Craig-Wood
a46a3c0811 test makefiles: add log levels and speed summary 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
60dcafe04d test makefiles: add --seed flag and make data generated repeatable #5214 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
813bf029d4 Add Dominik Mydlil to contributors 2021-04-12 18:14:01 +01:00
albertony
f2d3264054 config: prevent use of windows reserved names in config file name 2021-04-12 18:17:19 +02:00
albertony
23a0d4a1e6 config: fix issues with memory-only config file paths
Fixes #5222
2021-04-12 18:17:19 +02:00
albertony
b96ebfc40b docs: less confusing example with config path option 2021-04-12 18:17:19 +02:00
Dominik Mydlil
3fe2aaf96c crypt: support timestamped filenames from --b2-versions
With the file version format standardized in lib/version, `crypt` can
now treat the version strings separately from the encrypted/decrypted
file names. This allows --b2-versions to work with `crypt`.

Fixes #1627

Co-authored-by: Luc Ritchie <luc.ritchie@gmail.com>
2021-04-12 15:59:18 +01:00
Dominik Mydlil
c163e6b250 b2: factor version handling into lib/version
Standardizes the filename version tagging so that it can be used by any
backend.
2021-04-12 15:59:18 +01:00
Nick Craig-Wood
c1492cfa28 test: add sftp to rsync.net to integration tests 2021-04-12 15:52:31 +01:00
Nick Craig-Wood
38a8071a58 Add Ashok Gelal to contributors 2021-04-12 15:52:31 +01:00
Ashok Gelal
8c68a76a4a install.sh: silence the progress output with curl requests
This commit silences the progress output from the curl requests made by the install.sh script.

Having progress output seems to break some automated scripts, and there isn't a way
to pass flags to these curl requests to disable it.
2021-04-12 14:18:29 +01:00
Dan Dascalescu
e7b736f8ca docs: fix minor typo in symlinks / junction points 2021-04-10 15:34:34 +02:00
Nick Craig-Wood
cb30a8c80e webdav: fix sharepoint auth over http - fixes #4418
Before this change rclone would auth over https even when the server
was configured with http.

Authing over http obviously isn't ideal, however this type of server
is on-premise and doesn't work over https.
2021-04-10 11:59:56 +01:00
Ivan Andreev
629a3eeca2 backend/ftp: fix implicit TLS after PR #4266 (#5219)
PR #4266 modified ftpConnection to make the ftp library use
a custom dial function which is QoS aware and takes care of TLS.
However the ServerConn.Login function from the ftp library also needs
the TLS config passed explicitly as a trigger for sending the PBSZ and PROT
commands to the FTP server. This was not taken care of, resulting in
a failure to connect via FTP with implicit TLS.
This PR fixes that.

Fixes #5210
2021-04-09 01:43:50 +03:00
Nick Craig-Wood
f52ae75a51 rclone authorize: Send and receive extra config options to fix oauth
Before this change any backends which required extra config in the
oauth phase (like the `region` for zoho) didn't work with `rclone
authorize`.

This change serializes the extra config and passes it to `rclone
authorize` and returns new config items to be set from rclone
authorize.

`rclone authorize` will still accept its previous configuration
parameters for use with old rclones.

Fixes #5178
2021-04-08 12:34:15 +01:00
Nick Craig-Wood
9d5c5bf7ab fs: add Options.NonDefault to read options which aren't at their default #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
53573b4a09 configmap: Add Encode and Decode methods to Simple for command line encoding #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
3622e064f5 configmap: Add priorities to configmap Setters #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
6d28ea7ab5 fs: factor config override detection into its own function #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
b9fd02039b authorize: refactor to use new config interfaces #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
1a41c930f3 configmap: add ClearSetters to get rid of all setters #5178 2021-04-08 12:34:15 +01:00
albertony
ddb7eb6e0a docs: fixed some typos 2021-04-08 10:19:03 +02:00
buengese
c114695a66 zoho: do not ask for mountpoint twice when using headless setup 2021-04-08 00:23:27 +02:00
Nick Craig-Wood
fcba51557f dropbox: set visibility in link sharing when --expire is set
Note that due to a bug in the dropbox SDK you'll need to set --expire
to access this.

See: https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
See: https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211
2021-04-07 13:58:37 +01:00
Nick Craig-Wood
9393225a1d link: use "off" value for unset expiry 2021-04-07 13:58:37 +01:00
albertony
3d3ff61f74 docs: minor cleanup of space around code section 2021-04-07 08:47:29 +02:00
albertony
d98f192425 docs: WinFsp 2021 is out of beta 2021-04-07 08:13:40 +02:00
Nick Craig-Wood
54771e4402 sync: fix incorrect error reported by graceful cutoff - fixes #5203
Before this change, a sync which was finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.

This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
2021-04-06 13:08:42 +01:00
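A sketch of the fix (names assumed): after a graceful stop has been requested, "context canceled" is the expected way for transfers to end and is not an error:

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    // finishErr sketches the fix: once a graceful transfer cutoff has
    // been requested, a "context canceled" error is expected and must
    // not be reported as the sync's result.
    func finishErr(err error, gracefulStop bool) error {
        if gracefulStop && errors.Is(err, context.Canceled) {
            return nil
        }
        return err
    }

    func main() {
        fmt.Println(finishErr(context.Canceled, true))  // <nil>
        fmt.Println(finishErr(context.Canceled, false)) // context canceled
    }
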
Nick Craig-Wood
dc286529bc drive: fix backend copyid of google doc to directory - fixes #5196
Before this change the google doc was being copied to the directory
without an extension.
2021-04-06 11:46:52 +01:00
Nick Craig-Wood
7dc7c021db sftp: fix Update ReadFrom failed: failed to send packet: EOF errors
In

a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)

Idle SFTP connections were closed after 1 minute. However, due to the
way SSH multiplexes connections over a single SSH connection, this
meant that if uploads or downloads went on for more than one minute
they failed with "EOF errors" as their underlying connection was
closed.

This fixes the problem by not clearing idle connections if there are
any transfers in progress.

Fixes #5197
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
fe1aa13069 sftp: revert sftp library to v1.12.0 from v1.13.0 to fix performance regression #5197
This reverts the library update done in the following commit:

713f8f357d sftp: fix "file not found" errors for read once servers

Reverting this commit triples the performance to a far away sftp server.

See: https://github.com/pkg/sftp/issues/426
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
5fa8e7d957 Add Nick Gaya to contributors 2021-04-06 10:01:49 +01:00
Nick Gaya
9db7c51eaa sync: don't warn about --no-traverse when --files-from is set 2021-04-05 20:36:39 +01:00
Ivan Andreev
3859fe2f52 cmd/version: print os/version, kernel and bitness (#5204)
Related to #5121

Note: OpenBSD support is still a stub. This will be fixed after the upstream PR is resolved:
https://github.com/shirou/gopsutil/pull/993
2021-04-05 21:53:09 +03:00
buengese
0caf417779 zoho: fix error when region isn't set 2021-04-05 15:11:30 +02:00
Ivan Andreev
9eab258ffb build: add build tag noselfupdate
Allow downstream packaging to build rclone without the selfupdate command:
$ go build -tags noselfupdate

Fixes #5187
2021-04-04 11:22:09 +03:00
Nick Gaya
7df57cd625 contributing.md: update setup instructions for go1.16 2021-04-04 09:10:43 +01:00
Nick Gaya
1fd9b483c8 onedrive: add list_chunk option
Add --onedrive-list-chunk option similar to existing options for azureblob, drive, and s3.

Suggested as a workaround for a OneDrive pagination bug

See: https://forum.rclone.org/t/unexpected-duplicates-on-onedrive-with-0s-in-filename/23164/8
2021-04-04 09:08:16 +01:00
Ivan Andreev
93353c431b selfupdate: don't detect FUSE if build is static
Before this patch selfupdate detected ANY build with the cmount tag as a build
having libFUSE capabilities. However, only dynamic builds really have them.
The official Linux builds are static and have the cmount tag as of the time
of this writing. This resulted in an inability to update the official Linux binaries.
This patch fixes that. The build can be fixed independently.
2021-04-03 21:54:15 +03:00
Nick Craig-Wood
886dfd23e2 fichier: check if more than one upload link is returned #5152 2021-04-03 15:00:50 +01:00
Nick Craig-Wood
116a8021bb drive: switch to the Drives API for looking up shared drives - fixes #3139
Before this change rclone used the deprecated teamdrives API. This
change uses the new drives API (which seems to be the teamdrives API
renamed).
2021-04-03 14:21:20 +01:00
Nick Craig-Wood
9e2fbe0f1a install.sh: fix macOS arm64 download - fixes #5183 2021-03-31 21:48:31 +01:00
Nick Craig-Wood
6d65d116df Start v1.56.0-DEV development 2021-03-31 19:51:43 +01:00
Ivan Andreev
edaeb51ea9 backlog: ticket templates should recommend updating rclone
Aligns the Bug and Feature GitHub templates with the rclone forum
and instructs the submitter to proactively update rclone.
2021-03-31 19:13:50 +01:00
187 changed files with 7625 additions and 1253 deletions

View File

@@ -5,19 +5,31 @@ about: Report a problem with rclone
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
**STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put in to solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!
If you think you might have found a bug, try to replicate it with the latest beta (or stable).
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
If you can still replicate it or just got a question then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
for a quick response instead of filing an issue on this repo.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
If nothing else helps, then please fill in the info below which helps us help you.
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
**DO NOT REDACT** any information except passwords/keys/personal info.
You should use 3 backticks to begin and end your paste to make it readable.
Make sure to include a log obtained with '-vv'.
You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
Thank you
@@ -25,6 +37,10 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is the problem you are having with rclone?
@@ -37,7 +53,7 @@ The Rclone Developers
#### Which cloud storage system are you using? (e.g. Google Drive)
#### Which cloud storage system are you using? (e.g. Google Drive)
@@ -48,3 +64,11 @@ The Rclone Developers
#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
<!--- Please keep the note below for others who read your bug report. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.

View File

@@ -7,12 +7,16 @@ about: Suggest a new feature or enhancement for rclone
Welcome :-)
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
So you've got an idea to improve rclone? We love that!
You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do:
Probably the latest beta (or stable) release has your feature, so try to update your rclone.
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
If it still isn't there, here is a checklist of things to do:
1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)
@@ -22,6 +26,9 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is your current rclone version (output from `rclone version`)?
@@ -34,3 +41,11 @@ The Rclone Developers
#### How do you think rclone should be changed to solve that?
<!--- Please keep the note below for others who read your feature request. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.

View File

@@ -12,9 +12,15 @@ on:
tags:
- '*'
pull_request:
workflow_dispatch:
inputs:
manual:
required: true
default: true
jobs:
build:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
timeout-minutes: 60
strategy:
fail-fast: false
@@ -30,6 +36,7 @@ jobs:
check: true
quicktest: true
racequicktest: true
librclonetest: true
deploy: true
- job_name: mac_amd64
@@ -187,6 +194,14 @@ jobs:
make racequicktest
if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
librclone/python/test_rclone.py
if: matrix.librclonetest
- name: Code quality test
shell: bash
run: |
@@ -214,6 +229,7 @@ jobs:
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
android:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
@@ -221,6 +237,8 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go 1.14

View File

@@ -7,6 +7,7 @@ on:
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:

View File

@@ -6,6 +6,7 @@ on:
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:

View File

@@ -33,10 +33,11 @@ page](https://github.com/rclone/rclone).
Now in your terminal
go get -u github.com/rclone/rclone
cd $GOPATH/src/github.com/rclone/rclone
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
go build
Make a branch to add your new feature

View File

@@ -1 +1 @@
v1.55.0
v1.56.0

View File

@@ -20,7 +20,7 @@ var (
)
func prepare(t *testing.T, root string) {
configfile.LoadConfig(context.Background())
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")

View File

@@ -41,6 +41,7 @@ import (
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zoho"

View File

@@ -16,7 +16,6 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"path"
"strings"
@@ -70,11 +69,12 @@ func init() {
Prefix: "acd",
Description: "Amazon Drive",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
err := oauthutil.Config(ctx, "amazon cloud drive", name, m, acdConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "checkpoint",
@@ -83,16 +83,16 @@ func init() {
Advanced: true,
}, {
Name: "upload_wait_per_gb",
Help: `Additional time per GB to wait after a failed complete upload to see if it appears.
Help: `Additional time per GiB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
happens sometimes for files over 1 GiB in size and nearly every time for
files bigger than 10 GiB. This parameter controls the time rclone waits
for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
The default value for this parameter is 3 minutes per GiB, so by
default it will wait 3 minutes for every GiB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
@@ -112,7 +112,7 @@ in this situation.`,
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"

View File

@@ -47,8 +47,8 @@ const (
timeFormatIn = time.RFC3339
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
storageDefaultBaseURL = "blob.core.windows.net"
defaultChunkSize = 4 * fs.MebiByte
maxChunkSize = 100 * fs.MebiByte
defaultChunkSize = 4 * fs.Mebi
maxChunkSize = 100 * fs.Mebi
uploadConcurrency = 4
defaultAccessTier = azblob.AccessTierNone
maxTryTimeout = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
@@ -129,11 +129,11 @@ msi_client_id, or msi_mi_res_id parameters.`,
Advanced: true,
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload (<= 256MB). (Deprecated)",
Help: "Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)",
Advanced: true,
}, {
Name: "chunk_size",
Help: `Upload chunk size (<= 100MB).
Help: `Upload chunk size (<= 100 MiB).
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.`,
@@ -404,7 +404,7 @@ func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte
const minChunkSize = fs.SizeSuffixBase
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}

View File

@@ -2,12 +2,11 @@ package api
import (
"fmt"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/version"
)
// Error describes a B2 error response
@@ -63,16 +62,17 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
return nil
}
const versionFormat = "-v2006-01-02-150405.000"
// HasVersion returns true if it looks like the passed filename has a timestamp on it.
//
// Note that the passed filename's timestamp may still be invalid even if this
// function returns true.
func HasVersion(remote string) bool {
return version.Match(remote)
}
// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
return version.Add(remote, time.Time(t))
}
// RemoveVersion removes the timestamp from a filename as a version string.
@@ -80,24 +80,9 @@ func (t Timestamp) AddVersion(remote string) string {
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
newRemote = remote
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
time, newRemote := version.Remove(remote)
t = Timestamp(time)
return
}
// IsZero returns true if the timestamp is uninitialized

View File

@@ -13,7 +13,6 @@ import (
var (
emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)
@@ -36,40 +35,6 @@ func TestTimestampUnmarshalJSON(t *testing.T) {
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero())

View File

@@ -54,10 +54,10 @@ const (
decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode
minChunkSize = 5 * fs.MebiByte
defaultChunkSize = 96 * fs.MebiByte
defaultUploadCutoff = 200 * fs.MebiByte
largeFileCopyCutoff = 4 * fs.GibiByte // 5E9 is the max
minChunkSize = 5 * fs.Mebi
defaultChunkSize = 96 * fs.Mebi
defaultUploadCutoff = 200 * fs.Mebi
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long
memoryPoolUseMmap = false
)
@@ -116,7 +116,7 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).`,
This value should be set no larger than 4.657 GiB (== 5 GB).`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
@@ -126,7 +126,7 @@ This value should be set no larger than 4.657GiB (== 5GB).`,
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 4.6GB.`,
The minimum is 0 and the maximum is 4.6 GiB.`,
Default: largeFileCopyCutoff,
Advanced: true,
}, {
@@ -1353,7 +1353,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
}
var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID,
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.rootDirectory, remote)),
ValidDurationInSeconds: validDurationInSeconds,
}
var response api.GetDownloadAuthorizationResponse

View File

@@ -230,14 +230,14 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
//
// The number of bytes in the file being uploaded. Note that
// this header is required; you cannot leave it out and just
// use chunked encoding. The minimum size of every part but
// the last one is 100MB.
// use chunked encoding. The minimum size of every part but
// the last one is 100 MB (100,000,000 bytes)
//
// X-Bz-Content-Sha1
//
// The SHA1 checksum of the this part of the file. B2 will
// check this when the part is uploaded, to make sure that the
// data arrived correctly. The same SHA1 checksum must be
// data arrived correctly. The same SHA1 checksum must be
// passed to b2_finish_large_file.
opts := rest.Opts{
Method: "POST",

View File

@@ -36,13 +36,13 @@ func (t *Time) UnmarshalJSON(data []byte) error {
// Error is returned from box when things go wrong
type Error struct {
Type string `json:"type"`
Status int `json:"status"`
Code string `json:"code"`
ContextInfo json.RawMessage
HelpURL string `json:"help_url"`
Message string `json:"message"`
RequestID string `json:"request_id"`
Type string `json:"type"`
Status int `json:"status"`
Code string `json:"code"`
ContextInfo json.RawMessage `json:"context_info"`
HelpURL string `json:"help_url"`
Message string `json:"message"`
RequestID string `json:"request_id"`
}
// Error returns a string for the error and satisfies the error interface
@@ -132,6 +132,38 @@ type UploadFile struct {
ContentModifiedAt Time `json:"content_modified_at"`
}
// PreUploadCheck is the request for upload preflight check
type PreUploadCheck struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
Size *int64 `json:"size,omitempty"`
}
// PreUploadCheckResponse is the response from upload preflight check
// if successful
type PreUploadCheckResponse struct {
UploadToken string `json:"upload_token"`
UploadURL string `json:"upload_url"`
}
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct {
ContentModifiedAt Time `json:"content_modified_at"`

View File

@@ -17,7 +17,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
"path"
@@ -84,7 +83,7 @@ func init() {
Name: "box",
Description: "Box",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token")
@@ -93,15 +92,16 @@ func init() {
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
if err != nil {
log.Fatalf("Failed to configure token with jwt authentication: %v", err)
return errors.Wrap(err, "failed to configure token with jwt authentication")
}
// Else, if not using an access token, use oauth2
} else if boxAccessToken == "" || !boxAccessTokenOk {
err = oauthutil.Config(ctx, "box", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token with oauth authentication: %v", err)
return errors.Wrap(err, "failed to configure token with oauth authentication")
}
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "root_folder_id",
@@ -126,7 +126,7 @@ func init() {
}},
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload (>= 50MB).",
Help: "Cutoff for switching to multipart upload (>= 50 MiB).",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
@@ -157,15 +157,15 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
jsonFile = env.ShellExpand(jsonFile)
boxConfig, err := getBoxConfig(jsonFile)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "get box config")
}
privateKey, err := getDecryptedPrivateKey(boxConfig)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "get decrypted private key")
}
claims, err := getClaims(boxConfig, boxSubType)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "get claims")
}
signingHeaders := getSigningHeaders(boxConfig)
queryParams := getQueryParams(boxConfig)
@@ -686,22 +686,80 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
return o, leaf, directoryID, nil
}
// preUploadCheck checks to see if a file can be uploaded
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
}
if size >= 0 {
check.Size = &size
}
opts := rest.Opts{
Method: "OPTIONS",
Path: "/files/content/",
}
var result api.PreUploadCheckResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &check, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "item_name_in_use" {
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", errors.Wrap(err, "pre-upload check: JSON decode failed")
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", errors.Wrap(err, "pre-upload check: can't overwrite non file with file")
}
return conflict.Conflicts.ID, nil
}
return "", errors.Wrap(err, "pre-upload check")
}
return "", nil
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src)
default:
// If directory doesn't exist, file doesn't exist so can upload
remote := src.Remote()
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return f.PutUnchecked(ctx, in, src, options...)
}
return nil, err
}
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
return f.PutUnchecked(ctx, in, src, options...)
}
// If object exists then create a skeleton one with just id
o := &Object{
fs: f,
remote: remote,
id: ID,
}
return o, o.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
@@ -1228,7 +1286,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// upload does a single non-multipart upload
//
// This is recommended for less than 50 MB of content
// This is recommended for less than 50 MiB of content
func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time, options ...fs.OpenOption) (err error) {
upload := api.UploadFile{
Name: o.fs.opt.Enc.FromStandardName(leaf),

View File

@@ -98,14 +98,14 @@ changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.`,
Default: DefCacheChunkSize,
Examples: []fs.OptionExample{{
Value: "1m",
Help: "1MB",
Value: "1M",
Help: "1 MiB",
}, {
Value: "5M",
Help: "5 MB",
Help: "5 MiB",
}, {
Value: "10M",
Help: "10 MB",
Help: "10 MiB",
}},
}, {
Name: "info_age",
@@ -132,13 +132,13 @@ oldest chunks until it goes under this value.`,
Default: DefCacheTotalChunkSize,
Examples: []fs.OptionExample{{
Value: "500M",
Help: "500 MB",
Help: "500 MiB",
}, {
Value: "1G",
Help: "1 GB",
Help: "1 GiB",
}, {
Value: "10G",
Help: "10 GB",
Help: "10 GiB",
}},
}, {
Name: "db_path",

View File

@@ -836,7 +836,7 @@ func newRun() *run {
if uploadDir == "" {
r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
if err != nil {
log.Fatalf("Failed to create temp dir: %v", err)
panic(fmt.Sprintf("Failed to create temp dir: %v", err))
}
} else {
r.tmpUploadDir = uploadDir

View File

@@ -155,7 +155,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
}, {
Name: "chunk_size",
Advanced: false,
Default: fs.SizeSuffix(2147483648), // 2GB
Default: fs.SizeSuffix(2147483648), // 2 GiB
Help: `Files larger than chunk size will be split in chunks.`,
}, {
Name: "name_format",
@@ -1448,7 +1448,7 @@ func (c *chunkingReader) dummyRead(in io.Reader, size int64) error {
c.accountBytes(size)
return nil
}
const bufLen = 1048576 // 1MB
const bufLen = 1048576 // 1 MiB
buf := make([]byte, bufLen)
for size > 0 {
n := size

View File

@@ -33,7 +33,7 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: fmt.Sprintf("chunker-upload-%dk", kilobytes),
Size: int64(kilobytes) * int64(fs.KibiByte),
Size: int64(kilobytes) * int64(fs.Kibi),
})
})
}

View File

@@ -36,7 +36,7 @@ import (
// Globals
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256KB and 8 MB.
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
bufferSize = 8388608
heuristicBytes = 1048576
@@ -53,7 +53,7 @@ const (
Gzip = 2
)
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9+_]{11})$")
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9-_]{11})$")
// Register with Fs
func init() {

View File

@@ -12,12 +12,14 @@ import (
"strconv"
"strings"
"sync"
"time"
"unicode/utf8"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/version"
"github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
@@ -442,11 +444,32 @@ func (c *Cipher) encryptFileName(in string) string {
if !c.dirNameEncrypt && i != (len(segments)-1) {
continue
}
// Strip version string so that only the non-versioned part
// of the file name gets encrypted/obfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard {
segments[i] = c.encryptSegment(segments[i])
} else {
segments[i] = c.obfuscateSegment(segments[i])
}
// Add back a version to the encrypted/obfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
}
return strings.Join(segments, "/")
}
@@ -477,6 +500,21 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if !c.dirNameEncrypt && i != (len(segments)-1) {
continue
}
// Strip version string so that only the non-versioned part
// of the file name gets decrypted/deobfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard {
segments[i], err = c.decryptSegment(segments[i])
} else {
@@ -486,6 +524,12 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if err != nil {
return "", err
}
// Add back a version to the decrypted/deobfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
}
return strings.Join(segments, "/"), nil
}
@@ -494,10 +538,18 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
return in[:remainingLength], nil
if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
return "", ErrorNotAnEncryptedFile
}
return "", ErrorNotAnEncryptedFile
decrypted := in[:remainingLength]
if version.Match(decrypted) {
_, unversioned := version.Remove(decrypted)
if unversioned == "" {
return "", ErrorNotAnEncryptedFile
}
}
// Leave the version string on, if it was there
return decrypted, nil
}
return c.decryptFileName(in)
}

View File

@@ -160,22 +160,29 @@ func TestEncryptFileName(t *testing.T) {
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Standard mode with directory name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "49.6/99.23/150.890/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "49.6/99.23/150.890/162.uryyB-v2001-02-03-040506-123.GKG", c.EncryptFileName("1/12/123/hello-v2001-02-03-040506-123.txt"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
// Obfuscation mode with directory name encryption off
c, _ = newCipher(NameEncryptionObfuscated, "", "", false)
assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "1/12/123/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
}
@@ -194,14 +201,19 @@ func TestDecryptFileName(t *testing.T) {
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", "1-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
actual, actualErr := c.DecryptFileName(test.in)

View File

@@ -14,7 +14,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"mime"
"net/http"
"path"
@@ -68,8 +67,8 @@ const (
defaultScope = "drive"
// chunkSize is the size of the chunks created during a resumable upload and should be a power of two.
// 1<<18 is the minimum size supported by the Google uploader, and there is no maximum.
minChunkSize = 256 * fs.KibiByte
defaultChunkSize = 8 * fs.MebiByte
minChunkSize = 256 * fs.Kibi
defaultChunkSize = 8 * fs.Mebi
partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks"
listRGrouping = 50 // number of IDs to search at once when using ListR
listRInputBuffer = 1000 // size of input buffer when using ListR
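
As a side note on the constants above: resumable-upload chunks must be at least 256 KiB and a power of two. A minimal sketch of that validation, assuming rclone's fs package (this is not the backend's actual checker):

    package example

    import (
        "fmt"

        "github.com/rclone/rclone/fs"
    )

    // checkChunkSize validates the two constraints stated in the comment
    // above: at least 256 KiB and a power of two.
    func checkChunkSize(cs fs.SizeSuffix) error {
        if cs < 256*fs.Kibi {
            return fmt.Errorf("%v is below the 256 KiB minimum", cs)
        }
        if cs&(cs-1) != 0 {
            return fmt.Errorf("%v is not a power of two", cs)
        }
        return nil
    }
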
@@ -183,13 +182,12 @@ func init() {
Description: "Google Drive",
NewFs: NewFs,
CommandHelp: commandHelp,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
return
return errors.Wrap(err, "couldn't parse config into struct")
}
// Fill in the scopes
@@ -199,16 +197,17 @@ func init() {
m.Set("root_folder_id", "appDataFolder")
}
if opt.ServiceAccountFile == "" {
if opt.ServiceAccountFile == "" && opt.ServiceAccountCredentials == "" {
err = oauthutil.Config(ctx, "drive", name, m, driveConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
}
err = configTeamDrive(ctx, opt, m, name)
if err != nil {
log.Fatalf("Failed to configure Shared Drive: %v", err)
return errors.Wrap(err, "failed to configure Shared Drive")
}
return nil
},
Options: append(driveOAuthOptions(), []fs.Option{{
Name: "scope",
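
This is the shape of the wider refactor in this change set: the Config callback now returns an error instead of calling log.Fatal, so a configuration failure no longer kills the whole process. A minimal sketch of the new signature (doOAuth is a hypothetical stand-in for the real oauthutil call):

    package example

    import (
        "context"

        "github.com/pkg/errors"
        "github.com/rclone/rclone/fs/config/configmap"
    )

    // exampleConfig shows the post-refactor callback shape: every failure
    // is wrapped with context and returned rather than aborting.
    func exampleConfig(ctx context.Context, name string, m configmap.Mapper) error {
        if err := doOAuth(ctx, name, m); err != nil {
            return errors.Wrap(err, "failed to configure token")
        }
        return nil
    }

    // doOAuth is a hypothetical stand-in for e.g. oauthutil.Config.
    func doOAuth(ctx context.Context, name string, m configmap.Mapper) error {
        return nil
    }
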
@@ -467,7 +466,7 @@ See: https://github.com/rclone/rclone/issues/3631
Default: false,
Help: `Make upload limit errors be fatal
At the time of writing it is only possible to upload 750GB of data to
At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
@@ -484,7 +483,7 @@ See: https://github.com/rclone/rclone/issues/3857
Default: false,
Help: `Make download limit errors be fatal
At the time of writing it is only possible to download 10TB of data from
At the time of writing it is only possible to download 10 TiB of data from
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
@@ -522,7 +521,7 @@ If this flag is set then rclone will ignore shortcut files completely.
} {
for mimeType, extension := range m {
if err := mime.AddExtensionType(extension, mimeType); err != nil {
log.Fatalf("Failed to register MIME type %q: %v", mimeType, err)
fs.Errorf("Failed to register MIME type %q: %v", mimeType, err)
}
}
}
@@ -2959,12 +2958,12 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
}
// List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) {
drives = []*drive.TeamDrive{}
listTeamDrives := f.svc.Teamdrives.List().PageSize(100)
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err error) {
drives = []*drive.Drive{}
listTeamDrives := f.svc.Drives.List().PageSize(100)
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
var teamDrives *drive.DriveList
err = f.pacer.Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(ctx, err)
@@ -2972,7 +2971,7 @@ func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err
if err != nil {
return drives, errors.Wrap(err, "listing Team Drives failed")
}
drives = append(drives, teamDrives.TeamDrives...)
drives = append(drives, teamDrives.Drives...)
if teamDrives.NextPageToken == "" {
break
}
@@ -3069,7 +3068,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return err
}
if destLeaf == "" {
destLeaf = info.Name
destLeaf = path.Base(o.Remote())
}
if destDir == "" {
destDir = "."

View File

@@ -25,7 +25,6 @@ import (
"context"
"fmt"
"io"
"log"
"path"
"regexp"
"strings"
@@ -65,9 +64,9 @@ const (
// Upload chunk size - setting too small makes uploads slow.
// Chunks are buffered into memory for retries.
//
// Speed vs chunk size uploading a 1 GB file on 2017-11-22
// Speed vs chunk size uploading a 1 GiB file on 2017-11-22
//
// Chunk Size MB, Speed Mbyte/s, % of max
// Chunk Size MiB, Speed MiByte/s, % of max
// 1 1.364 11%
// 2 2.443 19%
// 4 4.288 33%
@@ -82,11 +81,11 @@ const (
// 96 12.302 95%
// 128 12.945 100%
//
// Choose 48MB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48MB = 192MB
// Choose 48 MiB which is 91% of Maximum speed. rclone by
// default does 4 transfers so this should use 4*48 MiB = 192 MiB
// by default.
defaultChunkSize = 48 * fs.MebiByte
maxChunkSize = 150 * fs.MebiByte
defaultChunkSize = 48 * fs.Mebi
maxChunkSize = 150 * fs.Mebi
// Max length of filename parts: https://help.dropbox.com/installs-integrations/sync-uploads/files-not-syncing
maxFileNameLength = 255
)
@@ -99,8 +98,10 @@ var (
"files.content.write",
"files.content.read",
"sharing.write",
"account_info.read", // needed for About
// "file_requests.write",
// "members.read", // needed for impersonate - but causes app to need to be approved by Dropbox Team Admin during the flow
// "team_data.member"
},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
@@ -130,8 +131,8 @@ func getOauthConfig(m configmap.Mapper) *oauth2.Config {
}
// Make a copy of the config
config := *dropboxConfig
// Make a copy of the scopes with "members.read" appended
config.Scopes = append(config.Scopes, "members.read")
// Make a copy of the scopes with the extra scopes required appended
config.Scopes = append(config.Scopes, "members.read", "team_data.member")
return &config
}
@@ -142,7 +143,7 @@ func init() {
Name: "dropbox",
Description: "Dropbox",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
opt := oauthutil.Options{
NoOffline: true,
OAuth2Opts: []oauth2.AuthCodeOption{
@@ -151,8 +152,9 @@ func init() {
}
err := oauthutil.Config(ctx, "dropbox", name, m, getOauthConfig(m), &opt)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "chunk_size",
@@ -162,7 +164,7 @@ Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can
deal with retries. Setting this larger will increase the speed
slightly (at most 10%% for 128MB in tests) at the cost of using more
slightly (at most 10%% for 128 MiB in tests) at the cost of using more
memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
Default: defaultChunkSize,
Advanced: true,
@@ -323,7 +325,7 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte
const minChunkSize = fs.SizeSuffixBase
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}
@@ -1084,13 +1086,30 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
// FIXME this gives settings_error/not_authorized/.. errors
// and the expires setting isn't in the documentation so remove
// for now.
// Settings: &sharing.SharedLinkSettings{
// Expires: time.Now().Add(time.Duration(expire)).UTC().Round(time.Second),
// },
Settings: &sharing.SharedLinkSettings{
RequestedVisibility: &sharing.RequestedVisibility{
Tagged: dropbox.Tagged{Tag: sharing.RequestedVisibilityPublic},
},
Audience: &sharing.LinkAudience{
Tagged: dropbox.Tagged{Tag: sharing.LinkAudiencePublic},
},
Access: &sharing.RequestedLinkAccessLevel{
Tagged: dropbox.Tagged{Tag: sharing.RequestedLinkAccessLevelViewer},
},
},
}
if expire < fs.DurationOff {
expiryTime := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
createArg.Settings.Expires = expiryTime
}
// FIXME note we can't set Settings for non enterprise dropbox
// because of https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
// however this only goes wrong when we set Expires, so as a
// work-around remove Settings unless expire is set.
if expire == fs.DurationOff {
createArg.Settings = nil
}
var linkRes sharing.IsSharedLinkMetadata
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
@@ -1334,13 +1353,13 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
switch info := entry.(type) {
case *files.FolderMetadata:
entryType = fs.EntryDirectory
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
case *files.FileMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
case *files.DeletedMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
default:
fs.Errorf(entry, "dropbox ChangeNotify: ignoring unknown EntryType %T", entry)
continue
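
The TrimLeft to TrimPrefix switch above fixes a classic Go pitfall: strings.TrimLeft treats its second argument as a set of characters to strip, not a literal prefix. A self-contained demonstration:

    package main

    import (
        "fmt"
        "strings"
    )

    // With a cutset like "/tmp/" the characters '/', 't', 'm' and 'p' are
    // also stripped from the start of what should be the remainder.
    func main() {
        fmt.Println(strings.TrimLeft("/tmp/photo.jpg", "/tmp/"))   // "hoto.jpg" - wrong
        fmt.Println(strings.TrimPrefix("/tmp/photo.jpg", "/tmp/")) // "photo.jpg"
    }
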

View File

@@ -35,9 +35,7 @@ func init() {
fs.Register(&fs.RegInfo{
Name: "fichier",
Description: "1Fichier",
Config: func(ctx context.Context, name string, config configmap.Mapper) {
},
NewFs: NewFs,
NewFs: NewFs,
Options: []fs.Option{{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key",
@@ -348,8 +346,10 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err
}
if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("unexpected amount of files")
if len(fileUploadResponse.Links) == 0 {
return nil, errors.New("upload response not found")
} else if len(fileUploadResponse.Links) > 1 {
fs.Debugf(remote, "Multiple upload responses found, using the first")
}
link := fileUploadResponse.Links[0]

View File

@@ -241,23 +241,6 @@ func (dl *debugLog) Write(p []byte) (n int, err error) {
return len(p), nil
}
type dialCtx struct {
f *Fs
ctx context.Context
}
// dial a new connection with fshttp dialer
func (d *dialCtx) dial(network, address string) (net.Conn, error) {
conn, err := fshttp.NewDialer(d.ctx).Dial(network, address)
if err != nil {
return nil, err
}
if d.f.tlsConf != nil {
conn = tls.Client(conn, d.f.tlsConf)
}
return conn, err
}
// shouldRetry returns a boolean as to whether this err deserve to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
@@ -277,9 +260,22 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
fs.Debugf(f, "Connecting to FTP server")
dCtx := dialCtx{f, ctx}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dCtx.dial)}
if f.opt.ExplicitTLS {
// Make ftp library dial with fshttp dialer optionally using TLS
dial := func(network, address string) (conn net.Conn, err error) {
conn, err = fshttp.NewDialer(ctx).Dial(network, address)
if f.tlsConf != nil && err == nil {
conn = tls.Client(conn, f.tlsConf)
}
return
}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dial)}
if f.opt.TLS {
// Our dialer takes care of TLS but ftp library also needs tlsConf
// as a trigger for sending PBSZ and PROT options to server.
ftpConfig = append(ftpConfig, ftp.DialWithTLS(f.tlsConf))
} else if f.opt.ExplicitTLS {
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(f.tlsConf))
// Initial connection needs to be cleartext for explicit TLS
conn, err := fshttp.NewDialer(ctx).Dial("tcp", f.dialAddr)
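
For reference, the dial-closure pattern used above in isolation: dial with any net dialer, then optionally wrap the connection for implicit TLS. A minimal sketch using only the standard library (the real code uses rclone's fshttp dialer):

    package example

    import (
        "crypto/tls"
        "net"
    )

    // makeDialFunc returns a dial function that wraps the connection in a
    // TLS client when tlsConf is set; tlsConf == nil leaves it cleartext.
    func makeDialFunc(tlsConf *tls.Config) func(network, address string) (net.Conn, error) {
        return func(network, address string) (net.Conn, error) {
            conn, err := net.Dial(network, address)
            if err != nil {
                return nil, err
            }
            if tlsConf != nil {
                conn = tls.Client(conn, tlsConf)
            }
            return conn, nil
        }
    }
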

View File

@@ -19,7 +19,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"path"
"strings"
@@ -76,17 +75,18 @@ func init() {
Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials")
anonymous, _ := m.Get("anonymous")
if saFile != "" || saCreds != "" || anonymous == "true" {
return
return nil
}
err := oauthutil.Config(ctx, "google cloud storage", name, m, storageConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "project_number",

View File

@@ -8,7 +8,6 @@ import (
"encoding/json"
"fmt"
"io"
golog "log"
"net/http"
"net/url"
"path"
@@ -78,13 +77,12 @@ func init() {
Prefix: "gphotos",
Description: "Google Photos",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
return
return errors.Wrap(err, "couldn't parse config into struct")
}
// Fill in the scopes
@@ -97,7 +95,7 @@ func init() {
// Do the oauth
err = oauthutil.Config(ctx, "google photos", name, m, oauthConfig, nil)
if err != nil {
golog.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
// Warn the user
@@ -108,6 +106,7 @@ func init() {
`)
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "read_only",

View File

@@ -47,7 +47,7 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
ts := httptest.NewServer(handler)
// Configure the remote
configfile.LoadConfig(context.Background())
configfile.Install()
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true

View File

@@ -11,7 +11,6 @@ import (
"encoding/json"
"fmt"
"io/ioutil"
"log"
"net/http"
"strings"
"time"
@@ -56,11 +55,12 @@ func init() {
Name: "hubic",
Description: "Hubic",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
err := oauthutil.Config(ctx, "hubic", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, swift.SharedOptions...),
})

View File

@@ -10,7 +10,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
@@ -87,12 +86,12 @@ func init() {
Name: "jottacloud",
Description: "Jottacloud",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
refresh := false
if version, ok := m.Get("configVersion"); ok {
ver, err := strconv.Atoi(version)
if err != nil {
log.Fatalf("Failed to parse config version - corrupted config")
return errors.Wrap(err, "failed to parse config version - corrupted config")
}
refresh = (ver != configVersion) && (ver != v1configVersion)
}
@@ -104,7 +103,7 @@ func init() {
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm(false) {
return
return nil
}
}
}
@@ -116,11 +115,13 @@ func init() {
switch config.ChooseNumber("Your choice", 1, 3) {
case 1:
v2config(ctx, name, m)
return v2config(ctx, name, m)
case 2:
v1config(ctx, name, m)
return v1config(ctx, name, m)
case 3:
teliaCloudConfig(ctx, name, m)
return teliaCloudConfig(ctx, name, m)
default:
return errors.New("unknown config choice")
}
},
Options: []fs.Option{{
@@ -242,7 +243,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) error {
teliaCloudOauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: teliaCloudAuthURL,
@@ -255,15 +256,14 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
err := oauthutil.Config(ctx, "jottacloud", name, m, teliaCloudOauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
return errors.Wrap(err, "failed to configure token")
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, teliaCloudOauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
return errors.Wrap(err, "failed to load oAuthClient")
}
srv := rest.NewClient(oAuthClient).SetRoot(rootURL)
@@ -271,7 +271,7 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
return errors.Wrap(err, "failed to setup mountpoint")
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
@@ -280,17 +280,18 @@ func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
m.Set("configVersion", strconv.Itoa(configVersion))
m.Set(configClientID, teliaCloudClientID)
m.Set(configTokenURL, teliaCloudTokenURL)
return nil
}
// v1config configure a jottacloud backend using legacy authentication
func v1config(ctx context.Context, name string, m configmap.Mapper) {
func v1config(ctx context.Context, name string, m configmap.Mapper) error {
srv := rest.NewClient(fshttp.NewClient(ctx))
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm(false) {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
return errors.Wrap(err, "failed to register device")
}
m.Set(configClientID, deviceRegistration.ClientID)
@@ -318,18 +319,18 @@ func v1config(ctx context.Context, name string, m configmap.Mapper) {
token, err := doAuthV1(ctx, srv, username, password)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
return errors.Wrap(err, "failed to get oauth token")
}
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
return errors.Wrap(err, "error while saving token")
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
return errors.Wrap(err, "failed to load oAuthClient")
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
@@ -337,13 +338,14 @@ func v1config(ctx context.Context, name string, m configmap.Mapper) {
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
return errors.Wrap(err, "failed to setup mountpoint")
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(v1configVersion))
return nil
}
// registerDevice register a new device for use with the jottacloud API
@@ -418,7 +420,7 @@ func doAuthV1(ctx context.Context, srv *rest.Client, username, password string)
}
// v2config configure a jottacloud backend using the modern JottaCli token based authentication
func v2config(ctx context.Context, name string, m configmap.Mapper) {
func v2config(ctx context.Context, name string, m configmap.Mapper) error {
srv := rest.NewClient(fshttp.NewClient(ctx))
fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n")
@@ -430,31 +432,32 @@ func v2config(ctx context.Context, name string, m configmap.Mapper) {
token, err := doAuthV2(ctx, srv, loginToken, m)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
return errors.Wrap(err, "failed to get oauth token")
}
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
return errors.Wrap(err, "error while saving token")
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
return errors.Wrap(err, "failed to load oAuthClient")
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
return errors.Wrap(err, "failed to setup mountpoint")
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(configVersion))
return nil
}
// doAuthV2 runs the actual token request for V2 authentication

View File

@@ -534,7 +534,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// About reports space usage (with a MB precision)
// About reports space usage (with a MiB precision)
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
mount, err := f.client.MountsDetails(f.mountID)
if err != nil {

View File

@@ -1017,6 +1017,7 @@ func (file *localOpenFile) Close() (err error) {
func (o *Object) openTranslatedLink(offset, limit int64) (lrc io.ReadCloser, err error) {
// Read the link and return the destination as the contents of the object
linkdst, err := os.Readlink(o.path)
fs.Infof(o, "openTranslatedLink link = %q, offset = %d, limit = %d, err = %v", linkdst, offset, limit, err)
if err != nil {
return nil, err
}
@@ -1271,6 +1272,7 @@ func (o *Object) setMetadata(info os.FileInfo) {
// Optionally, users can turn this feature on with the zero_size_links flag
if (runtime.GOOS == "windows" || o.fs.opt.ZeroSizeLinks) && o.translatedLink {
linkdst, err := os.Readlink(o.path)
fs.Infof(o, "setMetadata link = %q, err = %v", linkdst, err)
if err != nil {
fs.Errorf(o, "Failed to read link size: %v", err)
} else {

View File

@@ -6,8 +6,8 @@ import (
"bufio"
"bytes"
"encoding/binary"
"fmt"
"io"
"log"
"time"
"github.com/pkg/errors"
@@ -48,7 +48,7 @@ func (w *BinWriter) Reader() io.Reader {
// WritePu16 writes a short as unsigned varint
func (w *BinWriter) WritePu16(val int) {
if val < 0 || val > 65535 {
log.Fatalf("Invalid UInt16 %v", val)
panic(fmt.Sprintf("Invalid UInt16 %v", val))
}
w.WritePu64(int64(val))
}
@@ -56,7 +56,7 @@ func (w *BinWriter) WritePu16(val int) {
// WritePu32 writes a signed long as unsigned varint
func (w *BinWriter) WritePu32(val int64) {
if val < 0 || val > 4294967295 {
log.Fatalf("Invalid UInt32 %v", val)
panic(fmt.Sprintf("Invalid UInt32 %v", val))
}
w.WritePu64(val)
}
@@ -64,7 +64,7 @@ func (w *BinWriter) WritePu32(val int64) {
// WritePu64 writes an unsigned (actually, signed) long as unsigned varint
func (w *BinWriter) WritePu64(val int64) {
if val < 0 {
log.Fatalf("Invalid UInt64 %v", val)
panic(fmt.Sprintf("Invalid UInt64 %v", val))
}
w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))])
}
@@ -123,7 +123,7 @@ func (r *BinReader) check(err error) bool {
r.err = err
}
if err != io.EOF {
log.Fatalf("Error parsing response: %v", err)
panic(fmt.Sprintf("Error parsing response: %v", err))
}
return false
}
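
The three Write methods above all funnel into binary.PutUvarint, so out-of-range values are now a programming error (panic) rather than a process abort. A small round trip of the encoding:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
    )

    func main() {
        var a [binary.MaxVarintLen64]byte
        n := binary.PutUvarint(a[:], 65535)
        fmt.Printf("encoded in %d bytes: % x\n", n, a[:n]) // 65535 -> 3 bytes

        v, _ := binary.ReadUvarint(bytes.NewReader(a[:n]))
        fmt.Println(v) // 65535
    }
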

View File

@@ -9,7 +9,6 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"net/url"
"path"
@@ -52,8 +51,8 @@ const (
driveTypePersonal = "personal"
driveTypeBusiness = "business"
driveTypeSharepoint = "documentLibrary"
defaultChunkSize = 10 * fs.MebiByte
chunkSizeMultiple = 320 * fs.KibiByte
defaultChunkSize = 10 * fs.Mebi
chunkSizeMultiple = 320 * fs.Kibi
regionGlobal = "global"
regionUS = "us"
@@ -99,7 +98,7 @@ func init() {
Name: "onedrive",
Description: "Microsoft OneDrive",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
region, _ := m.Get("region")
graphURL := graphAPIEndpoint[region] + "/v1.0"
oauthConfig.Endpoint = oauth2.Endpoint{
@@ -109,13 +108,12 @@ func init() {
ci := fs.GetConfig(ctx)
err := oauthutil.Config(ctx, "onedrive", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
return errors.Wrap(err, "failed to configure token")
}
// Stop if we are running non-interactive config
if ci.AutoConfirm {
return
return nil
}
type driveResource struct {
@@ -138,7 +136,7 @@ func init() {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
return errors.Wrap(err, "failed to configure OneDrive")
}
srv := rest.NewClient(oAuthClient)
@@ -203,18 +201,17 @@ func init() {
sites := siteResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &sites)
if err != nil {
log.Fatalf("Failed to query available sites: %v", err)
return errors.Wrap(err, "failed to query available sites")
}
if len(sites.Sites) == 0 {
log.Fatalf("Search for '%s' returned no results", searchTerm)
} else {
fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
for index, site := range sites.Sites {
fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
}
siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID
return errors.Errorf("search for %q returned no results", searchTerm)
}
fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
for index, site := range sites.Sites {
fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
}
siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID
}
// if we use server-relative URL for finding the drive
@@ -227,7 +224,7 @@ func init() {
site := siteResource{}
_, err := srv.CallJSON(ctx, &opts, nil, &site)
if err != nil {
log.Fatalf("Failed to query available site by relative path: %v", err)
return errors.Wrap(err, "failed to query available site by relative path")
}
siteID = site.SiteID
}
@@ -247,7 +244,7 @@ func init() {
drives := drivesResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &drives)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
return errors.Wrap(err, "failed to query available drives")
}
// Also call /me/drive as sometimes /me/drives doesn't return it #4068
@@ -256,7 +253,7 @@ func init() {
meDrive := driveResource{}
_, err := srv.CallJSON(ctx, &opts, nil, &meDrive)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
return errors.Wrap(err, "failed to query available drives")
}
found := false
for _, drive := range drives.Drives {
@@ -273,14 +270,13 @@ func init() {
}
if len(drives.Drives) == 0 {
log.Fatalf("No drives found")
} else {
fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
for index, drive := range drives.Drives {
fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
}
finalDriveID = drives.Drives[config.ChooseNumber("Chose drive to use:", 0, len(drives.Drives)-1)].DriveID
return errors.New("no drives found")
}
fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
for index, drive := range drives.Drives {
fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
}
finalDriveID = drives.Drives[config.ChooseNumber("Choose drive to use:", 0, len(drives.Drives)-1)].DriveID
}
// Test the driveID and get drive type
@@ -291,17 +287,18 @@ func init() {
var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
return errors.Wrapf(err, "failed to query root for drive %s", finalDriveID)
}
fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
// This does not work, YET :)
if !config.ConfirmWithConfig(ctx, m, "config_drive_ok", true) {
log.Fatalf("Cancelled by user")
return errors.New("cancelled by user")
}
m.Set(configDriveID, finalDriveID)
m.Set(configDriveType, rootItem.ParentReference.DriveType)
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
@@ -361,6 +358,11 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
Advanced: true,
}, {
Name: "list_chunk",
Help: "Size of listing chunk.",
Default: 1000,
Advanced: true,
}, {
Name: "no_versions",
Default: false,
@@ -468,6 +470,7 @@ type Options struct {
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
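
The new list_chunk option reaches the backend through the `config` struct tag shown above; configstruct.Set copies the configured value (or its default) into the matching field. A minimal sketch of that binding:

    package example

    import (
        "github.com/rclone/rclone/fs/config/configmap"
        "github.com/rclone/rclone/fs/config/configstruct"
    )

    // options mirrors the pattern above: the tag names the option key.
    type options struct {
        ListChunk int64 `config:"list_chunk"`
    }

    func load(m configmap.Mapper) (*options, error) {
        opt := new(options)
        if err := configstruct.Set(m, opt); err != nil {
            return nil, err
        }
        return opt, nil
    }
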
@@ -560,6 +563,9 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
fs.Debugf(nil, "Should retry: %v", err)
} else if err != nil && strings.Contains(err.Error(), "Unable to initialize RPS") {
retry = true
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
}
case 429: // Too Many Requests.
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
@@ -687,7 +693,7 @@ func errorHandler(resp *http.Response) error {
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte
const minChunkSize = fs.SizeSuffixBase
if cs%chunkSizeMultiple != 0 {
return errors.Errorf("%s is not a multiple of %s", cs, chunkSizeMultiple)
}
@@ -896,7 +902,7 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := f.newOptsCall(dirID, "GET", "/children?$top=1000")
opts := f.newOptsCall(dirID, "GET", fmt.Sprintf("/children?$top=%d", f.opt.ListChunk))
OUTER:
for {
var result api.ListChildrenResponse
@@ -1423,7 +1429,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
Password: f.opt.LinkPassword,
}
if expire < fs.Duration(time.Hour*24*365*100) {
if expire < fs.DurationOff {
expiry := time.Now().Add(time.Duration(expire))
share.Expiry = &expiry
}
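
Both this change and the Dropbox one earlier use fs.DurationOff as the "no expiry" sentinel, so `expire < fs.DurationOff` replaces the hand-rolled 100-year comparison. A sketch of the pattern:

    package example

    import (
        "time"

        "github.com/rclone/rclone/fs"
    )

    // expiryTime turns a user-supplied duration into a deadline, or nil
    // when the value is the fs.DurationOff "no expiry" sentinel.
    func expiryTime(expire fs.Duration) *time.Time {
        if expire < fs.DurationOff {
            t := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
            return &t
        }
        return nil // no expiry requested
    }
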
@@ -1851,7 +1857,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(ctx, uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
fs.Logf(o, "Failed to cancel multipart upload: %v (upload failed due to: %v)", cancelErr, err)
}
})()
@@ -1876,11 +1882,11 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
return info, nil
}
// Update the content of a remote file within 4MB size in one single request
// Update the content of a remote file within 4 MiB size in one single request
// This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4 MiB")
}
fs.Debugf(o, "Starting singlepart upload")

View File

@@ -88,7 +88,7 @@ func init() {
Note that these chunks are buffered in memory so increasing them will
increase memory use.`,
Default: 10 * fs.MebiByte,
Default: 10 * fs.Mebi,
Advanced: true,
}},
})

View File

@@ -12,7 +12,6 @@ import (
"context"
"fmt"
"io"
"log"
"net/http"
"net/url"
"path"
@@ -72,7 +71,7 @@ func init() {
Name: "pcloud",
Description: "Pcloud",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
optc := new(Options)
err := configstruct.Set(m, optc)
if err != nil {
@@ -100,8 +99,9 @@ func init() {
}
err = oauthutil.Config(ctx, "pcloud", name, m, oauthConfig, &opt)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding,

View File

@@ -20,7 +20,6 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net"
"net/http"
"net/url"
@@ -78,11 +77,12 @@ func init() {
Name: "premiumizeme",
Description: "premiumize.me",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
err := oauthutil.Config(ctx, "premiumizeme", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: []fs.Option{{
Name: "api_key",

View File

@@ -2,10 +2,10 @@ package putio
import (
"context"
"log"
"regexp"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
@@ -35,7 +35,7 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
defaultChunkSize = 48 * fs.MebiByte
defaultChunkSize = 48 * fs.Mebi
)
var (
@@ -60,14 +60,15 @@ func init() {
Name: "putio",
Description: "Put.io",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
opt := oauthutil.Options{
NoOffline: true,
}
err := oauthutil.Config(ctx, "putio", name, m, putioConfig, &opt)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: []fs.Option{{
Name: config.ConfigEncoding,

View File

@@ -80,7 +80,7 @@ func init() {
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {

View File

@@ -26,7 +26,6 @@ import (
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/endpoints"
@@ -1017,7 +1016,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5GB.`,
The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
@@ -1039,9 +1038,9 @@ Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5MB and there can be at
chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
a file you can stream upload is 48GB. If you wish to stream upload
a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.`,
Default: minChunkSize,
Advanced: true,
@@ -1067,7 +1066,7 @@ large file of a known size to stay below this number of chunks limit.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 5GB.`,
The minimum is 0 and the maximum is 5 GiB.`,
Default: fs.SizeSuffix(maxSizeForCopy),
Advanced: true,
}, {
@@ -1221,6 +1220,11 @@ very small even with this flag.
`,
Default: false,
Advanced: true,
}, {
Name: "no_head_object",
Help: `If set, don't HEAD objects`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -1271,7 +1275,7 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
const (
metaMtime = "Mtime" // the meta key to store mtime in - e.g. X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
// The maximum size of object we can COPY - this should be 5GiB but is < 5GB for b2 compatibility
// The maximum size of object we can COPY - this should be 5 GiB but is < 5 GB for b2 compatibility
// See https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
maxSizeForCopy = 4768 * 1024 * 1024
maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload
@@ -1319,6 +1323,7 @@ type Options struct {
ListChunk int64 `config:"list_chunk"`
NoCheckBucket bool `config:"no_check_bucket"`
NoHead bool `config:"no_head"`
NoHeadObject bool `config:"no_head_object"`
Enc encoder.MultiEncoder `config:"encoding"`
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
@@ -1511,11 +1516,6 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
}),
ExpiryWindow: 3 * time.Minute,
},
// Pick up IAM role if we are in EKS
&stscreds.WebIdentityRoleProvider{
ExpiryWindow: 3 * time.Minute,
},
}
cred := credentials.NewChainCredentials(providers)
@@ -1693,7 +1693,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
GetTier: true,
SlowModTime: true,
}).Fill(ctx, f)
if f.rootBucket != "" && f.rootDirectory != "" {
if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject {
// Check to see if the (bucket,directory) is actually an existing file
oldRoot := f.root
newRoot, leaf := path.Split(oldRoot)
@@ -1730,7 +1730,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
o.setMD5FromEtag(aws.StringValue(info.ETag))
o.bytes = aws.Int64Value(info.Size)
o.storageClass = aws.StringValue(info.StorageClass)
} else {
} else if !o.fs.opt.NoHeadObject {
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
return nil, err
@@ -2831,15 +2831,23 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if err != nil {
return err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
}
o.setMetaData(resp.ETag, resp.ContentLength, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
return nil
}
func (o *Object) setMetaData(etag *string, contentLength *int64, lastModified *time.Time, meta map[string]*string, mimeType *string, storageClass *string) {
var size int64
// Ignore missing Content-Length assuming it is 0
// Some versions of ceph do this due their apache proxies
if resp.ContentLength != nil {
size = *resp.ContentLength
if contentLength != nil {
size = *contentLength
}
o.setMD5FromEtag(aws.StringValue(resp.ETag))
o.setMD5FromEtag(aws.StringValue(etag))
o.bytes = size
o.meta = resp.Metadata
o.meta = meta
if o.meta == nil {
o.meta = map[string]*string{}
}
@@ -2854,15 +2862,13 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
o.md5 = hex.EncodeToString(md5sumBytes)
}
}
o.storageClass = aws.StringValue(resp.StorageClass)
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
o.storageClass = aws.StringValue(storageClass)
if lastModified == nil {
o.lastModified = time.Now()
} else {
o.lastModified = *resp.LastModified
o.lastModified = *lastModified
}
o.mimeType = aws.StringValue(resp.ContentType)
return nil
o.mimeType = aws.StringValue(mimeType)
}
// ModTime returns the modification time of the object
@@ -2972,6 +2978,26 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil {
return nil, err
}
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified: %v", err)
}
// read size from ContentLength or ContentRange
size := resp.ContentLength
if resp.ContentRange != nil {
var contentRange = *resp.ContentRange
slash := strings.IndexRune(contentRange, '/')
if slash >= 0 {
i, err := strconv.ParseInt(contentRange[slash+1:], 10, 64)
if err == nil {
size = &i
} else {
fs.Debugf(o, "Failed to find parse integer from in %q: %v", contentRange, err)
}
} else {
fs.Debugf(o, "Failed to find length in %q", contentRange)
}
}
o.setMetaData(resp.ETag, size, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
return resp.Body, nil
}
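
A ranged GET returns a Content-Range header of the form "bytes start-end/total"; the code above recovers the full object size from the part after the slash so a HEAD request can be skipped. The same parse in isolation:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // totalFromContentRange extracts the total size from a value such as
    // "bytes 0-1023/4096", mirroring the logic added above.
    func totalFromContentRange(cr string) (int64, error) {
        slash := strings.IndexRune(cr, '/')
        if slash < 0 {
            return 0, fmt.Errorf("no length in %q", cr)
        }
        return strconv.ParseInt(cr[slash+1:], 10, 64)
    }

    func main() {
        size, err := totalFromContentRange("bytes 0-1023/4096")
        fmt.Println(size, err) // 4096 <nil>
    }
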
@@ -2997,9 +3023,9 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
// calculate size of parts
partSize := int(f.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of
// 48GB which seems like a not too unreasonable limit.
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5 MiB). With a maximum number of parts (10,000) this will be a file of
// 48 GiB which seems like a not too unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
@@ -3008,7 +3034,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
} else {
// Adjust partSize until the number of parts is small enough.
if size/int64(partSize) >= uploadParts {
// Calculate partition size rounded up to the nearest MB
// Calculate partition size rounded up to the nearest MiB
partSize = int((((size / uploadParts) >> 20) + 1) << 20)
}
}
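
A worked example of the MiB rounding above: uploading 100 GiB with at most 10,000 parts needs about 10.24 MiB per part, and the shift-and-add rounds that up to the next whole MiB:

    package main

    import "fmt"

    func main() {
        const uploadParts = 10000
        size := int64(100) << 30 // 100 GiB
        partSize := int((((size / uploadParts) >> 20) + 1) << 20)
        fmt.Println(partSize >> 20) // 11 (MiB per part)
    }
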

View File

@@ -296,36 +296,32 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// Config callback for 2FA
func Config(ctx context.Context, name string, m configmap.Mapper) {
func Config(ctx context.Context, name string, m configmap.Mapper) error {
ci := fs.GetConfig(ctx)
serverURL, ok := m.Get(configURL)
if !ok || serverURL == "" {
// If there's no server URL, it means we're trying an operation at the backend level, like a "rclone authorize seafile"
fmt.Print("\nOperation not supported on this remote.\nIf you need a 2FA code on your account, use the command:\n\nrclone config reconnect <remote name>:\n\n")
return
return errors.New("operation not supported on this remote. If you need a 2FA code on your account, use the command: nrclone config reconnect <remote name>: ")
}
// Stop if we are running non-interactive config
if ci.AutoConfirm {
return
return nil
}
u, err := url.Parse(serverURL)
if err != nil {
fs.Errorf(nil, "Invalid server URL %s", serverURL)
return
return errors.Errorf("invalid server URL %s", serverURL)
}
is2faEnabled, _ := m.Get(config2FA)
if is2faEnabled != "true" {
fmt.Println("Two-factor authentication is not enabled on this account.")
return
return errors.New("two-factor authentication is not enabled on this account")
}
username, _ := m.Get(configUser)
if username == "" {
fs.Errorf(nil, "A username is required")
return
return errors.New("a username is required")
}
password, _ := m.Get(configPassword)
@@ -376,6 +372,7 @@ func Config(ctx context.Context, name string, m configmap.Mapper) {
break
}
}
return nil
}
// sets the AuthorizationToken up

View File

@@ -16,6 +16,7 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
@@ -223,6 +224,17 @@ have a server which returns
Then you may need to enable this flag.
If concurrent reads are disabled, the use_fstat option is ignored.
`,
Advanced: true,
}, {
Name: "disable_concurrent_writes",
Default: false,
Help: `If set don't use concurrent writes
Normally rclone uses concurrent writes to upload files. This improves
the performance greatly, especially for distant servers.
This option disables concurrent writes should that be necessary.
`,
Advanced: true,
}, {
@@ -243,29 +255,30 @@ Set to 0 to keep connections indefinitely.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
IdleTimeout fs.Duration `config:"idle_timeout"`
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
DisableConcurrentWrites bool `config:"disable_concurrent_writes"`
IdleTimeout fs.Duration `config:"idle_timeout"`
}
// Fs stores the interface to the remote SFTP files
@@ -286,6 +299,7 @@ type Fs struct {
drain *time.Timer // used to drain the pool when we stop using the connections
pacer *fs.Pacer // pacer for operations
savedpswd string
transfers int32 // count in use references
}
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -348,6 +362,23 @@ func (c *conn) closed() error {
return nil
}
// Show that we are doing an upload or download
//
// Call removeTransfer() when done
func (f *Fs) addTransfer() {
atomic.AddInt32(&f.transfers, 1)
}
// Show the upload or download done
func (f *Fs) removeTransfer() {
atomic.AddInt32(&f.transfers, -1)
}
// getTransfers shows whether there are any transfers in progress
func (f *Fs) getTransfers() int32 {
return atomic.LoadInt32(&f.transfers)
}
// Open a new connection to the SFTP server.
func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
// Rate limit rate of new connections
@@ -396,7 +427,11 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
opts = append(opts,
sftp.UseFstat(f.opt.UseFstat),
sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
sftp.UseConcurrentWrites(!f.opt.DisableConcurrentWrites),
)
if f.opt.DisableConcurrentReads { // FIXME
fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
}
return sftp.NewClientPipe(pr, pw, opts...)
}
@@ -474,6 +509,13 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if transfers := f.getTransfers(); transfers != 0 {
fs.Debugf(f, "Not closing %d unused connections as %d transfers in progress", len(f.pool), transfers)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
return nil
}
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
@@ -1380,18 +1422,22 @@ func (o *Object) Storable() bool {
// objectReader represents a file open for reading on the SFTP server
type objectReader struct {
f *Fs
sftpFile *sftp.File
pipeReader *io.PipeReader
done chan struct{}
}
func newObjectReader(sftpFile *sftp.File) *objectReader {
func (f *Fs) newObjectReader(sftpFile *sftp.File) *objectReader {
pipeReader, pipeWriter := io.Pipe()
file := &objectReader{
f: f,
sftpFile: sftpFile,
pipeReader: pipeReader,
done: make(chan struct{}),
}
// Show connection in use
f.addTransfer()
go func() {
// Use sftpFile.WriteTo to pump data so that it gets a
@@ -1421,6 +1467,8 @@ func (file *objectReader) Close() (err error) {
_ = file.pipeReader.Close()
// Wait for the background process to finish
<-file.done
// Show connection no longer in use
file.f.removeTransfer()
return err
}
@@ -1454,12 +1502,27 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return nil, errors.Wrap(err, "Open Seek failed")
}
}
in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit)
in = readers.NewLimitedReadCloser(o.fs.newObjectReader(sftpFile), limit)
return in, nil
}
type sizeReader struct {
io.Reader
size int64
}
// Size returns the expected size of the stream
//
// It is used in sftpFile.ReadFrom as a hint to work out the
// concurrency needed
func (sr *sizeReader) Size() int64 {
return sr.size
}
// Update a remote sftp file using the data <in> and ModTime from <src>
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
o.fs.addTransfer() // Show transfer in progress
defer o.fs.removeTransfer()
// Clear the hash cache since we are about to update the object
o.md5sum = nil
o.sha1sum = nil
@@ -1487,7 +1550,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
fs.Debugf(src, "Removed after failed upload: %v", err)
}
}
_, err = file.ReadFrom(in)
_, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})
if err != nil {
remove()
return errors.Wrap(err, "Update ReadFrom failed")

View File

@@ -77,7 +77,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
"path"
@@ -110,10 +109,10 @@ const (
decayConstant = 2 // bigger for slower decay, exponential
apiPath = "/sf/v3" // add to endpoint to get API path
tokenPath = "/oauth/token" // add to endpoint to get Token path
minChunkSize = 256 * fs.KibiByte
maxChunkSize = 2 * fs.GibiByte
defaultChunkSize = 64 * fs.MebiByte
defaultUploadCutoff = 128 * fs.MebiByte
minChunkSize = 256 * fs.Kibi
maxChunkSize = 2 * fs.Gibi
defaultChunkSize = 64 * fs.Mebi
defaultUploadCutoff = 128 * fs.Mebi
)
// Generate a new oauth2 config which we will update when we know the TokenURL
@@ -136,7 +135,7 @@ func init() {
Name: "sharefile",
Description: "Citrix Sharefile",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
oauthConfig := newOauthConfig("")
checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error {
if auth == nil || auth.Form == nil {
@@ -157,8 +156,9 @@ func init() {
}
err := oauthutil.Config(ctx, "sharefile", name, m, oauthConfig, &opt)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: []fs.Option{{
Name: "upload_cutoff",

View File

@@ -16,7 +16,6 @@ import (
"context"
"fmt"
"io"
"log"
"net/http"
"net/url"
"path"
@@ -76,17 +75,17 @@ func init() {
Name: "sugarsync",
Description: "Sugarsync",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
log.Fatalf("Failed to read options: %v", err)
return errors.Wrap(err, "failed to read options")
}
if opt.RefreshToken != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.ConfirmWithConfig(ctx, m, "config_refresh_token", true) {
return
return nil
}
}
fmt.Printf("Username (email address)> ")
@@ -114,10 +113,11 @@ func init() {
// return shouldRetry(ctx, resp, err)
//})
if err != nil {
log.Fatalf("Failed to get token: %v", err)
return errors.Wrap(err, "failed to get token")
}
opt.RefreshToken = resp.Header.Get("Location")
m.Set("refresh_token", opt.RefreshToken)
return nil
},
Options: []fs.Option{{
Name: "app_id",

View File

@@ -36,7 +36,7 @@ import (
const (
directoryMarkerContentType = "application/directory" // content type of directory marker objects
listChunks = 1000 // chunk size to read directory listings
defaultChunkSize = 5 * fs.GibiByte
defaultChunkSize = 5 * fs.Gibi
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
)
@@ -46,7 +46,7 @@ var SharedOptions = []fs.Option{{
Help: `Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.`,
default for this is 5 GiB which is its maximum value.`,
Default: defaultChunkSize,
Advanced: true,
}, {
@@ -56,7 +56,7 @@ default for this is 5GB which is its maximum value.`,
When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked
This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal
@@ -419,7 +419,7 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
}
func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte
const minChunkSize = fs.SizeSuffixBase
if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize)
}

View File

@@ -87,7 +87,7 @@ func (f *Fs) testWithChunk(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte
f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
defer func() {
//restore old config after test
f.opt.ChunkSize = preConfChunkSize
@@ -117,7 +117,7 @@ func (f *Fs) testWithChunkFail(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte
f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
segmentContainer := f.root + "_segments"
defer func() {
//restore config
@@ -159,7 +159,7 @@ func (f *Fs) testCopyLargeObject(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte
f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
defer func() {
//restore old config after test
f.opt.ChunkSize = preConfChunkSize

View File

@@ -7,7 +7,6 @@ import (
"context"
"fmt"
"io"
"log"
"path"
"strings"
"time"
@@ -42,7 +41,7 @@ func init() {
Name: "tardigrade",
Description: "Tardigrade Decentralized Cloud Storage",
NewFs: NewFs,
Config: func(ctx context.Context, name string, configMapper configmap.Mapper) {
Config: func(ctx context.Context, name string, configMapper configmap.Mapper) error {
provider, _ := configMapper.Get(fs.ConfigProvider)
config.FileDeleteKey(name, fs.ConfigProvider)
@@ -54,7 +53,7 @@ func init() {
// satelliteString always contains a default and the passphrase can be empty
if apiKey == "" {
return
return nil
}
satellite, found := satMap[satelliteString]
@@ -64,12 +63,12 @@ func init() {
access, err := uplink.RequestAccessWithPassphrase(context.TODO(), satellite, apiKey, passphrase)
if err != nil {
log.Fatalf("Couldn't create access grant: %v", err)
return errors.Wrap(err, "couldn't create access grant")
}
serializedAccess, err := access.Serialize()
if err != nil {
log.Fatalf("Couldn't serialize access grant: %v", err)
return errors.Wrap(err, "couldn't serialize access grant")
}
configMapper.Set("satellite_address", satellite)
configMapper.Set("access_grant", serializedAccess)
@@ -78,8 +77,9 @@ func init() {
config.FileDeleteKey(name, "api_key")
config.FileDeleteKey(name, "passphrase")
} else {
log.Fatalf("Invalid provider type: %s", provider)
return errors.Errorf("invalid provider type: %s", provider)
}
return nil
},
Options: []fs.Option{
{


@@ -148,13 +148,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (_ io.ReadC
case s && !e:
offset = opt.Start
case !s && e:
object, err := o.fs.project.StatObject(ctx, bucketName, bucketPath)
if err != nil {
return nil, err
}
offset = object.System.ContentLength - opt.End
length = opt.End
offset = -opt.End
}
case *fs.SeekOption:
offset = opt.Offset
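In other words (a sketch under my reading of the diff, not the backend's actual code): when only an End is given, the backend now encodes "last E bytes" as a negative offset and lets the download call resolve it against the object length, instead of paying for a StatObject round trip first:

package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

// decodeRange sketches how Open's options map to an offset/length pair
// after this change. Assumption: the underlying uplink download call
// treats a negative offset as counting back from the end of the object.
func decodeRange(options []fs.OpenOption) (offset, length int64) {
	offset, length = 0, -1
	for _, option := range options {
		switch opt := option.(type) {
		case *fs.RangeOption:
			s, e := opt.Start >= 0, opt.End >= 0
			switch {
			case s && e:
				offset, length = opt.Start, opt.End-opt.Start+1
			case s && !e:
				offset = opt.Start
			case !s && e:
				offset = -opt.End // read the last opt.End bytes
			}
		case *fs.SeekOption:
			offset = opt.Offset
		}
	}
	return offset, length
}

func main() {
	// "Give me the last 16 bytes" now becomes offset -16, length -1.
	fmt.Println(decodeRange([]fs.OpenOption{&fs.RangeOption{Start: -1, End: 16}}))
}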


@@ -0,0 +1,170 @@
package api
import "fmt"
// Error contains the error code and message returned by the API
type Error struct {
Success bool `json:"success,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Message string `json:"message,omitempty"`
Data string `json:"data,omitempty"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("api error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Data != "" {
out += ": " + e.Data
}
return out
}
// FolderEntry represents an Uptobox subfolder when listing folder contents
type FolderEntry struct {
FolderID uint64 `json:"fld_id"`
Description string `json:"fld_descr"`
Password string `json:"fld_password"`
FullPath string `json:"fullPath"`
Path string `json:"fld_name"`
Name string `json:"name"`
Hash string `json:"hash"`
}
// FolderInfo represents the current folder when listing folder contents
type FolderInfo struct {
FolderID uint64 `json:"fld_id"`
Hash string `json:"hash"`
FileCount uint64 `json:"fileCount"`
TotalFileSize int64 `json:"totalFileSize"`
}
// FileInfo represents a file when listing folder contents
type FileInfo struct {
Name string `json:"file_name"`
Description string `json:"file_descr"`
Created string `json:"file_created"`
Size int64 `json:"file_size"`
Downloads uint64 `json:"file_downloads"`
Code string `json:"file_code"`
Password string `json:"file_password"`
Public int `json:"file_public"`
LastDownload string `json:"file_last_download"`
ID uint64 `json:"id"`
}
// ReadMetadataResponse is the response when listing folder contents
type ReadMetadataResponse struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
CurrentFolder FolderInfo `json:"currentFolder"`
Folders []FolderEntry `json:"folders"`
Files []FileInfo `json:"files"`
PageCount int `json:"pageCount"`
TotalFileCount int `json:"totalFileCount"`
TotalFileSize int64 `json:"totalFileSize"`
} `json:"data"`
}
// UploadInfo is the response when initiating an upload
type UploadInfo struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
UploadLink string `json:"uploadLink"`
MaxUpload string `json:"maxUpload"`
} `json:"data"`
}
// UploadResponse is the response to a successful upload
type UploadResponse struct {
Files []struct {
Name string `json:"name"`
Size int64 `json:"size"`
URL string `json:"url"`
DeleteURL string `json:"deleteUrl"`
} `json:"files"`
}
// UpdateResponse is a generic response to various actions on files (rename/copy/move)
type UpdateResponse struct {
Message string `json:"message"`
StatusCode int `json:"statusCode"`
}
// Download is the response when requesting a download link
type Download struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
DownloadLink string `json:"dlLink"`
} `json:"data"`
}
// MetadataRequestOptions represents all the options when listing folder contents
type MetadataRequestOptions struct {
Limit uint64
Offset uint64
SearchField string
Search string
}
// CreateFolderRequest is used for creating a folder
type CreateFolderRequest struct {
Token string `json:"token"`
Path string `json:"path"`
Name string `json:"name"`
}
// DeleteFolderRequest is used for deleting a folder
type DeleteFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
}
// CopyMoveFileRequest is used for moving/copying a file
type CopyMoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// MoveFolderRequest is used for moving a folder
type MoveFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// RenameFolderRequest is used for renaming a folder
type RenameFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
NewName string `json:"new_name"`
}
// UpdateFileInformation is used for updating file properties (rename, description, password, public)
type UpdateFileInformation struct {
Token string `json:"token"`
FileCode string `json:"file_code"`
NewName string `json:"new_name,omitempty"`
Description string `json:"description,omitempty"`
Password string `json:"password,omitempty"`
Public string `json:"public,omitempty"`
}
// RemoveFileRequest is used for deleting a file
type RemoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
}
// Token represents the authentication token
type Token struct {
Token string `json:"token"`
}
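To see these types in action, here is a quick standalone decode of a failed-call payload (the JSON body is invented, and the import path is an assumption; the diff only shows the package name api):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/rclone/rclone/backend/uptobox/api" // assumed location of this file
)

func main() {
	// Invented sample payload matching the Error struct's JSON tags.
	body := []byte(`{"success":false,"statusCode":16,"message":"Invalid token","data":"bad token"}`)
	var apiErr api.Error
	if err := json.Unmarshal(body, &apiErr); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// api.Error satisfies the error interface, so it can be returned
	// directly from backend calls.
	var err error = apiErr
	fmt.Println(err) // prints: api error 16: Invalid token: bad token
}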

backend/uptobox/uptobox.go (new file, 1053 lines)

File diff suppressed because it is too large


@@ -0,0 +1,21 @@
// Test Uptobox filesystem interface
package uptobox_test
import (
"testing"
"github.com/rclone/rclone/backend/uptobox"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestUptobox:"
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*uptobox.Object)(nil),
})
}


@@ -125,7 +125,7 @@ func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieRespo
return nil, errors.Wrap(err, "Error while constructing endpoint URL")
}
u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
u, err := url.Parse(spRoot.Scheme + "://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
if err != nil {
return nil, errors.Wrap(err, "Error while constructing login URL")
}


@@ -60,12 +60,12 @@ func init() {
Name: "yandex",
Description: "Yandex Disk",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
err := oauthutil.Config(ctx, "yandex", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
return errors.Wrap(err, "failed to configure token")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding,
@@ -251,22 +251,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
token, err := oauthutil.GetToken(name, m)
if err != nil {
log.Fatalf("Couldn't read OAuth token (this should never happen).")
return nil, errors.Wrap(err, "couldn't read OAuth token")
}
if token.RefreshToken == "" {
log.Fatalf("Unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
return nil, errors.New("unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
}
if token.TokenType != "OAuth" {
token.TokenType = "OAuth"
err = oauthutil.PutToken(name, m, token, false)
if err != nil {
log.Fatalf("Couldn't save OAuth token (this should never happen).")
return nil, errors.Wrap(err, "couldn't save OAuth token")
}
log.Printf("Automatically upgraded OAuth config.")
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure Yandex: %v", err)
return nil, errors.Wrap(err, "failed to configure Yandex")
}
ci := fs.GetConfig(ctx)


@@ -7,7 +7,6 @@ import (
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
"path"
@@ -73,32 +72,41 @@ func init() {
Name: "zoho",
Description: "Zoho",
NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) {
Config: func(ctx context.Context, name string, m configmap.Mapper) error {
// Need to set up region before configuring oauth
setupRegion(m)
err := setupRegion(m)
if err != nil {
return err
}
opt := oauthutil.Options{
// No refresh token unless ApprovalForce is set
OAuth2Opts: []oauth2.AuthCodeOption{oauth2.ApprovalForce},
}
if err := oauthutil.Config(ctx, "zoho", name, m, oauthConfig, &opt); err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
// We need to rewrite the token type to "Zoho-oauthtoken" because Zoho wants
// its own custom type
token, err := oauthutil.GetToken(name, m)
if err != nil {
log.Fatalf("Failed to read token: %v", err)
return errors.Wrap(err, "failed to read token")
}
if token.TokenType != "Zoho-oauthtoken" {
token.TokenType = "Zoho-oauthtoken"
err = oauthutil.PutToken(name, m, token, false)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return errors.Wrap(err, "failed to configure token")
}
}
if err = setupRoot(ctx, name, m); err != nil {
log.Fatalf("Failed to configure root directory: %v", err)
if fs.GetConfig(ctx).AutoConfirm {
return nil
}
if err = setupRoot(ctx, name, m); err != nil {
return errors.Wrap(err, "failed to configure root directory")
}
return nil
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
@@ -159,15 +167,16 @@ type Object struct {
// ------------------------------------------------------------
func setupRegion(m configmap.Mapper) {
func setupRegion(m configmap.Mapper) error {
region, ok := m.Get("region")
if !ok {
log.Fatalf("No region set\n")
if !ok || region == "" {
return errors.New("no region set")
}
rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)
accountsURL = fmt.Sprintf("https://accounts.zoho.%s", region)
oauthConfig.Endpoint.AuthURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/auth", region)
oauthConfig.Endpoint.TokenURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/token", region)
return nil
}
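For example, with region set to "eu" this resolves to rootURL https://workdrive.zoho.eu/api/v1 and accountsURL https://accounts.zoho.eu, with the OAuth endpoints https://accounts.zoho.eu/oauth/v2/auth and https://accounts.zoho.eu/oauth/v2/token.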
// ------------------------------------------------------------
@@ -203,7 +212,7 @@ func listWorkspaces(ctx context.Context, teamID string, srv *rest.Client) ([]api
func setupRoot(ctx context.Context, name string, m configmap.Mapper) error {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
return errors.Wrap(err, "failed to load oAuthClient")
}
authSrv := rest.NewClient(oAuthClient).SetRoot(accountsURL)
opts := rest.Opts{
@@ -372,7 +381,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err := configstruct.Set(m, opt); err != nil {
return nil, err
}
setupRegion(m)
err := setupRegion(m)
if err != nil {
return nil, err
}
root = parsePath(root)
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)


@@ -62,6 +62,7 @@ docs = [
"sftp.md",
"sugarsync.md",
"tardigrade.md",
"uptobox.md",
"union.md",
"webdav.md",
"yandex.md",


@@ -44,10 +44,10 @@ var commandDefinition = &cobra.Command{
Use: "about remote:",
Short: `Get quota information from the remote.`,
Long: `
` + "`rclone about`" + `prints quota information about a remote to standard
` + "`rclone about`" + ` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
E.g. Typical output from` + "`rclone about remote:`" + `is:
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17G
Used: 7.444G
@@ -75,7 +75,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
A ` + "`--json`" + `flag generates conveniently computer readable output, e.g.
A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,


@@ -54,6 +54,7 @@ import (
_ "github.com/rclone/rclone/cmd/size"
_ "github.com/rclone/rclone/cmd/sync"
_ "github.com/rclone/rclone/cmd/test"
_ "github.com/rclone/rclone/cmd/test/changenotify"
_ "github.com/rclone/rclone/cmd/test/histogram"
_ "github.com/rclone/rclone/cmd/test/info"
_ "github.com/rclone/rclone/cmd/test/makefiles"


@@ -49,7 +49,7 @@ var (
cpuProfile = flags.StringP("cpuprofile", "", "", "Write cpu profile to file")
memProfile = flags.StringP("memprofile", "", "", "Write memory profile to file")
statsInterval = flags.DurationP("stats", "", time.Minute*1, "Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable)")
dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes'/s")
dataRateUnit = flags.StringP("stats-unit", "", "bytes", "Show data rate in stats as either 'bits' or 'bytes' per second")
version bool
retries = flags.IntP("retries", "", 3, "Retry operations this many times if they fail")
retriesInterval = flags.DurationP("retries-sleep", "", 0, "Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m. (0 to disable)")
@@ -75,8 +75,19 @@ const (
// ShowVersion prints the version to stdout
func ShowVersion() {
osVersion, osKernel := buildinfo.GetOSVersion()
if osVersion == "" {
osVersion = "unknown"
}
if osKernel == "" {
osKernel = "unknown"
}
linking, tagString := buildinfo.GetLinkingAndTags()
fmt.Printf("rclone %s\n", fs.Version)
fmt.Printf("- os/version: %s\n", osVersion)
fmt.Printf("- os/kernel: %s\n", osKernel)
fmt.Printf("- os/type: %s\n", runtime.GOOS)
fmt.Printf("- os/arch: %s\n", runtime.GOARCH)
fmt.Printf("- go/version: %s\n", runtime.Version())
@@ -389,7 +400,7 @@ func initConfig() {
configflags.SetFlags(ci)
// Load the config
configfile.LoadConfig(ctx)
configfile.Install()
// Start accounting
accounting.Start(ctx)
@@ -553,7 +564,7 @@ func Main() {
setupRootCommand(Root)
AddBackendFlags()
if err := Root.Execute(); err != nil {
if strings.HasPrefix(err.Error(), "unknown command") {
if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
}
log.Fatalf("Fatal error: %v", err)


@@ -21,6 +21,7 @@ import (
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/buildinfo"
"github.com/rclone/rclone/vfs"
)
@@ -35,6 +36,7 @@ func init() {
cmd.Aliases = append(cmd.Aliases, "cmount")
}
mountlib.AddRc("cmount", mount)
buildinfo.Tags = append(buildinfo.Tags, "cmount")
}
// Find the option string in the current options


@@ -22,6 +22,7 @@ func init() {
cmd.Root.AddCommand(configCommand)
configCommand.AddCommand(configEditCommand)
configCommand.AddCommand(configFileCommand)
configCommand.AddCommand(configTouchCommand)
configCommand.AddCommand(configShowCommand)
configCommand.AddCommand(configDumpCommand)
configCommand.AddCommand(configProvidersCommand)
@@ -41,9 +42,9 @@ var configCommand = &cobra.Command{
remotes and manage existing ones. You may also set or remove a
password to protect your configuration.
`,
Run: func(command *cobra.Command, args []string) {
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(0, 0, command, args)
config.EditConfig(context.Background())
return config.EditConfig(context.Background())
},
}
@@ -63,6 +64,15 @@ var configFileCommand = &cobra.Command{
},
}
var configTouchCommand = &cobra.Command{
Use: "touch",
Short: `Ensure configuration file exists.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
config.SaveConfig()
},
}
var configShowCommand = &cobra.Command{
Use: "show [<remote>]",
Short: `Print (decrypted) config file, or the config for a single remote.`,
@@ -262,8 +272,7 @@ This normally means going through the interactive oauth flow again.
if fsInfo.Config == nil {
return errors.Errorf("%s: doesn't support Reconnect", configName)
}
fsInfo.Config(ctx, configName, config)
return nil
return fsInfo.Config(ctx, configName, config)
},
}


@@ -36,7 +36,7 @@ var commandDefinition = &cobra.Command{
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting ` + "`--auto-filename`" + `will cause the file name to be retrieved from
Setting ` + "`--auto-filename`" + ` will cause the file name to be retrieved from
the from URL (after any redirections) and used in the destination
path. With ` + "`--print-filename`" + ` in addition, the resulting file name will
be printed.


@@ -36,8 +36,8 @@ If you supply the |--rmdirs| flag, it will remove all empty directories along wi
You can also use the separate command |rmdir| or |rmdirs| to
delete empty directories only.
For example, to delete all files bigger than 100MBytes, you may first want to check what
would be deleted (use either):
For example, to delete all files bigger than 100 MiB, you may first want to
check what would be deleted (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
@@ -46,8 +46,8 @@ Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
That reads "delete everything with a minimum size of 100 MiB", hence
delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
|--dry-run| or the |--interactive|/|-i| flag.


@@ -3,7 +3,6 @@ package link
import (
"context"
"fmt"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
@@ -13,7 +12,7 @@ import (
)
var (
expire = fs.Duration(time.Hour * 24 * 365 * 100)
expire = fs.DurationOff
unlink = false
)


@@ -206,9 +206,9 @@ When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved
from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/)
command. Remotes with unlimited storage may report the used size only,
then an additional 1PB of free space is assumed. If the remote does not
then an additional 1 PiB of free space is assumed. If the remote does not
[support](https://rclone.org/overview/#optional-features) the about feature
at all, then 1PB is set as both the total and the free size.
at all, then 1 PiB is set as both the total and the free size.
**Note**: As of |rclone| 1.52.2, |rclone mount| now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.
metadata about files like in UNIX. One case that may arise is that another program
(incorrectly) interprets this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
@@ -342,19 +342,38 @@ by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to t
#### Windows caveats
Note that drives created as Administrator are not visible by other
accounts (including the account that was elevated as
Administrator). So if you start a Windows drive from an Administrative
Command Prompt and then try to access the same drive from Explorer
(which does not run as Administrator), you will not be able to see the
new drive.
Drives created as Administrator are not visible to other accounts,
not even an account that was elevated to Administrator with the
User Account Control (UAC) feature. A result of this is that if you mount
to a drive letter from a Command Prompt run as Administrator, and then try
to access the same drive from Windows Explorer (which does not run as
Administrator), you will not be able to see the mounted drive.
The easiest way around this is to start the drive from a normal
command prompt. It is also possible to start a drive from the SYSTEM
account (using [the WinFsp.Launcher
infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
If you don't need to access the drive from applications running with
administrative privileges, the easiest way around this is to always
create the mount from a non-elevated command prompt.
To make mapped drives available to the user account that created them
regardless if elevated or not, there is a special Windows setting called
[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
that can be enabled.
It is also possible to make a drive mount available to everyone on the system,
by running the process creating it as the built-in SYSTEM account.
There are several ways to do this: One is to use the command-line
utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.
Read more in the [install documentation](https://rclone.org/install/).
Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.
### Limitations


@@ -21,7 +21,7 @@ import (
func TestRc(t *testing.T) {
ctx := context.Background()
configfile.LoadConfig(ctx)
configfile.Install()
mount := rc.Calls.Get("mount/mount")
assert.NotNil(t, mount)
unmount := rc.Calls.Get("mount/unmount")


@@ -373,7 +373,7 @@ func (u *UI) Draw() error {
extras := ""
if u.showCounts {
if count > 0 {
extras += fmt.Sprintf("%8v ", fs.SizeSuffix(count))
extras += fmt.Sprintf("%8v ", fs.CountSuffix(count))
} else {
extras += " "
}
@@ -385,9 +385,9 @@ func (u *UI) Draw() error {
}
if u.showDirAverageSize {
if averageSize > 0 {
extras += fmt.Sprintf("%8v ", fs.SizeSuffix(int64(averageSize)))
extras += fmt.Sprintf("%9v ", fs.SizeSuffix(int64(averageSize)))
} else {
extras += " "
extras += " "
}
}
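The distinction being drawn here: object counts are now formatted with fs.CountSuffix (decimal, SI prefixes) while byte sizes stay with fs.SizeSuffix (binary prefixes). A tiny sketch; the exact output strings depend on the rclone version, so treat the comments as approximate:

package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

func main() {
	fmt.Println(fs.CountSuffix(1500000)) // decimal prefixes: roughly "1.500M"
	fmt.Println(fs.SizeSuffix(1 << 20))  // binary prefixes: roughly "1Mi"
}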
@@ -406,7 +406,7 @@ func (u *UI) Draw() error {
}
extras += "[" + graph[graphBars-bars:2*graphBars-bars] + "] "
}
Linef(0, y, w, fg, bg, ' ', "%c %8v %s%c%s%s", fileFlag, fs.SizeSuffix(size), extras, mark, path.Base(entry.Remote()), message)
Linef(0, y, w, fg, bg, ' ', "%c %9v %s%c%s%s", fileFlag, fs.SizeSuffix(size), extras, mark, path.Base(entry.Remote()), message)
y++
}
}
@@ -485,11 +485,15 @@ func (u *UI) removeEntry(pos int) {
// delete the entry at the current position
func (u *UI) delete() {
if u.d == nil || len(u.entries) == 0 {
return
}
ctx := context.Background()
dirPos := u.sortPerm[u.dirPosMap[u.path].entry]
entry := u.entries[dirPos]
cursorPos := u.dirPosMap[u.path]
dirPos := u.sortPerm[cursorPos.entry]
dirEntry := u.entries[dirPos]
u.boxMenu = []string{"cancel", "confirm"}
if obj, isFile := entry.(fs.Object); isFile {
if obj, isFile := dirEntry.(fs.Object); isFile {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
@@ -499,27 +503,33 @@ func (u *UI) delete() {
return "", err
}
u.removeEntry(dirPos)
if cursorPos.entry >= len(u.entries) {
u.move(-1) // move back onto a valid entry
}
return "Successfully deleted file!", nil
}
u.popupBox([]string{
"Delete this file?",
u.fsName + entry.String()})
u.fsName + dirEntry.String()})
} else {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
}
err := operations.Purge(ctx, f, entry.String())
err := operations.Purge(ctx, f, dirEntry.String())
if err != nil {
return "", err
}
u.removeEntry(dirPos)
if cursorPos.entry >= len(u.entries) {
u.move(-1) // move back onto a valid entry
}
return "Successfully purged folder!", nil
}
u.popupBox([]string{
"Purge this directory?",
"ALL files in it will be deleted",
u.fsName + entry.String()})
u.fsName + dirEntry.String()})
}
}


@@ -7,12 +7,19 @@ import (
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
var (
size = int64(-1)
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.Int64VarP(cmdFlags, &size, "size", "", size, "File size hint to preallocate")
}
var commandDefinition = &cobra.Command{
@@ -37,6 +44,13 @@ must fit into RAM. The cutoff needs to be small enough to adhere
the limits of your remote, please see there. Generally speaking,
setting this cutoff too high will decrease your performance.
Use the |--size| flag to preallocate the file in advance at the remote end
and actually stream it, even if the remote backend doesn't support streaming.
|--size| should be the exact size of the input stream in bytes. If the
size of the stream differs from the |--size| passed in, the transfer
will likely fail.
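For example (the file names and the size here are purely illustrative):

    rclone rcat --size 1048576 remote:path/to/file < data.bin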
Note that the upload can also not be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
@@ -51,7 +65,7 @@ a lot of data, you're better off caching locally and then
fdst, dstFileName := cmd.NewFsDstFile(args)
cmd.Run(false, false, command, func() error {
_, err := operations.Rcat(context.Background(), fdst, dstFileName, os.Stdin, time.Now())
_, err := operations.RcatSize(context.Background(), fdst, dstFileName, os.Stdin, size, time.Now())
return err
})
},


@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
// Note: "|" will be replaced by backticks in the help string below
@@ -27,7 +29,7 @@ If the old version contains only dots and digits (for example |v1.54.0|)
then it's a stable release so you won't need the |--beta| flag. Beta releases
have additional information similar to |v1.54.0-beta.5111.06f1c0c61|.
(if you are a developer and use a locally built rclone, the version number
will end with |-DEV|, you will have to rebuild it as it obvisously can't
will end with |-DEV|, you will have to rebuild it as it obviously can't
be distributed).
If you previously installed rclone via a package manager, the package may


@@ -0,0 +1,11 @@
// +build noselfupdate
package selfupdate
import (
"github.com/rclone/rclone/lib/buildinfo"
)
func init() {
buildinfo.Tags = append(buildinfo.Tags, "noselfupdate")
}


@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (
@@ -143,14 +145,9 @@ func InstallUpdate(ctx context.Context, opt *Options) error {
return errors.New("--stable and --beta are mutually exclusive")
}
gotCmount := false
for _, tag := range buildinfo.Tags {
if tag == "cmount" {
gotCmount = true
break
}
}
if gotCmount && !cmount.ProvidedBy(runtime.GOOS) {
// The `cmount` tag is added by cmd/cmount/mount.go only if build is static.
_, tags := buildinfo.GetLinkingAndTags()
if strings.Contains(" "+tags+" ", " cmount ") && !cmount.ProvidedBy(runtime.GOOS) {
return errors.New("updating would discard the mount FUSE capability, aborting")
}
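The replacement check uses a small idiom worth noting: padding both the haystack and the needle with spaces makes strings.Contains do whole-word matching over a space-separated tag list. A standalone sketch:

package main

import (
	"fmt"
	"strings"
)

// hasTag reports whether tag appears as a whole word in the
// space-separated list tags.
func hasTag(tags, tag string) bool {
	return strings.Contains(" "+tags+" ", " "+tag+" ")
}

func main() {
	fmt.Println(hasTag("cmount noselfupdate", "cmount")) // true
	fmt.Println(hasTag("documount", "mount"))            // false: no partial matches
}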


@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (


@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (


@@ -1,4 +1,5 @@
// +build !windows,!plan9,!js
// +build !noselfupdate
package selfupdate


@@ -1,4 +1,5 @@
// +build plan9 js
// +build !noselfupdate
package selfupdate


@@ -1,4 +1,5 @@
// +build windows
// +build !noselfupdate
package selfupdate


@@ -0,0 +1,5 @@
// +build noselfupdate
package cmd
const selfupdateEnabled = false


@@ -0,0 +1,7 @@
// +build !noselfupdate
package cmd
// This constant must be in the `cmd` package rather than `cmd/selfupdate`
// to prevent build failure due to dependency loop.
const selfupdateEnabled = true
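With this pair of files, a default build compiles selfupdateEnabled to true, while building with go build -tags noselfupdate flips it to false and, together with the matching !noselfupdate constraints on the files above, compiles the selfupdate machinery out entirely.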


@@ -41,7 +41,7 @@ func startServer(t *testing.T, f fs.Fs) {
}
func TestInit(t *testing.T) {
configfile.LoadConfig(context.Background())
configfile.Install()
f, err := fs.NewFs(context.Background(), "testdata/files")
l, _ := f.List(context.Background(), "")


@@ -61,7 +61,7 @@ var (
func TestInit(t *testing.T) {
ctx := context.Background()
// Configure the remote
configfile.LoadConfig(context.Background())
configfile.Install()
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true


@@ -367,7 +367,7 @@ footer {
}
};
function readableFileSize(size) {
var units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'];
var units = ['B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'];
var i = 0;
while(size >= 1024) {
size /= 1024;


@@ -1,7 +1,6 @@
package restic
import (
"context"
"crypto/rand"
"encoding/hex"
"io"
@@ -65,8 +64,7 @@ func createOverwriteDeleteSeq(t testing.TB, path string) []TestRequest {
// TestResticHandler runs tests on the restic handler code, especially in append-only mode.
func TestResticHandler(t *testing.T) {
ctx := context.Background()
configfile.LoadConfig(ctx)
configfile.Install()
buf := make([]byte, 32)
_, err := io.ReadFull(rand.Reader, buf)
require.NoError(t, err)


@@ -44,7 +44,7 @@ var commandDefinition = &cobra.Command{
}
fmt.Printf("Total objects: %d\n", results.Count)
fmt.Printf("Total size: %s (%d Bytes)\n", fs.SizeSuffix(results.Bytes).Unit("Bytes"), results.Bytes)
fmt.Printf("Total size: %s (%d bytes)\n", fs.SizeSuffix(results.Bytes).ByteUnit(), results.Bytes)
return nil
})


@@ -0,0 +1,54 @@
// Package changenotify tests rclone's changenotify support
package changenotify
import (
"context"
"errors"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/spf13/cobra"
)
var (
pollInterval = 10 * time.Second
)
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.DurationVarP(cmdFlags, &pollInterval, "poll-interval", "", pollInterval, "Time to wait between polling for changes.")
}
var commandDefinition = &cobra.Command{
Use: "changenotify remote:",
Short: `Log any change notify requests for the remote passed in.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
ctx := context.Background()
// Start polling function
features := f.Features()
if do := features.ChangeNotify; do != nil {
pollChan := make(chan time.Duration)
do(ctx, changeNotify, pollChan)
pollChan <- pollInterval
fs.Logf(nil, "Waiting for changes, polling every %v", pollInterval)
} else {
return errors.New("poll-interval is not supported by this remote")
}
select {}
},
}
// changeNotify logs the path and entry type of each change
// notification received from the remote.
func changeNotify(relativePath string, entryType fs.EntryType) {
fs.Logf(nil, "%q: %v", relativePath, entryType)
}
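A typical invocation would be rclone test changenotify remote: --poll-interval 30s; the trailing select {} blocks forever, so the command keeps logging change notifications until interrupted.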


@@ -3,12 +3,12 @@
package makefiles
import (
cryptrand "crypto/rand"
"io"
"log"
"math/rand"
"os"
"path/filepath"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
@@ -27,8 +27,10 @@ var (
maxFileSize = fs.SizeSuffix(100)
minFileNameLength = 4
maxFileNameLength = 12
seed = int64(1)
// Globals
randSource *rand.Rand
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
@@ -44,6 +46,7 @@ func init() {
flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
flags.Int64VarP(cmdFlags, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
}
var commandDefinition = &cobra.Command{
@@ -51,28 +54,36 @@ var commandDefinition = &cobra.Command{
Short: `Make a random file hierarchy in <dir>`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
outputDirectory := args[0]
directoriesToCreate = numberOfFiles / averageFilesPerDirectory
averageSize := (minFileSize + maxFileSize) / 2
log.Printf("Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
start := time.Now()
fs.Logf(nil, "Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
root := &dir{name: outputDirectory, depth: 1}
for totalDirectories < directoriesToCreate {
root.createDirectories()
}
dirs := root.list("", []string{})
totalBytes := int64(0)
for i := 0; i < numberOfFiles; i++ {
dir := dirs[rand.Intn(len(dirs))]
writeFile(dir, fileName())
dir := dirs[randSource.Intn(len(dirs))]
totalBytes += writeFile(dir, fileName())
}
log.Printf("Done.")
dt := time.Since(start)
fs.Logf(nil, "Written %viB in %v at %viB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
length := rand.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
name = random.String(length)
length := randSource.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
name = random.StringFn(length, randSource.Intn)
if _, found := fileNames[name]; !found {
break
}
@@ -99,7 +110,7 @@ func (d *dir) createDirectories() {
}
d.children = append(d.children, newDir)
totalDirectories++
switch rand.Intn(4) {
switch randSource.Intn(4) {
case 0:
if d.depth < maxDepth {
newDir.createDirectories()
@@ -122,7 +133,7 @@ func (d *dir) list(path string, output []string) []string {
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) {
func writeFile(dir, name string) int64 {
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
@@ -132,8 +143,8 @@ func writeFile(dir, name string) {
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := rand.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, cryptrand.Reader, size)
size := randSource.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, randSource, size)
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
@@ -141,4 +152,6 @@ func writeFile(dir, name string) {
if err != nil {
log.Fatalf("Failed to close file %q: %v", path, err)
}
fs.Infof(path, "Written file size %v", fs.SizeSuffix(size))
return size
}
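Two details of the seeding above are worth spelling out: a fixed --seed makes the generated tree reproducible (--seed 0 switches to a time-based seed), and *rand.Rand implements io.Reader, which is why randSource can be handed straight to io.CopyN as the data source. A standalone sketch of the reproducibility half:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	var seed int64 = 1 // imagine this arrived via --seed
	if seed == 0 {
		seed = time.Now().UnixNano()
		fmt.Printf("Using random seed = %d\n", seed)
	}
	randSource := rand.New(rand.NewSource(seed))
	// The same seed always yields the same sequence.
	fmt.Println(randSource.Intn(100), randSource.Intn(100))
}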


@@ -29,13 +29,16 @@ var commandDefinition = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Long: `
Show the rclone version number, the go version, the build target OS and
architecture, build tags and the type of executable (static or dynamic).
Show the rclone version number, the go version, the build target
OS and architecture, the runtime OS and kernel version and bitness,
build tags and the type of executable (static or dynamic).
For example:
$ rclone version
rclone v1.54
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16


@@ -26,12 +26,12 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
}
// re-wire
oldOsStdout := os.Stdout
oldConfigPath := config.ConfigPath
config.ConfigPath = path
oldConfigPath := config.GetConfigPath()
assert.NoError(t, config.SetConfigPath(path))
os.Stdout = nil
defer func() {
os.Stdout = oldOsStdout
config.ConfigPath = oldConfigPath
assert.NoError(t, config.SetConfigPath(oldConfigPath))
}()
cmd.Root.SetArgs([]string{"version"})


@@ -152,6 +152,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}


@@ -227,16 +227,16 @@ Checkpoint for internal polling (debug).
#### --acd-upload-wait-per-gb
Additional time per GB to wait after a failed complete upload to see if it appears.
Additional time per GiB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. This parameter controls the time rclone waits
happens sometimes for files over 1 GiB in size and nearly every time for
files bigger than 10 GiB. This parameter controls the time rclone waits
for the file to appear.
The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GB uploaded to see if the
The default value for this parameter is 3 minutes per GiB, so by
default it will wait 3 minutes for every GiB uploaded to see if the
file appears.
You can disable this feature by setting it to 0. This may cause
@@ -260,7 +260,7 @@ Files >= this size will be downloaded via their tempLink.
Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10GB. The default for this is 9GB which
of files bigger than about 10 GiB. The default for this is 9 GiB which
shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink"
@@ -299,7 +299,7 @@ Amazon Drive has an internal limit of file sizes that can be uploaded
to the service. This limit is not officially published, but all files
larger than this will fail.
At the time of writing (Jan 2016) is in the area of 50GB per file.
At the time of writing (Jan 2016) it is in the area of 50 GiB per file.
This means that larger files are likely to fail.
Unfortunately there is no way for rclone to see that this failure is


@@ -372,7 +372,7 @@ put them back in again.` >}}
* Fred <fred@creativeprojects.tech>
* Sébastien Gross <renard@users.noreply.github.com>
* Maxime Suret <11944422+msuret@users.noreply.github.com>
* Caleb Case <caleb@storj.io>
* Caleb Case <caleb@storj.io> <calebcase@gmail.com>
* Ben Zenker <imbenzenker@gmail.com>
* Martin Michlmayr <tbm@cyrius.com>
* Brandon McNama <bmcnama@pagerduty.com>
@@ -478,3 +478,13 @@ put them back in again.` >}}
* Manish Kumar <krmanish260@gmail.com>
* x0b <x0bdev@gmail.com>
* CERN through the CS3MESH4EOSC Project
* Nick Gaya <nicholasgaya+github@gmail.com>
* Ashok Gelal <401055+ashokgelal@users.noreply.github.com>
* Dominik Mydlil <dominik.mydlil@outlook.com>
* Nazar Mishturak <nazarmx@gmail.com>
* Ansh Mittal <iamAnshMittal@gmail.com>
* noabody <noabody@yahoo.com>
* OleFrost <82263101+olefrost@users.noreply.github.com>
* Kenny Parsons <kennyparsons93@gmail.com>
* Jeffrey Tolar <tolar.jeffrey@gmail.com>
* jtagcat <git-514635f7@jtag.cat>


@@ -269,7 +269,7 @@ Leave blank normally.
#### --azureblob-upload-cutoff
Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)
- Config: upload_cutoff
- Env Var: RCLONE_AZUREBLOB_UPLOAD_CUTOFF
@@ -278,7 +278,7 @@ Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
#### --azureblob-chunk-size
Upload chunk size (<= 100MB).
Upload chunk size (<= 100 MiB).
Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.


@@ -155,8 +155,8 @@ depending on your hardware, how big the files are, how much you want
to load your computer, etc. The default of `--transfers 4` is
definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use
a 96 MB RAM buffer by default. There can be at most `--transfers` of
Note that uploading big files (bigger than 200 MiB by default) will use
a 96 MiB RAM buffer by default. There can be at most `--transfers` of
these in use at any moment, so this sets the upper limit on the memory
used.
@@ -172,11 +172,6 @@ the file instead of hiding it.
Old versions of files, where available, are visible using the
`--b2-versions` flag.
**NB** Note that `--b2-versions` does not work with crypt at the
moment [#1627](https://github.com/rclone/rclone/issues/1627). Using
[--backup-dir](/docs/#backup-dir-dir) with rclone is the recommended
way of working around this.
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also
@@ -406,7 +401,7 @@ Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).
This value should be set no larger than 4.657 GiB (== 5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
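The two figures describe the same limit in different units: 5 GB = 5,000,000,000 bytes, and 5,000,000,000 / 1024^3 ≈ 4.657 GiB.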
@@ -420,7 +415,7 @@ Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 4.6GB.
The minimum is 0 and the maximum is 4.6 GiB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF


@@ -225,10 +225,10 @@ as they can't be used in JSON strings.
### Transfers ###
For files above 50MB rclone will use a chunked transfer. Rclone will
For files above 50 MiB rclone will use a chunked transfer. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
the multipart uploads). Chunks are buffered in memory and are
normally 8MB so increasing `--transfers` will increase memory use.
normally 8 MiB so increasing `--transfers` will increase memory use.
### Deleting files ###
@@ -369,7 +369,7 @@ Fill in for rclone to use a non root folder as its starting point.
#### --box-upload-cutoff
Cutoff for switching to multipart upload (>= 50MB).
Cutoff for switching to multipart upload (>= 50 MiB).
- Config: upload_cutoff
- Env Var: RCLONE_BOX_UPLOAD_CUTOFF


@@ -70,11 +70,11 @@ password:
The size of a chunk. Lower value good for slow connections but can affect seamless reading.
Default: 5M
Choose a number from below, or type in your own value
1 / 1MB
\ "1m"
2 / 5 MB
1 / 1 MiB
\ "1M"
2 / 5 MiB
\ "5M"
3 / 10 MB
3 / 10 MiB
\ "10M"
chunk_size> 2
How much time should object info (file size, file hashes, etc.) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
@@ -91,11 +91,11 @@ info_age> 2
The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
Default: 10G
Choose a number from below, or type in your own value
1 / 500 MB
1 / 500 MiB
\ "500M"
2 / 1 GB
2 / 1 GiB
\ "1G"
3 / 10 GB
3 / 10 GiB
\ "10G"
chunk_total_size> 3
Remote config
@@ -364,11 +364,11 @@ will need to be cleared or unexpected EOF errors will occur.
- Default: 5M
- Examples:
- "1m"
- 1MB
- 1 MiB
- "5M"
- 5 MB
- 5 MiB
- "10M"
- 10 MB
- 10 MiB
#### --cache-info-age
@@ -401,11 +401,11 @@ oldest chunks until it goes under this value.
- Default: 10G
- Examples:
- "500M"
- 500 MB
- 500 MiB
- "1G"
- 1 GB
- 1 GiB
- "10G"
- 10 GB
- 10 GiB
### Advanced Options


@@ -5,6 +5,44 @@ description: "Rclone Changelog"
# Changelog
## v1.55.1 - 2021-04-26
[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)
* Bug Fixes
    * selfupdate
        * Don't detect FUSE if build is static (Ivan Andreev)
        * Add build tag noselfupdate (Ivan Andreev)
    * sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)
    * install.sh: fix macOS arm64 download (Nick Craig-Wood)
    * build: Fix version numbers in android branch builds (Nick Craig-Wood)
    * docs
        * Contributing.md: update setup instructions for go1.16 (Nick Gaya)
        * WinFsp 2021 is out of beta (albertony)
        * Minor cleanup of space around code section (albertony)
        * Fixed some typos (albertony)
* VFS
    * Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)
* Compress
    * Fix compressed name regexp (buengese)
* Drive
    * Fix backend copyid of google doc to directory (Nick Craig-Wood)
    * Don't open browser when service account... (Ansh Mittal)
* Dropbox
    * Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)
    * Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)
    * Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)
* FTP
    * Fix implicit TLS (Ivan Andreev)
* Onedrive
    * Work around for random "Unable to initialize RPS" errors (OleFrost)
* SFTP
    * Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)
    * Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)
* Zoho
    * Fix error when region isn't set (buengese)
    * Do not ask for mountpoint twice when using headless setup (buengese)
## v1.55.0 - 2021-03-31
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)


@@ -43,7 +43,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
Enter a string value. Press Enter for the default ("").
remote> remote:path
Files larger than chunk size will be split in chunks.
Enter a size with suffix k,M,G,T. Press Enter for the default ("2G").
Enter a size with suffix K,M,G,T. Press Enter for the default ("2G").
chunk_size> 100M
Choose how chunker handles hash sums. All modes but "none" require metadata.
Enter a string value. Press Enter for the default ("md5").


@@ -23,7 +23,7 @@ If you supply the `--rmdirs` flag, it will remove all empty directories along wi
You can also use the separate command `rmdir` or `rmdirs` to
delete empty directories only.
For example, to delete all files bigger than 100MBytes, you may first want to check what
For example, to delete all files bigger than 100 MiB, you may first want to check what
would be deleted (use either):
rclone --min-size 100M lsl remote:path
@@ -33,8 +33,8 @@ Then proceed with the actual delete:
rclone --min-size 100M delete remote:path
That reads "delete everything with a minimum size of 100 MB", hence
delete all files bigger than 100MBytes.
That reads "delete everything with a minimum size of 100 MiB", hence
delete all files bigger than 100 MiB.
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.


@@ -56,9 +56,9 @@ When that happens, it is the user's responsibility to stop the mount manually.
The size of the mounted file system will be set according to information retrieved
from the remote, the same as returned by the [rclone about](https://rclone.org/commands/rclone_about/)
command. Remotes with unlimited storage may report the used size only,
then an additional 1PB of free space is assumed. If the remote does not
then an additional 1 PiB of free space is assumed. If the remote does not
[support](https://rclone.org/overview/#optional-features) the about feature
at all, then 1PB is set as both the total and the free size.
at all, then 1 PiB is set as both the total and the free size.
**Note**: As of `rclone` 1.52.2, `rclone mount` now requires Go version 1.13
or newer on some platforms depending on the underlying FUSE library in use.

Some files were not shown because too many files have changed in this diff.