mirror of https://github.com/rclone/rclone.git synced 2026-02-04 10:43:14 +00:00

Compare commits

..

94 Commits

Author SHA1 Message Date
Nick Craig-Wood
114631de51 webdav: chunked uploading - WIP DO NOT MERGE 2021-04-26 17:58:04 +01:00
Nick Craig-Wood
71db19d8d8 fstests: allow chunked upload tests to be skipped 2021-04-26 17:58:04 +01:00
Thibault Coupin
3258dad743 webdav: lower case in error messages 2021-04-26 17:58:04 +01:00
Thibault Coupin
fc6bd0dd77 webdav: add chunked uploading option for nextcloud 2021-04-26 17:58:04 +01:00
Nick Craig-Wood
f4068d406b Add Jeffrey Tolar to contributors 2021-04-26 16:57:21 +01:00
Jeffrey Tolar
7511b6f4f1 b2: don't include the bucket name in public link file prefixes
Including the bucket name as part of the `fileNamePrefix` passed to
`b2_get_download_authorization` results in a link valid for objects that
have the bucket name as part of the object path; e.g.,

    rclone link :b2:some-bucket/some-file

would result in a public link valid for the object
`some-bucket/some-file` in the `some-bucket` bucket (in rclone-remote
parlance, `:b2:some-bucket/some-bucket/some-file`). This will almost
certainly result in a broken link.

The B2 docs don't explicitly specify this behavior, but the example
given for `fileNamePrefix` provides some clarification.

See https://www.backblaze.com/b2/docs/b2_get_download_authorization.html.
2021-04-26 16:56:41 +01:00
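The actual fix is a one-line change in the b2.go diff further down this page: the prefix is now built from the bucket-relative root (f.rootDirectory) rather than the full root (f.root). A toy illustration, with hypothetical values:

    package main

    import (
        "fmt"
        "path"
    )

    func main() {
        root := "some-bucket/dir" // f.root includes the bucket name
        rootDirectory := "dir"    // f.rootDirectory is relative to the bucket
        remote := "some-file"

        // Before the fix: the prefix repeats the bucket name, so the
        // authorization only matched "some-bucket/dir/some-file" *inside*
        // the bucket - almost certainly a broken link.
        fmt.Println(path.Join(root, remote)) // some-bucket/dir/some-file

        // After the fix: the prefix is relative to the bucket, as
        // b2_get_download_authorization expects.
        fmt.Println(path.Join(rootDirectory, remote)) // dir/some-file
    }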
Nick Craig-Wood
e618ea83dd s3: remove WebIdentityRoleProvider to fix crash on auth #5255
This change removes the code added in

15d19131bd s3: use aws web identity role provider

That code no longer works because it doesn't initialise the
tokenFetcher, leading to a nil pointer crash.

The proper way to initialise this is with NewWebIdentityCredentials,
but it isn't clear where to get the other parameters: roleARN,
roleSessionName and path.

In the linked issue a user reports rclone working with EKS anyway, so
perhaps this code is no longer needed.

If it is needed, hopefully someone who knows AWS better will come
along and fix it!

See: https://forum.rclone.org/t/add-support-for-aws-sso/23569
2021-04-26 16:55:50 +01:00
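For reference, a minimal sketch of the initialisation the message refers to. It assumes the standard EKS (IRSA) environment variables supply the missing parameters - that is an assumption on the editor's part, not rclone code:

    package main

    import (
        "fmt"
        "os"

        "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
        "github.com/aws/aws-sdk-go/aws/session"
    )

    func main() {
        sess := session.Must(session.NewSession())
        // NewWebIdentityCredentials wires up the token fetcher that the
        // removed code left nil. Assumption: on EKS the role ARN and the
        // token file path are injected into the pod as env variables.
        creds := stscreds.NewWebIdentityCredentials(
            sess,
            os.Getenv("AWS_ROLE_ARN"),
            "rclone", // role session name - arbitrary
            os.Getenv("AWS_WEB_IDENTITY_TOKEN_FILE"),
        )
        fmt.Println(creds != nil)
    }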
Nick Craig-Wood
34dc257c55 Add Kenny Parsons to contributors 2021-04-26 16:55:50 +01:00
Kenny Parsons
4cacf5d30c docs: clarify and add examples for sftp docs
- added clarification to default remote path if no path is specified 
- added examples for mounting a remote path (other than the default home directory) to a local folder.
2021-04-26 16:13:42 +01:00
Nick Craig-Wood
0537791d14 sftp: Fix performance regression by re-enabling concurrent writes #5197
Between rclone v1.54 and v1.55 there was an approximately 3x
performance regression when transferring to distant SFTP servers (in
particular rsync.net).

This turned out to be due to the github.com/pkg/sftp library that
rclone uses. Concurrent writes used to be enabled in this library by
default (in v1.12.0, as used in rclone v1.54) but they are no longer
enabled (in v1.13.0, as used in rclone v1.55) for safety reasons, so
it is now necessary to enable them explicitly.

The safety concerns are due to the uncertainty as to whether writes
come in order and whether a half completed file might have holes in
it. This isn't a problem for rclone since a) it doesn't restart
uploads and b) it has a post-transfer checksum test.

This change introduces a new flag `--sftp-disable-concurrent-writes`
to control the feature. It defaults to false, meaning that concurrent
writes are enabled as in v1.54.

However this isn't quite enough to fix the problem as the sftp library
needs to be able to sniff the size of the stream from the reader
passed in, so this also adds a `Size` interface to the reader to
enable this. This involved a patch to the library.

The library was reverted to v1.12.0 for v1.55.1 - this patch installs
v1.13.0+master to fix the Size interface problem.

See: https://github.com/pkg/sftp/issues/426
2021-04-26 09:24:28 +01:00
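The `Size` interface mentioned above is a small wrapper around the upload stream; the sftp.go diff below adds it essentially like this (excerpt, with package and import added so it stands alone):

    package sftp

    import "io"

    // sizeReader wraps the upload stream so that sftpFile.ReadFrom can
    // sniff the expected size via Size() and choose its write concurrency.
    type sizeReader struct {
        io.Reader
        size int64
    }

    // Size returns the expected size of the stream.
    func (sr *sizeReader) Size() int64 {
        return sr.size
    }

    // Object.Update then passes the wrapper instead of the bare reader:
    //   _, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})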
Nick Craig-Wood
4b1d28550a Changelog updates from Version v1.55.1 2021-04-26 09:22:49 +01:00
Nick Craig-Wood
d27c35ee4a box: use upload preflight check to avoid listings in file uploads
Before this change, rclone checked to see if an object existed before
doing an upload by listing the destination directory. This was very
inefficient, especially with large directories.

After this change rclone uses the pre upload check API call which
checks to see if it is OK to upload an object, and also returns the ID
of an existing object which saves rclone having to do a directory
listing.
2021-04-25 11:45:44 +01:00
Nick Craig-Wood
ffec0d4f03 Add OleFrost to contributors 2021-04-25 11:45:39 +01:00
OleFrost
89daa9efd1 onedrive: Work around for random "Unable to initialize RPS" errors
OneDrive randomly returns the error message: "InvalidAuthenticationToken: Unable to initialize RPS". These unexpected errors typically caused the entire rclone command to fail.

This workaround recognizes these errors and marks them for a low-level retry, which usually succeeds. This makes rclone commands complete without being noticeably affected.

Fixes: #5270
2021-04-24 23:05:34 +01:00
Nick Craig-Wood
ee502a757f ncdu: update termbox-go library to fix crash - fixes #5259 2021-04-24 15:17:14 +01:00
Cnly
386acaa110 oauthutil: fix #5265 old authorize result not recognised 2021-04-23 01:20:52 +08:00
buengese
efdee3a5fe compress: fix compressed name regexp 2021-04-22 18:38:38 +02:00
Nick Craig-Wood
5d85e6bc9c dropbox: fix Unable to decrypt returned paths from changeNotify - fixes #5165
This was caused by incorrect use of strings.TrimLeft where
strings.TrimPrefix was required.
2021-04-21 10:52:05 +01:00
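The distinction, demonstrated with a hypothetical path (strings.TrimLeft treats its second argument as a set of characters, not a prefix):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // TrimLeft strips *any* leading run of the characters '/', 'r'
        // and 'c', so it also eats the start of "rclone.txt".
        fmt.Println(strings.TrimLeft("/rc/rclone.txt", "/rc")) // lone.txt

        // TrimPrefix removes exactly one leading "/rc", which is what
        // the changeNotify path handling needed.
        fmt.Println(strings.TrimPrefix("/rc/rclone.txt", "/rc")) // /rclone.txt
    }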
Nick Craig-Wood
4a9469a3dc test changenotify: add command to help debugging changenotify 2021-04-21 10:52:05 +01:00
Nick Craig-Wood
f8884a7200 build: fix version numbers in android branch builds 2021-04-20 17:40:06 +01:00
Nick Craig-Wood
2a40f00077 vfs: fix a code path which allows dirty data to be removed causing data loss
Before this change the VFS layer could remove a locally cached file
even if it had data which needed to be written back, thus causing data loss.

See: https://forum.rclone.org/t/rclone-1-55-doesnt-save-file-changes-if-the-file-has-been-reopened-during-upload-google-drive-mount/23646
2021-04-20 16:36:38 +01:00
Nick Craig-Wood
9799fdbae2 Add noabody to contributors 2021-04-20 16:36:38 +01:00
Nick Craig-Wood
492504a601 Add new email address for Caleb Case 2021-04-20 16:36:25 +01:00
Nick Craig-Wood
0c03a7fead Add Ansh Mittal to contributors 2021-04-20 16:31:40 +01:00
Nick Craig-Wood
7afb4487ef build: update all dependencies 2021-04-20 00:00:13 +01:00
noabody
b9d0ed4f5c make_manual.py: fix missing comma for doc build after uptobox merge
This fixes a problem introduced in

cd69f9e6e8 uptobox: add docs
2021-04-19 16:18:18 +01:00
Caleb Case
baa4c039a0 backend/tardigrade: Upgrade to uplink v1.4.6
Release notes: https://github.com/storj/uplink/releases/tag/v1.4.6

Follow-up PRs will take advantage of the new bucket error and negative
offset support to remove roundtrips.
2021-04-19 16:14:56 +01:00
Alex Chen
31a8211afa oauthutil: raise fatal error if token expired without refresh token (#5252) 2021-04-18 12:04:13 +08:00
albertony
3544e09e95 config: treat any config file paths with filename notfound as memory-only config (#5235) 2021-04-18 00:09:03 +02:00
Ansh Mittal
b456be4303 drive: don't open browser when service account...
credentials specified 

Fixes #5104
2021-04-17 19:49:53 +01:00
Nick Craig-Wood
3e96752079 dropbox: add missing team_data.member scope for use with --impersonate
See: https://forum.rclone.org/t/dropbox-business-not-accepting-oauth2/23390/32
2021-04-17 17:40:08 +01:00
buengese
4a5cbf2a19 cmd/ncdu: fix out of range panic in delete 2021-04-16 23:20:03 +02:00
Nick Craig-Wood
dcd4edc9f5 dropbox: fix About after scopes changes - rclone config reconnect needed
This adds the missing scope for the About call. To use it, you will
need to refresh the token with `rclone config reconnect`.

See: https://forum.rclone.org/t/dropbox-too-many-requests-or-write-operations-trying-again-in-15-seconds/23316/33
2021-04-16 15:07:03 +01:00
Nick Craig-Wood
7f5e347d94 Add Nazar Mishturak to contributors 2021-04-16 15:07:03 +01:00
Cnly
040677ab5b onedrive: also report root error if unable to cancel multipart upload 2021-04-16 12:41:38 +08:00
albertony
6366d3dfc5 docs: extend description of drive mount access on windows 2021-04-13 22:33:19 +02:00
albertony
60d376c323 docs: add guide to configuring autorun in install documentation 2021-04-13 22:33:19 +02:00
albertony
7b1ca716bf config: add touch command to ensure config exists at configured location (#5226)
Adds a new command `rclone config touch` which calls config.SaveConfig().
It ensures the config file exists and tests that it is writable, which
is useful when testing configuration location handling.
2021-04-13 19:25:09 +03:00
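Example usage, per the description above:

    rclone config touch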
albertony
d8711cf7f9 config: create config file in windows appdata directory by default (#5226)
Use %AppData% as the primary default location for the configuration
file on Windows, which is more in line with Windows conventions. The
existing default of using the home directory follows Unix conventions -
though it did make rclone more consistent across different operating
systems.

Fixes #4667
2021-04-13 19:25:09 +03:00
buengese
cd69f9e6e8 uptobox: add docs 2021-04-13 17:46:07 +02:00
buengese
a737ff21af uptobox: integration tests 2021-04-13 17:46:07 +02:00
buengese
ad9aa693a3 new backend: uptobox 2021-04-13 17:46:07 +02:00
Nazar Mishturak
964c3e0732 rcat: add --size flag for more efficient uploads of known size - fixes #4403
This allows preallocating space at the remote end with RcatSize.
2021-04-13 12:25:47 +01:00
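A possible invocation (assuming GNU coreutils stat; the value given to --size must match the number of bytes actually piped in, and source.bin is a hypothetical file):

    rclone rcat --size "$(stat -c%s source.bin)" remote:path/dest.bin < source.bin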
Nick Craig-Wood
a46a3c0811 test makefiles: add log levels and speed summary 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
60dcafe04d test makefiles: add --seed flag and make data generated repeatable #5214 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
813bf029d4 Add Dominik Mydlil to contributors 2021-04-12 18:14:01 +01:00
albertony
f2d3264054 config: prevent use of windows reserved names in config file name 2021-04-12 18:17:19 +02:00
albertony
23a0d4a1e6 config: fix issues with memory-only config file paths
Fixes #5222
2021-04-12 18:17:19 +02:00
albertony
b96ebfc40b docs: less confusing example with config path option 2021-04-12 18:17:19 +02:00
Dominik Mydlil
3fe2aaf96c crypt: support timestamped filenames from --b2-versions
With the file version format standardized in lib/version, `crypt` can
now treat the version strings separately from the encrypted/decrypted
file names. This allows --b2-versions to work with `crypt`.

Fixes #1627

Co-authored-by: Luc Ritchie <luc.ritchie@gmail.com>
2021-04-12 15:59:18 +01:00
Dominik Mydlil
c163e6b250 b2: factor version handling into lib/version
Standardizes the filename version tagging so that it can be used by any
backend.
2021-04-12 15:59:18 +01:00
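Behaviour of the new lib/version helpers, as inferred from the b2 diffs and tests further down this page (a sketch, not authoritative documentation):

    package main

    import (
        "fmt"
        "time"

        "github.com/rclone/rclone/lib/version"
    )

    func main() {
        t := time.Date(2001, 2, 3, 4, 5, 6, 123000000, time.UTC)

        name := version.Add("potato.txt", t)
        fmt.Println(name) // potato-v2001-02-03-040506-123.txt

        when, base := version.Remove(name)
        fmt.Println(when.Equal(t), base) // true potato.txt

        fmt.Println(version.Match(name)) // true
    }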
Nick Craig-Wood
c1492cfa28 test: add sftp to rsync.net to integration tests 2021-04-12 15:52:31 +01:00
Nick Craig-Wood
38a8071a58 Add Ashok Gelal to contributors 2021-04-12 15:52:31 +01:00
Ashok Gelal
8c68a76a4a install.sh: silence the progress output with curl requests
This commit silences the progress output from the curl requests made by the install.sh script.

Having progress output seems to break some automated scripts, and there is no
way to pass extra flags to these curl requests to disable it.
2021-04-12 14:18:29 +01:00
Dan Dascalescu
e7b736f8ca docs: fix minor typo in symlinks / junction points 2021-04-10 15:34:34 +02:00
Nick Craig-Wood
cb30a8c80e webdav: fix sharepoint auth over http - fixes #4418
Before this change rclone would auth over https even when the server
was configured with http.

Authing over http obviously isn't ideal; however, this type of server
is on-premise and doesn't work over https.
2021-04-10 11:59:56 +01:00
Ivan Andreev
629a3eeca2 backend/ftp: fix implicit TLS after PR #4266 (#5219)
PR #4266 modified ftpConnection to make the ftp library use a custom
dial function which is QoS aware and takes care of TLS. However the
ServerConn.Login function from the ftp library also needs the TLS
config passed explicitly, as a trigger for sending the PBSZ and PROT
commands to the FTP server. This was not taken care of, resulting in a
failure to connect via FTP with implicit TLS.
This PR fixes that.

Fixes #5210
2021-04-09 01:43:50 +03:00
Nick Craig-Wood
f52ae75a51 rclone authorize: Send and receive extra config options to fix oauth
Before this change any backends which required extra config in the
oauth phase (like the `region` for zoho) didn't work with `rclone
authorize`.

This change serializes the extra config and passes it to `rclone
authorize`, which returns the new config items to be set.

`rclone authorize` will still accept its previous configuration
parameters for use with old rclones.

Fixes #5178
2021-04-08 12:34:15 +01:00
Nick Craig-Wood
9d5c5bf7ab fs: add Options.NonDefault to read options which aren't at their default #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
53573b4a09 configmap: Add Encode and Decode methods to Simple for command line encoding #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
3622e064f5 configmap: Add priorities to configmap Setters #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
6d28ea7ab5 fs: factor config override detection into its own function #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
b9fd02039b authorize: refactor to use new config interfaces #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
1a41c930f3 configmap: add ClearSetters to get rid of all setters #5178 2021-04-08 12:34:15 +01:00
albertony
ddb7eb6e0a docs: fixed some typos 2021-04-08 10:19:03 +02:00
buengese
c114695a66 zoho: do not ask for mountpoint twice when using headless setup 2021-04-08 00:23:27 +02:00
Nick Craig-Wood
fcba51557f dropbox: set visibility in link sharing when --expire is set
Note that due to a bug in the dropbox SDK you'll need to set --expire
to access this.

See: https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
See: https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211
2021-04-07 13:58:37 +01:00
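For example (per the note above, --expire must currently be given for the visibility settings to be sent at all; path hypothetical):

    rclone link --expire 1d dropbox:path/to/file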
Nick Craig-Wood
9393225a1d link: use "off" value for unset expiry 2021-04-07 13:58:37 +01:00
albertony
3d3ff61f74 docs: minor cleanup of space around code section 2021-04-07 08:47:29 +02:00
albertony
d98f192425 docs: WinFsp 2021 is out of beta 2021-04-07 08:13:40 +02:00
Nick Craig-Wood
54771e4402 sync: fix incorrect error reported by graceful cutoff - fixes #5203
Before this change, a sync which finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.

This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
2021-04-06 13:08:42 +01:00
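A minimal sketch of the idea (names hypothetical, not rclone's actual code):

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    // filterGracefulStop drops the "context canceled" error that a
    // graceful transfer cutoff necessarily produces, so the real
    // outcome of the sync is reported instead.
    func filterGracefulStop(err error, gracefulStopped bool) error {
        if gracefulStopped && errors.Is(err, context.Canceled) {
            return nil // the cutoff was requested, so this isn't an error
        }
        return err
    }

    func main() {
        fmt.Println(filterGracefulStop(context.Canceled, true))  // <nil>
        fmt.Println(filterGracefulStop(context.Canceled, false)) // context canceled
    }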
Nick Craig-Wood
dc286529bc drive: fix backend copyid of google doc to directory - fixes #5196
Before this change the google doc was being copied to the directory
without an extension.
2021-04-06 11:46:52 +01:00
Nick Craig-Wood
7dc7c021db sftp: fix Update ReadFrom failed: failed to send packet: EOF errors
In

a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)

Idle SFTP connections were closed after 1 minute. However, due to the
way SFTP connections are multiplexed over a single SSH connection,
this meant that if uploads or downloads went on for more than one
minute they failed with "EOF" errors, as their underlying connection
was closed.

This fixes the problem by not clearing idle connections if there are
any transfers in progress.

Fixes #5197
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
fe1aa13069 sftp: revert sftp library to v1.12.0 from v1.13.0 to fix performance regression #5197
This reverts the library update done in this commit.

713f8f357d sftp: fix "file not found" errors for read once servers

Reverting this commit triples transfer performance to a far-away sftp server.

See: https://github.com/pkg/sftp/issues/426
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
5fa8e7d957 Add Nick Gaya to contributors 2021-04-06 10:01:49 +01:00
Nick Gaya
9db7c51eaa sync: don't warn about --no-traverse when --files-from is set 2021-04-05 20:36:39 +01:00
Ivan Andreev
3859fe2f52 cmd/version: print os/version, kernel and bitness (#5204)
Related to #5121

Note: OpenBSD support is still a stub. This will be fixed after the upstream PR is resolved:
https://github.com/shirou/gopsutil/pull/993
2021-04-05 21:53:09 +03:00
buengese
0caf417779 zoho: fix error when region isn't set 2021-04-05 15:11:30 +02:00
Ivan Andreev
9eab258ffb build: add build tag noselfupdate
Allow downstream packagers to build rclone without the selfupdate command:
$ go build -tags noselfupdate

Fixes #5187
2021-04-04 11:22:09 +03:00
Nick Gaya
7df57cd625 contributing.md: update setup instructions for go1.16 2021-04-04 09:10:43 +01:00
Nick Gaya
1fd9b483c8 onedrive: add list_chunk option
Add --onedrive-list-chunk option similar to existing options for azureblob, drive, and s3.

Suggested as a workaround for a OneDrive pagination bug

See: https://forum.rclone.org/t/unexpected-duplicates-on-onedrive-with-0s-in-filename/23164/8
2021-04-04 09:08:16 +01:00
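For example, to shrink the listing page size as a workaround (value illustrative):

    rclone lsf --onedrive-list-chunk 500 onedrive: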
Ivan Andreev
93353c431b selfupdate: dont detect FUSE if build is static
Before this patch selfupdate treated ANY build with the cmount tag as a build
having libFUSE capabilities. However, only dynamic builds really have them.
The official Linux builds are static and have the cmount tag as of the time
of this writing, which made it impossible to update the official Linux
binaries. This patch fixes that. The build can be fixed independently.
2021-04-03 21:54:15 +03:00
Nick Craig-Wood
886dfd23e2 fichier: check if more than one upload link is returned #5152 2021-04-03 15:00:50 +01:00
Nick Craig-Wood
116a8021bb drive: switch to the Drives API for looking up shared drives - fixes #3139
Before this change rclone used the deprecated teamdrives API. This
change uses the new drives API (which appears to be the teamdrives API
renamed).
2021-04-03 14:21:20 +01:00
Nick Craig-Wood
9e2fbe0f1a install.sh: fix macOS arm64 download - fixes #5183 2021-03-31 21:48:31 +01:00
Nick Craig-Wood
6d65d116df Start v1.56.0-DEV development 2021-03-31 19:51:43 +01:00
Ivan Andreev
edaeb51ea9 backlog: ticket templates should recommend to update rclone
Aligns the Bug and Feature GitHub templates with the rclone forum
and instructs submitters to proactively update rclone.
2021-03-31 19:13:50 +01:00
Nick Craig-Wood
6e2e2d9eb2 Version v1.55.0 2021-03-31 19:12:08 +01:00
Nick Craig-Wood
20e15e52a9 vfs: fix Create causing windows explorer to truncate files on CTRL-C CTRL-V
Before this fix, doing CTRL-C and CTRL-V on a file in Windows explorer
caused the **source** and the destination to be truncated to 0.

This is because Windows opens the source file with Create with flags
`O_RDWR|O_CREATE|O_EXCL` but doesn't write to it - it only reads from
it. Rclone was taking the call to Create as a signal to always make a
new file, but this is incorrect.

This fix reads an existing file from the directory if it exists when
Create is called rather than always creating a new one. This fixes the
problem.

Fixes #5181
2021-03-31 14:48:02 +01:00
Nick Craig-Wood
d0f8b4f479 fs/cache: fix recreation of backends after they have expired
Before this change, on the first attempt to create a backend we used a
non-canonicalized string. When the backend expired the second attempt
to create it would use the canonicalized string (because it was in the
remap cache) which would fail because it was now `name{XXXX}:`

This change makes sure that whenever we create a backend we always use
the non-canonicalized string.

See: https://forum.rclone.org/t/connection-string-inconsistencies-on-beta/23171
2021-03-30 18:46:30 +01:00
Nick Craig-Wood
58d82a5c73 rc: allow fs= params to be a JSON blob 2021-03-30 17:07:27 +01:00
Nick Craig-Wood
c0c74003f2 fs/cache: add --fs-cache-expire-duration to control the fs cache
This commit makes the previously statically configured fs cache configurable.

It introduces two parameters `--fs-cache-expire-duration` and
`--fs-cache-expire-interval` to control the caching of the items.

It also adds new interfaces to lib/cache to set these.
2021-03-30 12:46:47 +01:00
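An illustrative invocation using the new flags (values arbitrary):

    rclone rcd --fs-cache-expire-duration 30m --fs-cache-expire-interval 1m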
Nick Craig-Wood
60bc7a079a rc: factor rc.Error out of rcserver for re-use in librclone #4891 2021-03-30 12:46:05 +01:00
Nick Craig-Wood
20c5ca08fb test_all: fix crash when using -clean 2021-03-29 23:12:53 +01:00
143 changed files with 11571 additions and 1878 deletions

View File

@@ -5,19 +5,31 @@ about: Report a problem with rclone
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
**STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put in to solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!
If you think you might have found a bug, try to replicate it with the latest beta (or stable).
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
If you can still replicate it or just got a question then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
for a quick response instead of filing an issue on this repo.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
If nothing else helps, then please fill in the info below which helps us help you.
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
**DO NOT REDACT** any information except passwords/keys/personal info.
You should use 3 backticks to begin and end your paste to make it readable.
Make sure to include a log obtained with '-vv'.
You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
Thank you
@@ -25,6 +37,11 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is the problem you are having with rclone?
@@ -37,7 +54,7 @@ The Rclone Developers
#### Which cloud storage system are you using? (e.g. Google Drive)

View File

@@ -7,12 +7,16 @@ about: Suggest a new feature or enhancement for rclone
Welcome :-)
So you've got an idea to improve rclone? We love that!
You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do:
Probably the latest beta (or stable) release has your feature, so try to update your rclone.
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
If it still isn't there, here is a checklist of things to do:
1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)
@@ -23,6 +27,10 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is your current rclone version (output from `rclone version`)?

View File

@@ -221,6 +221,8 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
# Upgrade together with NDK version
- name: Set up Go 1.14

View File

@@ -33,10 +33,11 @@ page](https://github.com/rclone/rclone).
Now in your terminal
go get -u github.com/rclone/rclone
cd $GOPATH/src/github.com/rclone/rclone
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
go build
Make a branch to add your new feature

MANUAL.html (generated, 1289 changed lines) - file diff suppressed because it is too large

MANUAL.md (generated, 1783 changed lines) - file diff suppressed because it is too large

MANUAL.txt (generated, 1867 changed lines) - file diff suppressed because it is too large

View File

@@ -1 +1 @@
v1.55.0
v1.56.0

View File

@@ -41,6 +41,7 @@ import (
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zoho"

View File

@@ -2,12 +2,11 @@ package api
import (
"fmt"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/version"
)
// Error describes a B2 error response
@@ -63,16 +62,17 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
return nil
}
const versionFormat = "-v2006-01-02-150405.000"
// HasVersion returns true if it looks like the passed filename has a timestamp on it.
//
// Note that the passed filename's timestamp may still be invalid even if this
// function returns true.
func HasVersion(remote string) bool {
return version.Match(remote)
}
// AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string {
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
return version.Add(remote, time.Time(t))
}
// RemoveVersion removes the timestamp from a filename as a version string.
@@ -80,24 +80,9 @@ func (t Timestamp) AddVersion(remote string) string {
// It returns the new file name and a timestamp, or the old filename
// and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) {
newRemote = remote
ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
time, newRemote := version.Remove(remote)
t = Timestamp(time)
return
}
// IsZero returns true if the timestamp is uninitialized

View File

@@ -13,7 +13,6 @@ import (
var (
emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
)
@@ -36,40 +35,6 @@ func TestTimestampUnmarshalJSON(t *testing.T) {
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
}
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero())

View File

@@ -1353,7 +1353,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
}
var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID,
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.rootDirectory, remote)),
ValidDurationInSeconds: validDurationInSeconds,
}
var response api.GetDownloadAuthorizationResponse

View File

@@ -36,13 +36,13 @@ func (t *Time) UnmarshalJSON(data []byte) error {
// Error is returned from box when things go wrong
type Error struct {
Type string `json:"type"`
Status int `json:"status"`
Code string `json:"code"`
ContextInfo json.RawMessage
HelpURL string `json:"help_url"`
Message string `json:"message"`
RequestID string `json:"request_id"`
Type string `json:"type"`
Status int `json:"status"`
Code string `json:"code"`
ContextInfo json.RawMessage `json:"context_info"`
HelpURL string `json:"help_url"`
Message string `json:"message"`
RequestID string `json:"request_id"`
}
// Error returns a string for the error and satisfies the error interface
@@ -132,6 +132,38 @@ type UploadFile struct {
ContentModifiedAt Time `json:"content_modified_at"`
}
// PreUploadCheck is the request for upload preflight check
type PreUploadCheck struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
Size *int64 `json:"size,omitempty"`
}
// PreUploadCheckResponse is the response from upload preflight check
// if successful
type PreUploadCheckResponse struct {
UploadToken string `json:"upload_token"`
UploadURL string `json:"upload_url"`
}
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct {
ContentModifiedAt Time `json:"content_modified_at"`

View File

@@ -686,22 +686,80 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
return o, leaf, directoryID, nil
}
// preUploadCheck checks to see if a file can be uploaded
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
}
if size >= 0 {
check.Size = &size
}
opts := rest.Opts{
Method: "OPTIONS",
Path: "/files/content/",
}
var result api.PreUploadCheckResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &check, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "item_name_in_use" {
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", errors.Wrap(err, "pre-upload check: JSON decode failed")
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", errors.Wrap(err, "pre-upload check: can't overwrite non file with file")
}
return conflict.Conflicts.ID, nil
}
return "", errors.Wrap(err, "pre-upload check")
}
return "", nil
}
// Put the object
//
// Copy the reader in to the new object which is returned
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
// Not found so create it
return f.PutUnchecked(ctx, in, src)
default:
// If directory doesn't exist, file doesn't exist so can upload
remote := src.Remote()
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
if err == fs.ErrorDirNotFound {
return f.PutUnchecked(ctx, in, src, options...)
}
return nil, err
}
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
return f.PutUnchecked(ctx, in, src, options...)
}
// If object exists then create a skeleton one with just id
o := &Object{
fs: f,
remote: remote,
id: ID,
}
return o, o.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size

View File

@@ -53,7 +53,7 @@ const (
Gzip = 2
)
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9+_]{11})$")
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9-_]{11})$")
// Register with Fs
func init() {

View File

@@ -12,12 +12,14 @@ import (
"strconv"
"strings"
"sync"
"time"
"unicode/utf8"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/version"
"github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt"
@@ -442,11 +444,32 @@ func (c *Cipher) encryptFileName(in string) string {
if !c.dirNameEncrypt && i != (len(segments)-1) {
continue
}
// Strip version string so that only the non-versioned part
// of the file name gets encrypted/obfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard {
segments[i] = c.encryptSegment(segments[i])
} else {
segments[i] = c.obfuscateSegment(segments[i])
}
// Add back a version to the encrypted/obfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
}
return strings.Join(segments, "/")
}
@@ -477,6 +500,21 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if !c.dirNameEncrypt && i != (len(segments)-1) {
continue
}
// Strip version string so that only the non-versioned part
// of the file name gets decrypted/deobfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard {
segments[i], err = c.decryptSegment(segments[i])
} else {
@@ -486,6 +524,12 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if err != nil {
return "", err
}
// Add back a version to the decrypted/deobfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
}
return strings.Join(segments, "/"), nil
}
@@ -494,10 +538,18 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix)
if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
return in[:remainingLength], nil
if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
return "", ErrorNotAnEncryptedFile
}
return "", ErrorNotAnEncryptedFile
decrypted := in[:remainingLength]
if version.Match(decrypted) {
_, unversioned := version.Remove(decrypted)
if unversioned == "" {
return "", ErrorNotAnEncryptedFile
}
}
// Leave the version string on, if it was there
return decrypted, nil
}
return c.decryptFileName(in)
}

View File

@@ -160,22 +160,29 @@ func TestEncryptFileName(t *testing.T) {
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Standard mode with directory name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "49.6/99.23/150.890/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "49.6/99.23/150.890/162.uryyB-v2001-02-03-040506-123.GKG", c.EncryptFileName("1/12/123/hello-v2001-02-03-040506-123.txt"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
// Obfuscation mode with directory name encryption off
c, _ = newCipher(NameEncryptionObfuscated, "", "", false)
assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "1/12/123/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
}
@@ -194,14 +201,19 @@ func TestDecryptFileName(t *testing.T) {
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", "1-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
actual, actualErr := c.DecryptFileName(test.in)

View File

@@ -199,7 +199,7 @@ func init() {
m.Set("root_folder_id", "appDataFolder")
}
if opt.ServiceAccountFile == "" {
if opt.ServiceAccountFile == "" && opt.ServiceAccountCredentials == "" {
err = oauthutil.Config(ctx, "drive", name, m, driveConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
@@ -2959,12 +2959,12 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
}
// List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) {
drives = []*drive.TeamDrive{}
listTeamDrives := f.svc.Teamdrives.List().PageSize(100)
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err error) {
drives = []*drive.Drive{}
listTeamDrives := f.svc.Drives.List().PageSize(100)
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
var teamDrives *drive.DriveList
err = f.pacer.Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(ctx, err)
@@ -2972,7 +2972,7 @@ func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err
if err != nil {
return drives, errors.Wrap(err, "listing Team Drives failed")
}
drives = append(drives, teamDrives.TeamDrives...)
drives = append(drives, teamDrives.Drives...)
if teamDrives.NextPageToken == "" {
break
}
@@ -3069,7 +3069,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return err
}
if destLeaf == "" {
destLeaf = info.Name
destLeaf = path.Base(o.Remote())
}
if destDir == "" {
destDir = "."

View File

@@ -99,8 +99,10 @@ var (
"files.content.write",
"files.content.read",
"sharing.write",
"account_info.read", // needed for About
// "file_requests.write",
// "members.read", // needed for impersonate - but causes app to need to be approved by Dropbox Team Admin during the flow
// "team_data.member"
},
// Endpoint: oauth2.Endpoint{
// AuthURL: "https://www.dropbox.com/1/oauth2/authorize",
@@ -130,8 +132,8 @@ func getOauthConfig(m configmap.Mapper) *oauth2.Config {
}
// Make a copy of the config
config := *dropboxConfig
// Make a copy of the scopes with "members.read" appended
config.Scopes = append(config.Scopes, "members.read")
// Make a copy of the scopes with the extra scopes required appended
config.Scopes = append(config.Scopes, "members.read", "team_data.member")
return &config
}
@@ -1084,13 +1086,30 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
// FIXME this gives settings_error/not_authorized/.. errors
// and the expires setting isn't in the documentation so remove
// for now.
// Settings: &sharing.SharedLinkSettings{
// Expires: time.Now().Add(time.Duration(expire)).UTC().Round(time.Second),
// },
Settings: &sharing.SharedLinkSettings{
RequestedVisibility: &sharing.RequestedVisibility{
Tagged: dropbox.Tagged{Tag: sharing.RequestedVisibilityPublic},
},
Audience: &sharing.LinkAudience{
Tagged: dropbox.Tagged{Tag: sharing.LinkAudiencePublic},
},
Access: &sharing.RequestedLinkAccessLevel{
Tagged: dropbox.Tagged{Tag: sharing.RequestedLinkAccessLevelViewer},
},
},
}
if expire < fs.DurationOff {
expiryTime := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
createArg.Settings.Expires = expiryTime
}
// FIXME note we can't set Settings for non enterprise dropbox
// because of https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
// however this only goes wrong when we set Expires, so as a
// work-around remove Settings unless expire is set.
if expire == fs.DurationOff {
createArg.Settings = nil
}
var linkRes sharing.IsSharedLinkMetadata
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
@@ -1334,13 +1353,13 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
switch info := entry.(type) {
case *files.FolderMetadata:
entryType = fs.EntryDirectory
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
case *files.FileMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
case *files.DeletedMetadata:
entryType = fs.EntryObject
entryPath = strings.TrimLeft(info.PathDisplay, f.slashRootSlash)
entryPath = strings.TrimPrefix(info.PathDisplay, f.slashRootSlash)
default:
fs.Errorf(entry, "dropbox ChangeNotify: ignoring unknown EntryType %T", entry)
continue

View File

@@ -348,8 +348,10 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err
}
if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("unexpected amount of files")
if len(fileUploadResponse.Links) == 0 {
return nil, errors.New("upload response not found")
} else if len(fileUploadResponse.Links) > 1 {
fs.Debugf(remote, "Multiple upload responses found, using the first")
}
link := fileUploadResponse.Links[0]

View File

@@ -241,23 +241,6 @@ func (dl *debugLog) Write(p []byte) (n int, err error) {
return len(p), nil
}
type dialCtx struct {
f *Fs
ctx context.Context
}
// dial a new connection with fshttp dialer
func (d *dialCtx) dial(network, address string) (net.Conn, error) {
conn, err := fshttp.NewDialer(d.ctx).Dial(network, address)
if err != nil {
return nil, err
}
if d.f.tlsConf != nil {
conn = tls.Client(conn, d.f.tlsConf)
}
return conn, err
}
// shouldRetry returns a boolean as to whether this err deserve to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
@@ -277,9 +260,22 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
fs.Debugf(f, "Connecting to FTP server")
dCtx := dialCtx{f, ctx}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dCtx.dial)}
if f.opt.ExplicitTLS {
// Make ftp library dial with fshttp dialer optionally using TLS
dial := func(network, address string) (conn net.Conn, err error) {
conn, err = fshttp.NewDialer(ctx).Dial(network, address)
if f.tlsConf != nil && err == nil {
conn = tls.Client(conn, f.tlsConf)
}
return
}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dial)}
if f.opt.TLS {
// Our dialer takes care of TLS but ftp library also needs tlsConf
// as a trigger for sending PBSZ and PROT options to server.
ftpConfig = append(ftpConfig, ftp.DialWithTLS(f.tlsConf))
} else if f.opt.ExplicitTLS {
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(f.tlsConf))
// Initial connection needs to be cleartext for explicit TLS
conn, err := fshttp.NewDialer(ctx).Dial("tcp", f.dialAddr)

View File

@@ -361,6 +361,11 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
Advanced: true,
}, {
Name: "list_chunk",
Help: "Size of listing chunk.",
Default: 1000,
Advanced: true,
}, {
Name: "no_versions",
Default: false,
@@ -468,6 +473,7 @@ type Options struct {
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
@@ -560,6 +566,9 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
fs.Debugf(nil, "Should retry: %v", err)
} else if err != nil && strings.Contains(err.Error(), "Unable to initialize RPS") {
retry = true
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
}
case 429: // Too Many Requests.
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
@@ -896,7 +905,7 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := f.newOptsCall(dirID, "GET", "/children?$top=1000")
opts := f.newOptsCall(dirID, "GET", fmt.Sprintf("/children?$top=%d", f.opt.ListChunk))
OUTER:
for {
var result api.ListChildrenResponse
@@ -1423,7 +1432,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
Password: f.opt.LinkPassword,
}
if expire < fs.Duration(time.Hour*24*365*100) {
if expire < fs.DurationOff {
expiry := time.Now().Add(time.Duration(expire))
share.Expiry = &expiry
}
@@ -1851,7 +1860,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(ctx, uploadURL)
if cancelErr != nil {
fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
fs.Logf(o, "Failed to cancel multipart upload: %v (upload failed due to: %v)", cancelErr, err)
}
})()

View File

@@ -26,7 +26,6 @@ import (
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/endpoints"
@@ -1511,11 +1510,6 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
}),
ExpiryWindow: 3 * time.Minute,
},
// Pick up IAM role if we are in EKS
&stscreds.WebIdentityRoleProvider{
ExpiryWindow: 3 * time.Minute,
},
}
cred := credentials.NewChainCredentials(providers)

View File

@@ -16,6 +16,7 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
@@ -223,6 +224,17 @@ have a server which returns
Then you may need to enable this flag.
If concurrent reads are disabled, the use_fstat option is ignored.
`,
Advanced: true,
}, {
Name: "disable_concurrent_writes",
Default: false,
Help: `If set don't use concurrent writes
Normally rclone uses concurrent writes to upload files. This improves
the performance greatly, especially for distant servers.
This option disables concurrent writes should that be necessary.
`,
Advanced: true,
}, {
@@ -243,29 +255,30 @@ Set to 0 to keep connections indefinitely.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
IdleTimeout fs.Duration `config:"idle_timeout"`
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
DisableConcurrentWrites bool `config:"disable_concurrent_writes"`
IdleTimeout fs.Duration `config:"idle_timeout"`
}
// Fs stores the interface to the remote SFTP files
@@ -286,6 +299,7 @@ type Fs struct {
drain *time.Timer // used to drain the pool when we stop using the connections
pacer *fs.Pacer // pacer for operations
savedpswd string
transfers int32 // count in use references
}
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -348,6 +362,23 @@ func (c *conn) closed() error {
return nil
}
// Show that we are doing an upload or download
//
// Call removeTransfer() when done
func (f *Fs) addTransfer() {
atomic.AddInt32(&f.transfers, 1)
}
// Show the upload or download done
func (f *Fs) removeTransfer() {
atomic.AddInt32(&f.transfers, -1)
}
// getTransfers shows whether there are any transfers in progress
func (f *Fs) getTransfers() int32 {
return atomic.LoadInt32(&f.transfers)
}
// Open a new connection to the SFTP server.
func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
// Rate limit rate of new connections
@@ -396,7 +427,11 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
opts = append(opts,
sftp.UseFstat(f.opt.UseFstat),
sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
sftp.UseConcurrentWrites(!f.opt.DisableConcurrentWrites),
)
if f.opt.DisableConcurrentReads { // FIXME
fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
}
return sftp.NewClientPipe(pr, pw, opts...)
}
@@ -474,6 +509,13 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if transfers := f.getTransfers(); transfers != 0 {
fs.Debugf(f, "Not closing %d unused connections as %d transfers in progress", len(f.pool), transfers)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
return nil
}
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
@@ -1380,18 +1422,22 @@ func (o *Object) Storable() bool {
// objectReader represents a file open for reading on the SFTP server
type objectReader struct {
f *Fs
sftpFile *sftp.File
pipeReader *io.PipeReader
done chan struct{}
}
func newObjectReader(sftpFile *sftp.File) *objectReader {
func (f *Fs) newObjectReader(sftpFile *sftp.File) *objectReader {
pipeReader, pipeWriter := io.Pipe()
file := &objectReader{
f: f,
sftpFile: sftpFile,
pipeReader: pipeReader,
done: make(chan struct{}),
}
// Show connection in use
f.addTransfer()
go func() {
// Use sftpFile.WriteTo to pump data so that it gets a
@@ -1421,6 +1467,8 @@ func (file *objectReader) Close() (err error) {
_ = file.pipeReader.Close()
// Wait for the background process to finish
<-file.done
// Show connection no longer in use
file.f.removeTransfer()
return err
}
@@ -1454,12 +1502,27 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return nil, errors.Wrap(err, "Open Seek failed")
}
}
in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit)
in = readers.NewLimitedReadCloser(o.fs.newObjectReader(sftpFile), limit)
return in, nil
}
type sizeReader struct {
io.Reader
size int64
}
// Size returns the expected size of the stream
//
// It is used in sftpFile.ReadFrom as a hint to work out the
// concurrency needed
func (sr *sizeReader) Size() int64 {
return sr.size
}
// Update a remote sftp file using the data <in> and ModTime from <src>
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
o.fs.addTransfer() // Show transfer in progress
defer o.fs.removeTransfer()
// Clear the hash cache since we are about to update the object
o.md5sum = nil
o.sha1sum = nil
@@ -1487,7 +1550,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
fs.Debugf(src, "Removed after failed upload: %v", err)
}
}
_, err = file.ReadFrom(in)
_, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})
if err != nil {
remove()
return errors.Wrap(err, "Update ReadFrom failed")

View File

@@ -0,0 +1,170 @@
package api
import "fmt"
// Error contains the error code and message returned by the API
type Error struct {
Success bool `json:"success,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Message string `json:"message,omitempty"`
Data string `json:"data,omitempty"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("api error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Data != "" {
out += ": " + e.Data
}
return out
}
// FolderEntry represents a Uptobox subfolder when listing folder contents
type FolderEntry struct {
FolderID uint64 `json:"fld_id"`
Description string `json:"fld_descr"`
Password string `json:"fld_password"`
FullPath string `json:"fullPath"`
Path string `json:"fld_name"`
Name string `json:"name"`
Hash string `json:"hash"`
}
// FolderInfo represents the current folder when listing folder contents
type FolderInfo struct {
FolderID uint64 `json:"fld_id"`
Hash string `json:"hash"`
FileCount uint64 `json:"fileCount"`
TotalFileSize int64 `json:"totalFileSize"`
}
// FileInfo represents a file when listing folder contents
type FileInfo struct {
Name string `json:"file_name"`
Description string `json:"file_descr"`
Created string `json:"file_created"`
Size int64 `json:"file_size"`
Downloads uint64 `json:"file_downloads"`
Code string `json:"file_code"`
Password string `json:"file_password"`
Public int `json:"file_public"`
LastDownload string `json:"file_last_download"`
ID uint64 `json:"id"`
}
// ReadMetadataResponse is the response when listing folder contents
type ReadMetadataResponse struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
CurrentFolder FolderInfo `json:"currentFolder"`
Folders []FolderEntry `json:"folders"`
Files []FileInfo `json:"files"`
PageCount int `json:"pageCount"`
TotalFileCount int `json:"totalFileCount"`
TotalFileSize int64 `json:"totalFileSize"`
} `json:"data"`
}
// UploadInfo is the response when initiating an upload
type UploadInfo struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
UploadLink string `json:"uploadLink"`
MaxUpload string `json:"maxUpload"`
} `json:"data"`
}
// UploadResponse is the response to a successful upload
type UploadResponse struct {
Files []struct {
Name string `json:"name"`
Size int64 `json:"size"`
URL string `json:"url"`
DeleteURL string `json:"deleteUrl"`
} `json:"files"`
}
// UpdateResponse is a generic response to various actions on files (rename/copy/move)
type UpdateResponse struct {
Message string `json:"message"`
StatusCode int `json:"statusCode"`
}
// Download is the response when requesting a download link
type Download struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
DownloadLink string `json:"dlLink"`
} `json:"data"`
}
// MetadataRequestOptions represents all the options when listing folder contents
type MetadataRequestOptions struct {
Limit uint64
Offset uint64
SearchField string
Search string
}
// CreateFolderRequest is used for creating a folder
type CreateFolderRequest struct {
Token string `json:"token"`
Path string `json:"path"`
Name string `json:"name"`
}
// DeleteFolderRequest is used for deleting a folder
type DeleteFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
}
// CopyMoveFileRequest is used for moving/copying a file
type CopyMoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// MoveFolderRequest is used for moving a folder
type MoveFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// RenameFolderRequest is used for renaming a folder
type RenameFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
NewName string `json:"new_name"`
}
// UpdateFileInformation is used for renaming a file
type UpdateFileInformation struct {
Token string `json:"token"`
FileCode string `json:"file_code"`
NewName string `json:"new_name,omitempty"`
Description string `json:"description,omitempty"`
Password string `json:"password,omitempty"`
Public string `json:"public,omitempty"`
}
// RemoveFileRequest is used for deleting a file
type RemoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
}
// Token represents the authentication token
type Token struct {
Token string `json:"token"`
}
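A minimal sketch of how these types are meant to be consumed (the JSON payload, field values and import path are illustrative assumptions):
package main

import (
	"encoding/json"
	"fmt"

	// assumed import path for the api package above
	"github.com/rclone/rclone/backend/uptobox/api"
)

func main() {
	payload := []byte(`{"statusCode":0,"message":"Success","data":{"files":[{"file_name":"a.txt","file_size":42,"file_code":"abc123"}]}}`)
	var resp api.ReadMetadataResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		panic(err)
	}
	for _, f := range resp.Data.Files {
		fmt.Printf("%s (%d bytes, code %s)\n", f.Name, f.Size, f.Code)
	}
}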

backend/uptobox/uptobox.go (new file, 1055 lines)
File diff suppressed because it is too large

View File

@@ -0,0 +1,21 @@
// Test Uptobox filesystem interface
package uptobox_test
import (
"testing"
"github.com/rclone/rclone/backend/uptobox"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestUptobox:"
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*uptobox.Object)(nil),
})
}

View File

@@ -125,7 +125,7 @@ func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieRespo
return nil, errors.Wrap(err, "Error while constructing endpoint URL")
}
u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
u, err := url.Parse(spRoot.Scheme + "://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
if err != nil {
return nil, errors.Wrap(err, "Error while constructing login URL")
}

View File

@@ -10,7 +10,9 @@ package webdav
import (
"bytes"
"context"
"crypto/md5"
"crypto/tls"
"encoding/hex"
"encoding/xml"
"fmt"
"io"
@@ -18,6 +20,7 @@ import (
"net/url"
"os/exec"
"path"
"regexp"
"strconv"
"strings"
"sync"
@@ -34,6 +37,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest"
@@ -113,6 +117,14 @@ func init() {
Name: config.ConfigEncoding,
Help: configEncodingHelp,
Advanced: true,
}, {
Name: "chunk_size",
Help: `Chunk size to use for uploading (Nextcloud only)
Set to 0 to disable chunked uploading.
`,
Advanced: true,
Default: fs.SizeSuffix(0), // off by default
}},
})
}
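As a usage sketch, the new option would be set in the remote config like this (remote name, URL and chunk size are illustrative; per the endpoint check further down, the URL must be the /dav/files/USER form, not /webdav):

[nextcloud]
type = webdav
vendor = nextcloud
url = https://example.com/remote.php/dav/files/USER
chunk_size = 10M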
@@ -126,6 +138,7 @@ type Options struct {
BearerToken string `config:"bearer_token"`
BearerTokenCommand string `config:"bearer_token_command"`
Enc encoder.MultiEncoder `config:"encoding"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
}
// Fs represents a remote webdav
@@ -136,6 +149,7 @@ type Fs struct {
features *fs.Features // optional features
endpoint *url.URL // URL of the host
endpointURL string // endpoint as a string
uploadURL string // upload URL for nextcloud chunked
srv *rest.Client // the connection to the one drive server
pacer *fs.Pacer // pacer for API calls
precision time.Duration // mod time precision
@@ -146,6 +160,7 @@ type Fs struct {
hasMD5 bool // set if can use owncloud style checksums for MD5
hasSHA1 bool // set if can use owncloud style checksums for SHA1
ntlmAuthMu sync.Mutex // mutex to serialize NTLM auth roundtrips
canChunk bool // set if nextcloud and chunk_size is set
}
// Object describes a webdav object
@@ -457,6 +472,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
// set the chunk size for testing
func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
old, f.opt.ChunkSize = f.opt.ChunkSize, cs
return
}
// sets the BearerToken up
func (f *Fs) setBearerToken(token string) {
f.opt.BearerToken = token
@@ -500,6 +521,8 @@ func (f *Fs) fetchAndSetBearerToken() error {
return nil
}
var matchNextcloudURL = regexp.MustCompile(`^.*/dav/files/[^/]+/?$`)
// setQuirks adjusts the Fs for the vendor passed in
func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
switch vendor {
@@ -513,6 +536,12 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
f.precision = time.Second
f.useOCMtime = true
f.hasSHA1 = true
f.canChunk = true
if f.opt.ChunkSize != 0 && !matchNextcloudURL.MatchString(f.endpointURL) {
return errors.New("chunked upload with nextcloud must use /dav/files/USER endpoint not /webdav")
}
f.uploadURL = strings.Replace(f.endpointURL, "/dav/files/", "/dav/uploads/", 1)
fs.Debugf(nil, "Nextcloud chunked upload URL: %q", f.uploadURL)
case "sharepoint":
// To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth
@@ -956,7 +985,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
dstPath := f.filePath(remote)
err := f.mkParentDir(ctx, dstPath)
if err != nil {
return nil, errors.Wrap(err, "Copy mkParentDir failed")
return nil, errors.Wrap(err, "copy mkParentDir failed")
}
destinationURL, err := rest.URLJoin(f.endpoint, dstPath)
if err != nil {
@@ -980,11 +1009,11 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "Copy call failed")
return nil, errors.Wrap(err, "copy call failed")
}
dstObj, err := f.NewObject(ctx, remote)
if err != nil {
return nil, errors.Wrap(err, "Copy NewObject failed")
return nil, errors.Wrap(err, "copy NewObject failed")
}
return dstObj, nil
}
@@ -1047,18 +1076,18 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return fs.ErrorDirExists
}
if err != fs.ErrorDirNotFound {
return errors.Wrap(err, "DirMove dirExists dst failed")
return errors.Wrap(err, "dirMove dirExists dst failed")
}
// Make sure the parent directory exists
err = f.mkParentDir(ctx, dstPath)
if err != nil {
return errors.Wrap(err, "DirMove mkParentDir dst failed")
return errors.Wrap(err, "dirMove mkParentDir dst failed")
}
destinationURL, err := rest.URLJoin(f.endpoint, dstPath)
if err != nil {
return errors.Wrap(err, "DirMove couldn't join URL")
return errors.Wrap(err, "dirMove couldn't join URL")
}
var resp *http.Response
@@ -1067,7 +1096,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Path: addSlash(srcPath),
NoResponse: true,
ExtraHeaders: map[string]string{
"Destination": destinationURL.String(),
"Destination": addSlash(destinationURL.String()),
"Overwrite": "F",
},
}
@@ -1076,7 +1105,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "DirMove MOVE call failed")
return errors.Wrap(err, "dirMove MOVE call failed")
}
return nil
}
@@ -1259,39 +1288,67 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
err = o.fs.mkParentDir(ctx, o.filePath())
if err != nil {
return errors.Wrap(err, "Update mkParentDir failed")
return errors.Wrap(err, "update mkParentDir failed")
}
size := src.Size()
var resp *http.Response
opts := rest.Opts{
Method: "PUT",
Path: o.filePath(),
Body: in,
NoResponse: true,
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(ctx, src),
Options: options,
if o.fs.canChunk && o.fs.opt.ChunkSize > 0 && size > int64(o.fs.opt.ChunkSize) {
err = o.updateChunked(ctx, in, src, options...)
if err != nil {
return err
}
} else {
contentType := fs.MimeType(ctx, src)
filePath := o.filePath()
extraHeaders := o.extraHeaders(ctx, src)
err = o.updateSimple(ctx, in, filePath, size, contentType, extraHeaders, o.fs.endpointURL, options...)
if err != nil {
return err
}
}
// read metadata from remote
o.hasMetaData = false
return o.readMetaData(ctx)
}
func (o *Object) extraHeaders(ctx context.Context, src fs.ObjectInfo) map[string]string {
extraHeaders := map[string]string{}
if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 {
opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5
// Nextcloud stores the checksum you supply (SHA1 or MD5) but only stores one
if o.fs.hasSHA1 {
if sha1, _ := src.Hash(ctx, hash.SHA1); sha1 != "" {
opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
extraHeaders["OC-Checksum"] = "SHA1:" + sha1
}
}
if o.fs.hasMD5 && opts.ExtraHeaders["OC-Checksum"] == "" {
if o.fs.hasMD5 && extraHeaders["OC-Checksum"] == "" {
if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" {
opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
extraHeaders["OC-Checksum"] = "MD5:" + md5
}
}
}
return extraHeaders
}
// Standard update
func (o *Object) updateSimple(ctx context.Context, in io.Reader, filePath string, size int64, contentType string, extraHeaders map[string]string, rootURL string, options ...fs.OpenOption) (err error) {
var resp *http.Response
opts := rest.Opts{
Method: "PUT",
Path: filePath,
Body: in,
NoResponse: true,
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: contentType,
Options: options,
ExtraHeaders: extraHeaders,
RootURL: rootURL,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err)
@@ -1307,9 +1364,85 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
_ = o.Remove(ctx)
return err
}
// read metadata from remote
o.hasMetaData = false
return o.readMetaData(ctx)
return nil
}
// Chunked update for Nextcloud (see
// https://docs.nextcloud.com/server/20/developer_manual/client_apis/WebDAV/chunking.html)
func (o *Object) updateChunked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
hasher := md5.New()
_, err = hasher.Write([]byte(o.filePath()))
if err != nil {
return errors.Wrap(err, "chunked upload couldn't hash URL")
}
uploadDir := "rclone-chunked-upload-" + hex.EncodeToString(hasher.Sum(nil))
fs.Debugf(src, "Starting multipart upload to temp dir %q", uploadDir)
opts := rest.Opts{
Method: "MKCOL",
Path: uploadDir + "/",
NoResponse: true,
RootURL: o.fs.uploadURL,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "making upload directory failed")
}
defer atexit.OnError(&err, func() {
// Try to abort the upload, but ignore the error.
fs.Debugf(src, "Cancelling chunked upload")
_ = o.fs.Purge(ctx, uploadDir)
})()
var (
size = src.Size()
uploadedSize = int64(0)
partObj = &Object{
fs: o.fs,
}
)
for uploadedSize < size {
// Upload chunk
contentLength := int64(partObj.fs.opt.ChunkSize)
if size-uploadedSize < contentLength {
contentLength = size - uploadedSize
}
partObj.remote = fmt.Sprintf("%s/%015d-%015d", uploadDir, uploadedSize, uploadedSize+contentLength)
extraHeaders := map[string]string{}
err = partObj.updateSimple(ctx, io.LimitReader(in, int64(partObj.fs.opt.ChunkSize)), partObj.remote, contentLength, "", extraHeaders, o.fs.uploadURL, options...)
if err != nil {
return errors.Wrap(err, "uploading chunk failed")
}
uploadedSize += contentLength
}
// Finish
var resp *http.Response
opts = rest.Opts{
Method: "MOVE",
Path: o.fs.filePath(path.Join(uploadDir, ".file")),
NoResponse: true,
Options: options,
RootURL: o.fs.uploadURL,
}
destinationURL, err := rest.URLJoin(o.fs.endpoint, o.filePath())
if err != nil {
return errors.Wrap(err, "finalize chunked upload couldn't join URL")
}
opts.ExtraHeaders = o.extraHeaders(ctx, src)
opts.ExtraHeaders["Destination"] = destinationURL.String()
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "finalize chunked upload failed")
}
return nil
}
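Sketched as raw WebDAV traffic, the sequence above amounts to the following (host, user, temp dir hash and 10 MiB chunk offsets are illustrative, following the Nextcloud chunking documentation linked above):

MKCOL /remote.php/dav/uploads/USER/rclone-chunked-upload-<md5>/
PUT   /remote.php/dav/uploads/USER/rclone-chunked-upload-<md5>/000000000000000-000000010485760
PUT   /remote.php/dav/uploads/USER/rclone-chunked-upload-<md5>/000000010485760-000000020971520
MOVE  /remote.php/dav/uploads/USER/rclone-chunked-upload-<md5>/.file
      Destination: https://example.com/remote.php/dav/files/USER/path/to/file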
// Remove an object

View File

@@ -1,10 +1,10 @@
// Test Webdav filesystem interface
package webdav_test
package webdav
import (
"testing"
"github.com/rclone/rclone/backend/webdav"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -13,7 +13,10 @@ import (
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavNextcloud:",
NilObject: (*webdav.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: 1 * fs.MebiByte,
},
})
}
@@ -24,7 +27,10 @@ func TestIntegration2(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavOwncloud:",
NilObject: (*webdav.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
Skip: true,
},
})
}
@@ -35,7 +41,10 @@ func TestIntegration3(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavRclone:",
NilObject: (*webdav.Object)(nil),
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
Skip: true,
},
})
}
@@ -46,6 +55,10 @@ func TestIntegration4(t *testing.T) {
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavNTLM:",
NilObject: (*webdav.Object)(nil),
NilObject: (*Object)(nil),
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}

View File

@@ -96,6 +96,11 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
}
if fs.GetConfig(ctx).AutoConfirm {
return
}
if err = setupRoot(ctx, name, m); err != nil {
log.Fatalf("Failed to configure root directory: %v", err)
}
@@ -161,7 +166,7 @@ type Object struct {
func setupRegion(m configmap.Mapper) {
region, ok := m.Get("region")
if !ok {
if !ok || region == "" {
log.Fatalf("No region set\n")
}
rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)

View File

@@ -62,6 +62,7 @@ docs = [
"sftp.md",
"sugarsync.md",
"tardigrade.md",
"uptobox.md",
"union.md",
"webdav.md",
"yandex.md",

View File

@@ -44,10 +44,10 @@ var commandDefinition = &cobra.Command{
Use: "about remote:",
Short: `Get quota information from the remote.`,
Long: `
` + "`rclone about`" + `prints quota information about a remote to standard
` + "`rclone about`" + ` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
E.g. Typical output from` + "`rclone about remote:`" + `is:
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17G
Used: 7.444G
@@ -75,7 +75,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
A ` + "`--json`" + `flag generates conveniently computer readable output, e.g.
A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,

View File

@@ -54,6 +54,7 @@ import (
_ "github.com/rclone/rclone/cmd/size"
_ "github.com/rclone/rclone/cmd/sync"
_ "github.com/rclone/rclone/cmd/test"
_ "github.com/rclone/rclone/cmd/test/changenotify"
_ "github.com/rclone/rclone/cmd/test/histogram"
_ "github.com/rclone/rclone/cmd/test/info"
_ "github.com/rclone/rclone/cmd/test/makefiles"

View File

@@ -75,8 +75,19 @@ const (
// ShowVersion prints the version to stdout
func ShowVersion() {
osVersion, osKernel := buildinfo.GetOSVersion()
if osVersion == "" {
osVersion = "unknown"
}
if osKernel == "" {
osKernel = "unknown"
}
linking, tagString := buildinfo.GetLinkingAndTags()
fmt.Printf("rclone %s\n", fs.Version)
fmt.Printf("- os/version: %s\n", osVersion)
fmt.Printf("- os/kernel: %s\n", osKernel)
fmt.Printf("- os/type: %s\n", runtime.GOOS)
fmt.Printf("- os/arch: %s\n", runtime.GOARCH)
fmt.Printf("- go/version: %s\n", runtime.Version())
@@ -553,7 +564,7 @@ func Main() {
setupRootCommand(Root)
AddBackendFlags()
if err := Root.Execute(); err != nil {
if strings.HasPrefix(err.Error(), "unknown command") {
if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
}
log.Fatalf("Fatal error: %v", err)

View File

@@ -21,6 +21,7 @@ import (
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/buildinfo"
"github.com/rclone/rclone/vfs"
)
@@ -35,6 +36,7 @@ func init() {
cmd.Aliases = append(cmd.Aliases, "cmount")
}
mountlib.AddRc("cmount", mount)
buildinfo.Tags = append(buildinfo.Tags, "cmount")
}
// Find the option string in the current options

View File

@@ -22,6 +22,7 @@ func init() {
cmd.Root.AddCommand(configCommand)
configCommand.AddCommand(configEditCommand)
configCommand.AddCommand(configFileCommand)
configCommand.AddCommand(configTouchCommand)
configCommand.AddCommand(configShowCommand)
configCommand.AddCommand(configDumpCommand)
configCommand.AddCommand(configProvidersCommand)
@@ -63,6 +64,15 @@ var configFileCommand = &cobra.Command{
},
}
var configTouchCommand = &cobra.Command{
Use: "touch",
Short: `Ensure configuration file exists.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
config.SaveConfig()
},
}
var configShowCommand = &cobra.Command{
Use: "show [<remote>]",
Short: `Print (decrypted) config file, or the config for a single remote.`,
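Usage of the new subcommand is simply the following, which saves the configuration file, creating it if it does not already exist:

rclone config touch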

View File

@@ -36,7 +36,7 @@ var commandDefinition = &cobra.Command{
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting ` + "`--auto-filename`" + `will cause the file name to be retrieved from
Setting ` + "`--auto-filename`" + ` will cause the file name to be retrieved from
the from URL (after any redirections) and used in the destination
path. With ` + "`--print-filename`" + ` in addition, the resulting file name will
be printed.

View File

@@ -3,7 +3,6 @@ package link
import (
"context"
"fmt"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
@@ -13,7 +12,7 @@ import (
)
var (
expire = fs.Duration(time.Hour * 24 * 365 * 100)
expire = fs.DurationOff
unlink = false
)

View File

@@ -334,7 +334,7 @@ metadata about files like in UNIX. One case that may arise is that other program
(incorrectly) interpret this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
@@ -342,19 +342,38 @@ by specifying |-o FileSecurity="D:P(A;;FA;;;OW)"|, for file all access (FA) to t
#### Windows caveats
Note that drives created as Administrator are not visible by other
accounts (including the account that was elevated as
Administrator). So if you start a Windows drive from an Administrative
Command Prompt and then try to access the same drive from Explorer
(which does not run as Administrator), you will not be able to see the
new drive.
Drives created as Administrator are not visible to other accounts,
not even an account that was elevated to Administrator with the
User Account Control (UAC) feature. A result of this is that if you mount
to a drive letter from a Command Prompt run as Administrator, and then try
to access the same drive from Windows Explorer (which does not run as
Administrator), you will not be able to see the mounted drive.
The easiest way around this is to start the drive from a normal
command prompt. It is also possible to start a drive from the SYSTEM
account (using [the WinFsp.Launcher
infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture))
which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage).
If you don't need to access the drive from applications running with
administrative privileges, the easiest way around this is to always
create the mount from a non-elevated command prompt.
To make mapped drives available to the user account that created them
regardless if elevated or not, there is a special Windows setting called
[linked connections](https://docs.microsoft.com/en-us/troubleshoot/windows-client/networking/mapped-drives-not-available-from-elevated-command#detail-to-configure-the-enablelinkedconnections-registry-entry)
that can be enabled.
It is also possible to make a drive mount available to everyone on the system,
by running the process creating it as the built-in SYSTEM account.
There are several ways to do this: One is to use the command-line
utility [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec),
from Microsoft's Sysinternals suite, which has option |-s| to start
processes as the SYSTEM account. Another alternative is to run the mount
command from a Windows Scheduled Task, or a Windows Service, configured
to run as the SYSTEM account. A third alternative is to use the
[WinFsp.Launcher infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture).
Note that when running rclone as another user, it will not use
the configuration file from your profile unless you tell it to
with the [|--config|](https://rclone.org/docs/#config-config-file) option.
Read more in the [install documentation](https://rclone.org/install/).
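For example, a hedged sketch of a SYSTEM-account mount started via PsExec, whose |-s| option runs the process as SYSTEM (paths and remote name illustrative):

PsExec.exe -s rclone mount remote:path/to/files X: --config C:\path\to\rclone.conf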
Note that mapping to a directory path, instead of a drive letter,
does not suffer from the same limitations.
### Limitations

View File

@@ -485,11 +485,15 @@ func (u *UI) removeEntry(pos int) {
// delete the entry at the current position
func (u *UI) delete() {
if u.d == nil || len(u.entries) == 0 {
return
}
ctx := context.Background()
dirPos := u.sortPerm[u.dirPosMap[u.path].entry]
entry := u.entries[dirPos]
cursorPos := u.dirPosMap[u.path]
dirPos := u.sortPerm[cursorPos.entry]
dirEntry := u.entries[dirPos]
u.boxMenu = []string{"cancel", "confirm"}
if obj, isFile := entry.(fs.Object); isFile {
if obj, isFile := dirEntry.(fs.Object); isFile {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
@@ -499,27 +503,33 @@ func (u *UI) delete() {
return "", err
}
u.removeEntry(dirPos)
if cursorPos.entry >= len(u.entries) {
u.move(-1) // move back onto a valid entry
}
return "Successfully deleted file!", nil
}
u.popupBox([]string{
"Delete this file?",
u.fsName + entry.String()})
u.fsName + dirEntry.String()})
} else {
u.boxMenuHandler = func(f fs.Fs, p string, o int) (string, error) {
if o != 1 {
return "Aborted!", nil
}
err := operations.Purge(ctx, f, entry.String())
err := operations.Purge(ctx, f, dirEntry.String())
if err != nil {
return "", err
}
u.removeEntry(dirPos)
if cursorPos.entry >= len(u.entries) {
u.move(-1) // move back onto a valid entry
}
return "Successfully purged folder!", nil
}
u.popupBox([]string{
"Purge this directory?",
"ALL files in it will be deleted",
u.fsName + entry.String()})
u.fsName + dirEntry.String()})
}
}

View File

@@ -7,12 +7,19 @@ import (
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/operations"
"github.com/spf13/cobra"
)
var (
size = int64(-1)
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.Int64VarP(cmdFlags, &size, "size", "", size, "File size hint to preallocate")
}
var commandDefinition = &cobra.Command{
@@ -37,6 +44,13 @@ must fit into RAM. The cutoff needs to be small enough to adhere
the limits of your remote, please see there. Generally speaking,
setting this cutoff too high will decrease your performance.
Use the |--size| flag to preallocate the file in advance at the remote end
and actually stream it, even if the remote backend doesn't support streaming.
|--size| should be the exact size of the input stream in bytes. If the
size of the stream differs from the |--size| passed in, the transfer will
likely fail.
Note that the upload can also not be retried because the data is
not kept around until the upload succeeds. If you need to transfer
a lot of data, you're better off caching locally and then
@@ -51,7 +65,7 @@ a lot of data, you're better off caching locally and then
fdst, dstFileName := cmd.NewFsDstFile(args)
cmd.Run(false, false, command, func() error {
_, err := operations.Rcat(context.Background(), fdst, dstFileName, os.Stdin, time.Now())
_, err := operations.RcatSize(context.Background(), fdst, dstFileName, os.Stdin, size, time.Now())
return err
})
},
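As a usage sketch of the new flag (file name and byte count illustrative; the count passed to |--size| must match the stream exactly):

head -c 1048576 /dev/urandom | rclone rcat --size 1048576 remote:path/file.bin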

View File

@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
// Note: "|" will be replaced by backticks in the help string below
@@ -27,7 +29,7 @@ If the old version contains only dots and digits (for example |v1.54.0|)
then it's a stable release so you won't need the |--beta| flag. Beta releases
have an additional information similar to |v1.54.0-beta.5111.06f1c0c61|.
(if you are a developer and use a locally built rclone, the version number
will end with |-DEV|, you will have to rebuild it as it obvisously can't
will end with |-DEV|, you will have to rebuild it as it obviously can't
be distributed).
If you previously installed rclone via a package manager, the package may

View File

@@ -0,0 +1,11 @@
// +build noselfupdate
package selfupdate
import (
"github.com/rclone/rclone/lib/buildinfo"
)
func init() {
buildinfo.Tags = append(buildinfo.Tags, "noselfupdate")
}

View File

@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (
@@ -143,14 +145,9 @@ func InstallUpdate(ctx context.Context, opt *Options) error {
return errors.New("--stable and --beta are mutually exclusive")
}
gotCmount := false
for _, tag := range buildinfo.Tags {
if tag == "cmount" {
gotCmount = true
break
}
}
if gotCmount && !cmount.ProvidedBy(runtime.GOOS) {
// The `cmount` tag is added by cmd/cmount/mount.go only if build is static.
_, tags := buildinfo.GetLinkingAndTags()
if strings.Contains(" "+tags+" ", " cmount ") && !cmount.ProvidedBy(runtime.GOOS) {
return errors.New("updating would discard the mount FUSE capability, aborting")
}

View File

@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (

View File

@@ -1,3 +1,5 @@
// +build !noselfupdate
package selfupdate
import (

View File

@@ -1,4 +1,5 @@
// +build !windows,!plan9,!js
// +build !noselfupdate
package selfupdate

View File

@@ -1,4 +1,5 @@
// +build plan9 js
// +build !noselfupdate
package selfupdate

View File

@@ -1,4 +1,5 @@
// +build windows
// +build !noselfupdate
package selfupdate

View File

@@ -0,0 +1,5 @@
// +build noselfupdate
package cmd
const selfupdateEnabled = false

View File

@@ -0,0 +1,7 @@
// +build !noselfupdate
package cmd
// This constant must be in the `cmd` package rather than `cmd/selfupdate`
// to prevent build failure due to dependency loop.
const selfupdateEnabled = true
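For reference, a build with the new tag picks up the noselfupdate variant of this constant, dropping the selfupdate command and the unknown-command hint (a sketch, run from the repository root):

go build -tags noselfupdate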

View File

@@ -0,0 +1,54 @@
// Package changenotify tests rclone's changenotify support
package changenotify
import (
"context"
"errors"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/spf13/cobra"
)
var (
pollInterval = 10 * time.Second
)
func init() {
test.Command.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.DurationVarP(cmdFlags, &pollInterval, "poll-interval", "", pollInterval, "Time to wait between polling for changes.")
}
var commandDefinition = &cobra.Command{
Use: "changenotify remote:",
Short: `Log any change notify requests for the remote passed in.`,
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
ctx := context.Background()
// Start polling function
features := f.Features()
if do := features.ChangeNotify; do != nil {
pollChan := make(chan time.Duration)
do(ctx, changeNotify, pollChan)
pollChan <- pollInterval
fs.Logf(nil, "Waiting for changes, polling every %v", pollInterval)
} else {
return errors.New("poll-interval is not supported by this remote")
}
select {}
},
}
// changeNotify logs the relativePath and entryType of each change
// notification received from the remote.
func changeNotify(relativePath string, entryType fs.EntryType) {
fs.Logf(nil, "%q: %v", relativePath, entryType)
}
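A usage sketch (remote name illustrative):

rclone test changenotify remote: --poll-interval 30s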

View File

@@ -3,12 +3,12 @@
package makefiles
import (
cryptrand "crypto/rand"
"io"
"log"
"math/rand"
"os"
"path/filepath"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/test"
@@ -27,8 +27,10 @@ var (
maxFileSize = fs.SizeSuffix(100)
minFileNameLength = 4
maxFileNameLength = 12
seed = int64(1)
// Globals
randSource *rand.Rand
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
@@ -44,6 +46,7 @@ func init() {
flags.FVarP(cmdFlags, &maxFileSize, "max-file-size", "", "Maximum size of files to create")
flags.IntVarP(cmdFlags, &minFileNameLength, "min-name-length", "", minFileNameLength, "Minimum size of file names")
flags.IntVarP(cmdFlags, &maxFileNameLength, "max-name-length", "", maxFileNameLength, "Maximum size of file names")
flags.Int64VarP(cmdFlags, &seed, "seed", "", seed, "Seed for the random number generator (0 for random)")
}
var commandDefinition = &cobra.Command{
@@ -51,28 +54,36 @@ var commandDefinition = &cobra.Command{
Short: `Make a random file hierarchy in <dir>`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
if seed == 0 {
seed = time.Now().UnixNano()
fs.Logf(nil, "Using random seed = %d", seed)
}
randSource = rand.New(rand.NewSource(seed))
outputDirectory := args[0]
directoriesToCreate = numberOfFiles / averageFilesPerDirectory
averageSize := (minFileSize + maxFileSize) / 2
log.Printf("Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
start := time.Now()
fs.Logf(nil, "Creating %d files of average size %v in %d directories in %q.", numberOfFiles, averageSize, directoriesToCreate, outputDirectory)
root := &dir{name: outputDirectory, depth: 1}
for totalDirectories < directoriesToCreate {
root.createDirectories()
}
dirs := root.list("", []string{})
totalBytes := int64(0)
for i := 0; i < numberOfFiles; i++ {
dir := dirs[rand.Intn(len(dirs))]
writeFile(dir, fileName())
dir := dirs[randSource.Intn(len(dirs))]
totalBytes += writeFile(dir, fileName())
}
log.Printf("Done.")
dt := time.Since(start)
fs.Logf(nil, "Written %viB in %v at %viB/s.", fs.SizeSuffix(totalBytes), dt.Round(time.Millisecond), fs.SizeSuffix((totalBytes*int64(time.Second))/int64(dt)))
},
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
length := rand.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
name = random.String(length)
length := randSource.Intn(maxFileNameLength-minFileNameLength) + minFileNameLength
name = random.StringFn(length, randSource.Intn)
if _, found := fileNames[name]; !found {
break
}
@@ -99,7 +110,7 @@ func (d *dir) createDirectories() {
}
d.children = append(d.children, newDir)
totalDirectories++
switch rand.Intn(4) {
switch randSource.Intn(4) {
case 0:
if d.depth < maxDepth {
newDir.createDirectories()
@@ -122,7 +133,7 @@ func (d *dir) list(path string, output []string) []string {
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) {
func writeFile(dir, name string) int64 {
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
@@ -132,8 +143,8 @@ func writeFile(dir, name string) {
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := rand.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, cryptrand.Reader, size)
size := randSource.Int63n(int64(maxFileSize-minFileSize)) + int64(minFileSize)
_, err = io.CopyN(fd, randSource, size)
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
@@ -141,4 +152,6 @@ func writeFile(dir, name string) {
if err != nil {
log.Fatalf("Failed to close file %q: %v", path, err)
}
fs.Infof(path, "Written file size %v", fs.SizeSuffix(size))
return size
}
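A usage sketch of the new flag (directory and seed illustrative; --seed 0 picks and logs a time-based seed as shown above):

rclone test makefiles --seed 42 /tmp/test-hierarchy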

View File

@@ -29,13 +29,16 @@ var commandDefinition = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Long: `
Show the rclone version number, the go version, the build target OS and
architecture, build tags and the type of executable (static or dynamic).
Show the rclone version number, the go version, the build target
OS and architecture, the runtime OS and kernel version and bitness,
build tags and the type of executable (static or dynamic).
For example:
$ rclone version
rclone v1.54
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16

View File

@@ -26,12 +26,12 @@ func TestVersionWorksWithoutAccessibleConfigFile(t *testing.T) {
}
// re-wire
oldOsStdout := os.Stdout
oldConfigPath := config.ConfigPath
config.ConfigPath = path
oldConfigPath := config.GetConfigPath()
assert.NoError(t, config.SetConfigPath(path))
os.Stdout = nil
defer func() {
os.Stdout = oldOsStdout
config.ConfigPath = oldConfigPath
assert.NoError(t, config.SetConfigPath(oldConfigPath))
}()
cmd.Root.SetArgs([]string{"version"})

View File

@@ -152,6 +152,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tardigrade" home="https://tardigrade.io/" config="/tardigrade/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}

View File

@@ -372,7 +372,7 @@ put them back in again.` >}}
* Fred <fred@creativeprojects.tech>
* Sébastien Gross <renard@users.noreply.github.com>
* Maxime Suret <11944422+msuret@users.noreply.github.com>
* Caleb Case <caleb@storj.io>
* Caleb Case <caleb@storj.io> <calebcase@gmail.com>
* Ben Zenker <imbenzenker@gmail.com>
* Martin Michlmayr <tbm@cyrius.com>
* Brandon McNama <bmcnama@pagerduty.com>
@@ -477,3 +477,13 @@ put them back in again.` >}}
* Lucas Messenger <lmesseng@cisco.com>
* Manish Kumar <krmanish260@gmail.com>
* x0b <x0bdev@gmail.com>
* CERN through the CS3MESH4EOSC Project
* Nick Gaya <nicholasgaya+github@gmail.com>
* Ashok Gelal <401055+ashokgelal@users.noreply.github.com>
* Dominik Mydlil <dominik.mydlil@outlook.com>
* Nazar Mishturak <nazarmx@gmail.com>
* Ansh Mittal <iamAnshMittal@gmail.com>
* noabody <noabody@yahoo.com>
* OleFrost <82263101+olefrost@users.noreply.github.com>
* Kenny Parsons <kennyparsons93@gmail.com>
* Jeffrey Tolar <tolar.jeffrey@gmail.com>

View File

@@ -392,6 +392,22 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
Public access level of a container: blob, container.
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
- Default: ""
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request. It's the default value
- "blob"
- Blob data within this container can be read via anonymous request.
- "container"
- Allow full public read access for container and blob data.
{{< rem autogenerated options stop >}}
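As a usage sketch, a container with anonymous blob read access might be created like this (remote and container names illustrative, and that mkdir applies the level at container creation is an assumption):

rclone mkdir --azureblob-public-access blob azblob:mycontainer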
### Limitations ###

View File

@@ -172,11 +172,6 @@ the file instead of hiding it.
Old versions of files, where available, are visible using the
`--b2-versions` flag.
**NB** Note that `--b2-versions` does not work with crypt at the
moment [#1627](https://github.com/rclone/rclone/issues/1627). Using
[--backup-dir](/docs/#backup-dir-dir) with rclone is the recommended
way of working around this.
If you wish to remove all the old versions then you can use the
`rclone cleanup remote:bucket` command which will delete all the old
versions of files, leaving the current ones intact. You can also

View File

@@ -5,6 +5,198 @@ description: "Rclone Changelog"
# Changelog
## v1.55.1 - 2021-04-26
[See commits](https://github.com/rclone/rclone/compare/v1.55.0...v1.55.1)
* Bug Fixes
* selfupdate
* Don't detect FUSE if build is static (Ivan Andreev)
* Add build tag noselfupdate (Ivan Andreev)
* sync: Fix incorrect error reported by graceful cutoff (Nick Craig-Wood)
* install.sh: fix macOS arm64 download (Nick Craig-Wood)
* build: Fix version numbers in android branch builds (Nick Craig-Wood)
* docs
* Contributing.md: update setup instructions for go1.16 (Nick Gaya)
* WinFsp 2021 is out of beta (albertony)
* Minor cleanup of space around code section (albertony)
* Fixed some typos (albertony)
* VFS
* Fix a code path which allows dirty data to be removed causing data loss (Nick Craig-Wood)
* Compress
* Fix compressed name regexp (buengese)
* Drive
* Fix backend copyid of google doc to directory (Nick Craig-Wood)
* Don't open browser when service account... (Ansh Mittal)
* Dropbox
* Add missing team_data.member scope for use with --impersonate (Nick Craig-Wood)
* Fix About after scopes changes - rclone config reconnect needed (Nick Craig-Wood)
* Fix Unable to decrypt returned paths from changeNotify (Nick Craig-Wood)
* FTP
* Fix implicit TLS (Ivan Andreev)
* Onedrive
* Work around for random "Unable to initialize RPS" errors (OleFrost)
* SFTP
* Revert sftp library to v1.12.0 from v1.13.0 to fix performance regression (Nick Craig-Wood)
* Fix Update ReadFrom failed: failed to send packet: EOF errors (Nick Craig-Wood)
* Zoho
* Fix error when region isn't set (buengese)
* Do not ask for mountpoint twice when using headless setup (buengese)
## v1.55.0 - 2021-03-31
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)
* New commands
* [selfupdate](/commands/rclone_selfupdate/) (Ivan Andreev)
* Allows rclone to update itself in-place or via a package (using `--package` flag)
* Reads cryptographically signed signatures for non beta releases
* Works on all OSes.
* [test](/commands/rclone_test/) - these are test commands - use with care!
* `histogram` - Makes a histogram of file name characters.
* `info` - Discovers file name or other limitations for paths.
* `makefiles` - Make a random file hierarchy for testing.
* `memory` - Load all the objects at remote:path into memory and report memory stats.
* New Features
* [Connection strings](/docs/#connection-strings)
* Config parameters can now be passed as part of the remote name as a connection string.
* For example to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:`
* Make sure we don't save on the fly remote config to the config file (Nick Craig-Wood)
* Make sure backends with additional config have a different name for caching (Nick Craig-Wood)
* This work was sponsored by CERN, through the [CS3MESH4EOSC Project](https://cs3mesh4eosc.eu/).
* CS3MESH4EOSC has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 863353.
* build
* Update go build version to go1.16 and raise minimum go version to go1.13 (Nick Craig-Wood)
* Make a macOS ARM64 build to support Apple Silicon (Nick Craig-Wood)
* Install macfuse 4.x instead of osxfuse 3.x (Nick Craig-Wood)
* Use `GO386=softfloat` instead of deprecated `GO386=387` for 386 builds (Nick Craig-Wood)
* Disable IOS builds for the time being (Nick Craig-Wood)
* Android builds made with up to date NDK (x0b)
* Add an rclone user to the Docker image but don't use it by default (cynthia kwok)
* dedupe: Make largest directory primary to minimize data moved (Saksham Khanna)
* config
* Wrap config library in an interface (Fionera)
* Make config file system pluggable (Nick Craig-Wood)
* `--config ""` or `"/notfound"` for in memory config only (Nick Craig-Wood)
* Clear fs cache of stale entries when altering config (Nick Craig-Wood)
* copyurl: Add option to print resulting auto-filename (albertony)
* delete: Make `--rmdirs` obey the filters (Nick Craig-Wood)
* docs - many fixes and reworks from edwardxml, albertony, pvalls, Ivan Andreev, Evan Harris, buengese, Alexey Tabakman
* encoder/filename - add SCSU as tables (Klaus Post)
* Add multiple paths support to `--compare-dest` and `--copy-dest` flag (K265)
* filter: Make `--exclude "dir/"` equivalent to `--exclude "dir/**"` (Nick Craig-Wood)
* fshttp: Add DSCP support with `--dscp` for QoS with differentiated services (Max Sum)
* lib/cache: Add Delete and DeletePrefix methods (Nick Craig-Wood)
* lib/file
* Make pre-allocate detect disk full errors and return them (Nick Craig-Wood)
* Don't run preallocate concurrently (Nick Craig-Wood)
* Retry preallocate on EINTR (Nick Craig-Wood)
* operations: Made copy and sync operations obey a RetryAfterError (Ankur Gupta)
* rc
* Add string alternatives for setting options over the rc (Nick Craig-Wood)
* Add `options/local` to see the options configured in the context (Nick Craig-Wood)
* Add `_config` parameter to set global config for just this rc call (Nick Craig-Wood)
* Implement passing filter config with `_filter` parameter (Nick Craig-Wood)
* Add `fscache/clear` and `fscache/entries` to control the fs cache (Nick Craig-Wood)
* Avoid +Inf value for speed in `core/stats` (albertony)
* Add a full set of stats to `core/stats` (Nick Craig-Wood)
* Allow `fs=` params to be a JSON blob (Nick Craig-Wood)
* rcd: Added systemd notification during the `rclone rcd` command. (Naveen Honest Raj)
* rmdirs: Make `--rmdirs` obey the filters (Nick Craig-Wood)
* version: Show build tags and type of executable (Ivan Andreev)
* Bug Fixes
* install.sh: make it fail on download errors (Ivan Andreev)
* Fix excessive retries missing `--max-duration` timeout (Nick Craig-Wood)
* Fix crash when `--low-level-retries=0` (Nick Craig-Wood)
* Fix failed token refresh on mounts created via the rc (Nick Craig-Wood)
* fshttp: Fix bandwidth limiting after bad merge (Nick Craig-Wood)
* lib/atexit
* Unregister interrupt handler once it has fired so users can interrupt again (Nick Craig-Wood)
* Fix occasional failure to unmount with CTRL-C (Nick Craig-Wood)
* Fix deadlock calling Finalise while Run is running (Nick Craig-Wood)
* lib/rest: Fix multipart uploads not stopping on context cancel (Nick Craig-Wood)
* Mount
* Allow mounting to root directory on windows (albertony)
* Improved handling of relative paths on windows (albertony)
* Fix unicode issues with accented characters on macOS (Nick Craig-Wood)
* Docs: document the new FileSecurity option in WinFsp 2021 (albertony)
* Docs: add note about volume path syntax on windows (albertony)
* Fix caching of old directories after renaming them (Nick Craig-Wood)
* Update cgofuse to the latest version to bring in macfuse 4 fix (Nick Craig-Wood)
* VFS
* `--vfs-used-is-size` to report used space using recursive scan (tYYGH)
* Don't set modification time if it was already correct (Nick Craig-Wood)
* Fix Create causing windows explorer to truncate files on CTRL-C CTRL-V (Nick Craig-Wood)
* Fix modtimes not updating when writing via cache (Nick Craig-Wood)
* Fix modtimes changing by fractional seconds after upload (Nick Craig-Wood)
* Fix modtime set if `--vfs-cache-mode writes`/`full` and no write (Nick Craig-Wood)
* Rename files in cache and cancel uploads on directory rename (Nick Craig-Wood)
* Fix directory renaming by renaming dirs cached in memory (Nick Craig-Wood)
* Local
* Add flag `--local-no-preallocate` (David Sze)
* Make `nounc` an advanced option except on Windows (albertony)
* Don't ignore preallocate disk full errors (Nick Craig-Wood)
* Cache
* Add `--fs-cache-expire-duration` to control the fs cache (Nick Craig-Wood)
* Crypt
* Add option to not encrypt data (Vesnyx)
* Log hash ok on upload (albertony)
* Azure Blob
* Add container public access level support. (Manish Kumar)
* B2
* Fix HTML files downloaded via cloudflare (Nick Craig-Wood)
* Box
* Fix transfers getting stuck on token expiry after API change (Nick Craig-Wood)
* Chunker
* Partially implement no-rename transactions (Maxwell Calman)
* Drive
* Don't stop server side copy if couldn't read description (Nick Craig-Wood)
* Pass context on to drive SDK - to help with cancellation (Nick Craig-Wood)
* Dropbox
* Add polling for changes support (Robert Thomas)
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Raise priority of rate limited message to INFO to make it more noticeable (Nick Craig-Wood)
* Fichier
* Implement copy & move (buengese)
* Implement public link (buengese)
* FTP
* Implement Shutdown method (Nick Craig-Wood)
* Close idle connections after `--ftp-idle-timeout` (1m by default) (Nick Craig-Wood)
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Add `--ftp-close-timeout` flag for use with awkward ftp servers (Nick Craig-Wood)
* Retry connections and logins on 421 errors (Nick Craig-Wood)
* Hdfs
* Fix permissions for when directory is created (Lucas Messenger)
* Onedrive
* Make `--timeout 0` work properly (Nick Craig-Wood)
* S3
* Fix `--s3-profile` which wasn't working (Nick Craig-Wood)
* SFTP
* Close idle connections after `--sftp-idle-timeout` (1m by default) (Nick Craig-Wood)
* Fix "file not found" errors for read once servers (Nick Craig-Wood)
* Fix SetModTime stat failed: object not found with `--sftp-set-modtime=false` (Nick Craig-Wood)
* Swift
* Update github.com/ncw/swift to v2.0.0 (Nick Craig-Wood)
* Implement copying large objects (nguyenhuuluan434)
* Union
* Fix crash when using epff policy (Nick Craig-Wood)
* Fix union attempting to update files on a read only file system (Nick Craig-Wood)
* Refactor to use fspath.SplitFs instead of fs.ParseRemote (Nick Craig-Wood)
* Fix initialisation broken in refactor (Nick Craig-Wood)
* WebDAV
* Add support for sharepoint with NTLM authentication (Rauno Ots)
* Make sharepoint-ntlm docs more consistent (Alex Chen)
* Improve terminology in sharepoint-ntlm docs (Ivan Andreev)
* Disable HTTP/2 for NTLM authentication (georne)
* Fix sharepoint-ntlm error 401 for parallel actions (Ivan Andreev)
* Check that purged directory really exists (Ivan Andreev)
* Yandex
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Zoho
* Replace client id - you will need to `rclone config reconnect` after this (buengese)
* Add forgotten setupRegion() to NewFs - this finally fixes regions other than EU (buengese)
## v1.54.1 - 2021-03-08
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)

View File

@@ -416,4 +416,27 @@ Choose how chunker should handle files with missing or invalid chunks.
- "false"
- Warn user, skip incomplete file and proceed.
#### --chunker-transactions
Choose how chunker should handle temporary files during transactions.
- Config: transactions
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
- Type: string
- Default: "rename"
- Examples:
- "rename"
- Rename temporary files after a successful transaction.
- "norename"
- Leave temporary file names and write transaction ID to metadata file.
- Metadata is required for no rename transactions (meta format cannot be "none").
- If you are using norename transactions you should be careful not to downgrade Rclone
- as older versions of Rclone don't support this transaction style and will misinterpret
- files manipulated by norename transactions.
- This method is EXPERIMENTAL, don't use on production systems.
- "auto"
- Rename or norename will be used depending on capabilities of the backend.
- If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems.
{{< rem autogenerated options stop >}}
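A hedged config sketch enabling the experimental norename mode (remote names illustrative; note that the meta format must not be "none"):

[mychunker]
type = chunker
remote = mydrive:chunked
transactions = norename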

View File

@@ -72,11 +72,13 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the empty directory at path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone selfupdate](/commands/rclone_selfupdate/) - Update the rclone binary.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone test](/commands/rclone_test/) - Run a test command
* [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.

View File

@@ -15,15 +15,16 @@ Copy url content to dest.
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting --auto-filename will cause the file name to be retrieved from
Setting `--auto-filename` will cause the file name to be retrieved from
the from URL (after any redirections) and used in the destination
path.
path. With `--print-filename` in addition, the resulting file name will
be printed.
Setting --no-clobber will prevent overwriting file on the
Setting `--no-clobber` will prevent overwriting file on the
destination if there is one with the same name.
Setting --stdout or making the output file name "-" will cause the
output to be written to standard output.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
```
@@ -33,10 +34,11 @@ rclone copyurl https://example.com dest:path [flags]
## Options
```
-a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
--stdout Write the output to stdout rather than a file
-a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
```
See the [global flags page](/flags/) for global options not listed here.

View File

@@ -17,8 +17,8 @@ By default `dedupe` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different. This is known as deduping by name.
Deduping by name is only useful with backends like Google Drive which
can have duplicate file names. It can be run on wrapping backends
Deduping by name is only useful with a small group of backends (e.g. Google Drive,
Opendrive) that can have duplicate file names. It can be run on wrapping backends
(e.g. crypt) if they wrap a backend which supports duplicate file
names.

View File

@@ -29,15 +29,15 @@ is an **empty** **existing** directory:
On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows)
for details. The following examples will mount to an automatically assigned drive,
to specific drive letter `X:`, to path `C:\path\to\nonexistent\directory`
(which must be **non-existent** subdirectory of an **existing** parent directory or drive,
to specific drive letter `X:`, to path `C:\path\parent\mount`
(where parent directory or drive must exist, and mount must **not** exist,
and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and
the last example will mount as network share `\\cloud\remote` and map it to an
automatically assigned drive:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\to\nonexistent\directory
rclone mount remote:path/to/files C:\path\parent\mount
rclone mount remote:path/to/files \\cloud\remote
When the program ends while in foreground mode, either via Ctrl+C or receiving
@@ -91,14 +91,14 @@ and experience unexpected program errors, freezes or other issues, consider moun
as a network drive instead.
When mounting as a fixed disk drive you can either mount to an unused drive letter,
or to a path - which must be **non-existent** subdirectory of an **existing** parent
or to a path representing a **non-existent** subdirectory of an **existing** parent
directory or drive. Using the special value `*` will tell rclone to
automatically assign the next available drive letter, starting with Z: and moving backward.
Examples:
rclone mount remote:path/to/files *
rclone mount remote:path/to/files X:
rclone mount remote:path/to/files C:\path\parent\mount
Option `--volname` can be used to set a custom volume name for the mounted
@@ -171,10 +171,24 @@ Note that the mapping of permissions is not always trivial, and the result
you see in Windows Explorer may not be exactly like you expected.
For example, when setting a value that includes write access, this will be
mapped to individual permissions "write attributes", "write data" and "append data",
but not "write extended attributes" (WinFsp does not support extended attributes,
see [this](https://github.com/billziss-gh/winfsp/wiki/NTFS-Compatibility)).
Windows will then show this as basic permission "Special" instead of "Write",
because "Write" includes the "write extended attributes" permission.
but not "write extended attributes". Windows will then show this as basic
permission "Special" instead of "Write", because "Write" includes the
"write extended attributes" permission.
If you set POSIX permissions for only allowing access to the owner, using
`--file-perms 0600 --dir-perms 0700`, the user group and the built-in "Everyone"
group will still be given some special permissions, such as "read attributes"
and "read permissions", in Windows. This is done for compatibility reasons,
e.g. to allow users without additional permissions to be able to read basic
metadata about files like in UNIX. One case that may arise is that other programs
(incorrectly) interpret this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"
by specifying `-o FileSecurity="D:P(A;;FA;;;OW)"`, for file all access (FA) to the owner (OW).
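
For example, a mount using this workaround could be started like this (the drive letter and remote are placeholders):

```
rclone mount remote:path/to/files X: -o FileSecurity="D:P(A;;FA;;;OW)"
```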
### Windows caveats
@@ -378,6 +392,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
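
A sketch of the workaround (the paths and remote are placeholders), giving each instance its own cache hierarchy:

```
# Two mounts of overlapping remotes, kept safe by separate --cache-dir values
rclone mount remote: /mnt/a --vfs-cache-mode writes --cache-dir ~/.cache/rclone-a &
rclone mount remote: /mnt/b --vfs-cache-mode writes --cache-dir ~/.cache/rclone-b &
```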
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -521,6 +542,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
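
For example (the remote and mountpoint are placeholders):

```
# Make df report actual usage, computed by scanning the remote
rclone mount remote: /path/to/mountpoint --vfs-used-is-size --vfs-cache-mode writes
df -h /path/to/mountpoint
```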
```
rclone mount remote:path /path/to/mountpoint [flags]
@@ -565,6 +599,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
--volname string Set the volume name. Supported on Windows and OSX only.

View File

@@ -0,0 +1,84 @@
---
title: "rclone selfupdate"
description: "Update the rclone binary."
slug: rclone_selfupdate
url: /commands/rclone_selfupdate/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/selfupdate/ and as part of making a release run "make commanddocs"
---
# rclone selfupdate
Update the rclone binary.
## Synopsis
This command downloads the latest release of rclone and replaces
the currently running binary. The download is verified with a hashsum
and cryptographically signed signature.
If used without flags (or with the implied `--stable` flag), this command
will install the latest stable release. However, some issues may be fixed
(or features added) only in the latest beta release. In such cases you should
run the command with the `--beta` flag, i.e. `rclone selfupdate --beta`.
You can check in advance what version would be installed by adding the
`--check` flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable
rclone release to troubleshoot your issue or to get a bleeding edge feature.
The `--version VER` flag, if given, will update to that concrete version
instead of the latest one. If you omit the micro version from `VER` (for
example `1.53`), the latest matching micro version will be used.
Upon successful update rclone will print a message that contains the previous
version number. Note it down in case you later decide to revert the update;
you can then run the following command: `rclone selfupdate [--beta] OLDVER`.
If the old version contains only dots and digits (for example `v1.54.0`)
then it's a stable release so you won't need the `--beta` flag. Beta releases
have additional information similar to `v1.54.0-beta.5111.06f1c0c61`.
(If you are a developer and use a locally built rclone, the version number
will end with `-DEV`; you will have to rebuild it as it obviously can't
be distributed.)
If you previously installed rclone via a package manager, the package may
include local documentation or configure services. You may wish to update
with the flag `--package deb` or `--package rpm` (whichever is correct for
your OS) to update these too. This command with the default `--package zip`
will update only the rclone executable, so the local manual may become
inaccurate after the update.
The `rclone mount` command (https://rclone.org/commands/rclone_mount/) may
or may not support extended FUSE options depending on the build and OS.
`selfupdate` will refuse to update if the capability would be discarded.
Note: Windows forbids deletion of a currently running executable so this
command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55.
If it fails for you with the message `unknown command "selfupdate"` then
you will need to update manually following the install instructions located
at https://rclone.org/install/
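Some typical invocations, using only the flags documented below:

```
rclone selfupdate --check          # report the version that would be installed
rclone selfupdate                  # install the latest stable release
rclone selfupdate --beta           # install the latest beta instead
rclone selfupdate --version 1.54   # install the latest matching 1.54.x release
```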
```
rclone selfupdate [flags]
```
## Options
```
--beta Install beta release.
--check Check for latest release, do not download.
-h, --help help for selfupdate
--output string Save the downloaded binary at a given path (default: replace running binary)
--package string Package format: zip|deb|rpm (default: zip)
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.

View File

@@ -134,6 +134,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -277,6 +284,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve dlna remote:path [flags]
@@ -309,6 +329,7 @@ rclone serve dlna remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
```

View File

@@ -133,6 +133,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -276,6 +283,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -394,6 +414,7 @@ rclone serve ftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
```

View File

@@ -205,6 +205,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -348,6 +355,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
```
rclone serve http remote:path [flags]
@@ -390,6 +410,7 @@ rclone serve http remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
```

View File

@@ -144,6 +144,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -287,6 +294,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -404,6 +424,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
```

View File

@@ -213,6 +213,13 @@ for two reasons. Firstly because it is only checked every
`--vfs-cache-poll-interval`. Secondly because open files cannot be
evicted from the cache.
You **should not** run two copies of rclone using the same VFS cache
with the same or overlapping remotes if using `--vfs-cache-mode > off`.
This can potentially cause data corruption if you do. You can work
around this by giving each rclone its own cache hierarchy with
`--cache-dir`. You don't need to worry about this if the remotes in
use don't overlap.
### --vfs-cache-mode off
In this mode (the default) the cache will read directly from the remote and write
@@ -356,6 +363,19 @@ If the flag is not provided on the command line, then its default value depends
on the operating system where rclone runs: "true" on Windows and macOS, "false"
otherwise. If the flag is provided without a value, then it is "true".
## Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used.
If you need this information to be available when running `df` on the
filesystem, then pass the flag `--vfs-used-is-size` to rclone.
With this flag set, instead of relying on the backend to report this
information, rclone will scan the whole remote similar to `rclone size`
and compute the total used space itself.
_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
result is accurate. However, this is very inefficient and may cost lots of API
calls resulting in extra charges. Use it as a last resort and only with caching.
## Auth Proxy
If you supply the parameter `--auth-proxy /path/to/program` then
@@ -482,6 +502,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--vfs-read-wait duration Time to wait for in-sequence read before seeking. (default 20ms)
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size.
--vfs-write-back duration Time to writeback files after last use when using cache. (default 5s)
--vfs-write-wait duration Time to wait for in-sequence write before giving error. (default 1s)
```

View File

@@ -15,7 +15,8 @@ Make source and dest identical, modifying destination only.
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.
source, including deleting files if necessary (except duplicate
objects, see below).
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -23,7 +24,8 @@ source, including deleting files if necessary.
rclone sync -i SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point.
errors at any point. Duplicate objects (files with the same name, on
those providers that support it) are also not yet handled.
It is always the contents of the directory that is synced, not the
directory so when source:path is a directory, it's the contents of
@@ -35,6 +37,9 @@ go there.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.
```
rclone sync source:path dest:path [flags]

View File

@@ -0,0 +1,41 @@
---
title: "rclone test"
description: "Run a test command"
slug: rclone_test
url: /commands/rclone_test/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/ and as part of making a release run "make commanddocs"
---
# rclone test
Run a test command
## Synopsis
Rclone test is used to run test commands.
Select which test command you want with the subcommand, eg
rclone test memory remote:
Each subcommand has its own options which you can see in their help.
**NB** Be careful running these commands, they may do strange things
so reading their documentation first is recommended.
## Options
```
-h, --help help for test
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
* [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters.
* [rclone test info](/commands/rclone_test_info/) - Discovers file name or other limitations for paths.
* [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in <dir>
* [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats.

View File

@@ -0,0 +1,36 @@
---
title: "rclone test histogram"
description: "Makes a histogram of file name characters."
slug: rclone_test_histogram
url: /commands/rclone_test_histogram/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/histogram/ and as part of making a release run "make commanddocs"
---
# rclone test histogram
Makes a histogram of file name characters.
## Synopsis
This command outputs JSON which shows the histogram of characters used
in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for
the rclone developers when developing filename compression.
```
rclone test histogram [remote:path] [flags]
```
## Options
```
-h, --help help for histogram
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone test](/commands/rclone_test/) - Run a test command

View File

@@ -0,0 +1,44 @@
---
title: "rclone test info"
description: "Discovers file name or other limitations for paths."
slug: rclone_test_info
url: /commands/rclone_test_info/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/info/ and as part of making a release run "make commanddocs"
---
# rclone test info
Discovers file name or other limitations for paths.
## Synopsis
rclone test info discovers what filenames and upload methods are possible
to write to the paths passed in, and how long file names can be. It can take some
time. It will write test files into the remote:path passed in. It outputs
a bit of Go code for each one.
**NB** this can create undeletable files and other hazards - use with care
```
rclone test info [remote:path]+ [flags]
```
## Options
```
--all Run all tests.
--check-control Check control characters.
--check-length Check max filename length.
--check-normalization Check UTF-8 Normalization.
--check-streaming Check uploads with indeterminate file size.
-h, --help help for info
--upload-wait duration Wait after writing a file.
--write-json string Write results to file.
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone test](/commands/rclone_test/) - Run a test command

View File

@@ -0,0 +1,33 @@
---
title: "rclone test makefiles"
description: "Make a random file hierarchy in <dir>"
slug: rclone_test_makefiles
url: /commands/rclone_test_makefiles/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/makefiles/ and as part of making a release run "make commanddocs"
---
# rclone test makefiles
Make a random file hierarchy in <dir>
```
rclone test makefiles <dir> [flags]
```
## Options
```
--files int Number of files to create (default 1000)
--files-per-directory int Average number of files per directory (default 10)
-h, --help help for makefiles
--max-file-size SizeSuffix Maximum size of files to create (default 100)
--max-name-length int Maximum size of file names (default 12)
--min-file-size SizeSuffix Minimum size of file to create
--min-name-length int Minimum size of file names (default 4)
```
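For example (the directory and values are placeholders), to generate a small test tree:

```
rclone test makefiles --files 100 --max-file-size 10k /tmp/testfiles
```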
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone test](/commands/rclone_test/) - Run a test command

View File

@@ -0,0 +1,27 @@
---
title: "rclone test memory"
description: "Load all the objects at remote:path into memory and report memory stats."
slug: rclone_test_memory
url: /commands/rclone_test_memory/
# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/memory/ and as part of making a release run "make commanddocs"
---
# rclone test memory
Load all the objects at remote:path into memory and report memory stats.
```
rclone test memory remote:path [flags]
```
## Options
```
-h, --help help for memory
```
See the [global flags page](/flags/) for global options not listed here.
## SEE ALSO
* [rclone test](/commands/rclone_test/) - Run a test command

View File

@@ -12,14 +12,21 @@ Show the version number.
## Synopsis
Show the rclone version number, the go version, the build target OS and
architecture, build tags and the type of executable (static or dynamic).
For example:
$ rclone version
rclone v1.54
- os/type: linux
- os/arch: amd64
- go/version: go1.16
- go/linking: static
- go/tags: none
Note: before rclone version 1.55 the os/type and os/arch lines were merged,
and the "go/version" line was tagged as "go version".
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.

View File

@@ -517,6 +517,20 @@ names, or for debugging purposes.
- Type: bool
- Default: false
#### --crypt-no-data-encryption
Option to either encrypt file data or leave it unencrypted.
- Config: no_data_encryption
- Env Var: RCLONE_CRYPT_NO_DATA_ENCRYPTION
- Type: bool
- Default: false
- Examples:
- "true"
- Don't encrypt file data, leave it unencrypted.
- "false"
- Encrypt file data.
### Backend commands
Here are the commands specific to the crypt backend.

View File

@@ -57,6 +57,7 @@ See the following for detailed instructions for
* [SugarSync](/sugarsync/)
* [Tardigrade](/tardigrade/)
* [Union](/union/)
* [Uptobox](/uptobox/)
* [WebDAV](/webdav/)
* [Yandex Disk](/yandex/)
* [Zoho WorkDrive](/zoho/)
@@ -639,25 +640,54 @@ See `--copy-dest` and `--backup-dir`.
### --config=CONFIG_FILE ###
Specify the location of the rclone configuration file, to override
the default. E.g. `rclone config --config="rclone.conf"`.
The exact default is a bit complex to describe, due to changes
introduced through different versions of rclone while preserving
backwards compatibility, but in most cases it is as simple as:
- `%APPDATA%/rclone/rclone.conf` on Windows
- `~/.config/rclone/rclone.conf` on other
The complete logic is as follows: Rclone will look for an existing
configuration file in any of the following locations, in priority order:
1. `rclone.conf` (in program directory, where rclone executable is)
2. `%APPDATA%/rclone/rclone.conf` (only on Windows)
3. `$XDG_CONFIG_HOME/rclone/rclone.conf` (on all systems, including Windows)
4. `~/.config/rclone/rclone.conf` (see below for explanation of ~ symbol)
5. `~/.rclone.conf`
If no existing configuration file is found, then a new one will be created
in the following location:
- On Windows: Location 2 listed above, except in the unlikely event
that `APPDATA` is not defined, then location 4 is used instead.
- On Unix: Location 3 if `XDG_CONFIG_HOME` is defined, else location 4.
- Fallback to location 5 (on all OS), when the rclone directory cannot be
created; if additionally no home directory was found, then the path
`.rclone.conf` relative to the current working directory will be used as
a final resort.
The `~` symbol in the paths above represents the home directory of the current user
on any OS, and the value is defined as follows:
- On Windows: `%HOME%` if defined, else `%USERPROFILE%`, or else `%HOMEDRIVE%\%HOMEPATH%`.
- On Unix: `$HOME` if defined, else by looking up current user in OS-specific user database
(e.g. passwd file), or else use the result from shell command `cd && pwd`.
If you run `rclone config file` you will see where the default
location is for you.
The fact that an existing file `rclone.conf` in the same directory
as the rclone executable is always preferred means that it is easy
to run in "portable" mode by downloading the rclone executable to a
writable directory and then creating an empty file `rclone.conf` in the
same directory.
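
A minimal sketch of such a portable setup, assuming a Linux amd64 system (adjust the archive name for your platform):

```
cd /path/to/writable/dir
curl -OfsS https://downloads.rclone.org/rclone-current-linux-amd64.zip
unzip rclone-current-linux-amd64.zip && cd rclone-*-linux-amd64
touch rclone.conf         # empty config next to the executable takes priority
./rclone config file      # should now report the local rclone.conf
```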
If the location is set to empty string `""` or the special value
`/notfound`, or the os null device represented by value `NUL` on
If the location is set to empty string `""` or path to a file
with name `notfound`, or the os null device represented by value `NUL` on
Windows and `/dev/null` on Unix systems, then rclone will keep the
config file in memory only.
@@ -787,6 +817,27 @@ triggering follow-on actions if data was copied, or skipping if not.
NB: Enabling this option turns a usually non-fatal error into a potentially
fatal one - please check and adjust your scripts accordingly!
### --fs-cache-expire-duration=TIME
When using rclone via the API rclone caches created remotes for 5
minutes by default in the "fs cache". This means that if you do
repeated actions on the same remote then rclone won't have to build it
again from scratch, which makes it more efficient.
This flag sets the time that the remotes are cached for. If you set it
to `0` (or negative) then rclone won't cache the remotes at all.
Note that if you use some flags, eg `--backup-dir`, and if this is set
to `0`, rclone may build two remotes (one for the source or destination
and one for the `--backup-dir`) where it may have only built one
before.
### --fs-cache-expire-interval=TIME
This controls how often rclone checks for cached remotes to expire.
See the `--fs-cache-expire-duration` documentation above for more
info. The default is 60s, set to 0 to disable expiry.
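
For example, to shorten both the cache lifetime and the expiry check interval when running the remote control daemon (the values are illustrative):

```
rclone rcd --fs-cache-expire-duration 30s --fs-cache-expire-interval 10s
```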
### --header ###
Add an HTTP header for all transactions. The flag can be repeated to
@@ -1869,11 +1920,12 @@ Nevertheless, rclone will read any configuration file found
according to the rules described [above](https://rclone.org/docs/#config-config-file).
If an encrypted configuration file is found, this means you will be prompted for
password (unless using `--password-command`). To avoid this, you can bypass
the loading of the default configuration file by overriding the location,
e.g. with one of the documented special values for memory-only configuration:
```
rclone genautocomplete bash --config=""
```
Developer options
-----------------
@@ -2098,7 +2150,7 @@ mys3:
Note that if you want to create a remote using environment variables
you must create the `..._TYPE` variable as above.
Note also that now rclone has [connection strings](#connection-strings),
it is probably easier to use those instead, which makes the above example
rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:

View File

@@ -197,6 +197,21 @@ memory. It can be set smaller if you are tight on memory.
Impersonate this user when using a business account.
Note that if you want to use impersonate, you should make sure this
flag is set when running "rclone config" as this will cause rclone to
request the "members.read" scope which it wouldn't normally. This is
needed to look up a member's email address and convert it to the internal
ID that Dropbox uses in the API.
Using the "members.read" scope will require a Dropbox Team Admin
to approve during the OAuth flow.
You will have to use your own App (setting your own client_id and
client_secret) to use this option as currently rclone's default set of
permissions doesn't include "members.read". This can be added once
v1.55 or later is in use everywhere.
- Config: impersonate
- Env Var: RCLONE_DROPBOX_IMPERSONATE
- Type: string
@@ -270,6 +285,12 @@ dropbox:dir` will return the error `Failed to purge: There are too
many files involved in this operation`. As a work-around do an
`rclone delete dropbox:dir` followed by an `rclone rmdir dropbox:dir`.
When using `rclone link` you'll need to set `--expire` if using a
non-personal account otherwise the visibility may not be correct.
(Note that `--expire` isn't supported on personal accounts). See the
[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the
[dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75).
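
For example (the remote name and duration are placeholders), on a non-personal account:

```
rclone link --expire 1d dropbox:path/to/file
```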
### Get your own Dropbox App ID ###
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.

View File

@@ -27,10 +27,10 @@ These flags are available for every command.
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
--compare-dest stringArray Include additional comma separated server-side paths during comparison.
--config string Config file. (default "$HOME/.config/rclone/rclone.conf")
--contimeout duration Connect timeout (default 1m0s)
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination.
--cpuprofile string Write cpu profile to file
--cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
--delete-after When synchronizing, delete files on destination after transferring (default)
@@ -39,10 +39,10 @@ These flags are available for every command.
--delete-excluded Delete files on dest excluded from sync
--disable string Disable a comma separated list of features. Use help to see a list.
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections. Can be value or names, eg. CS1, LE, DF, AF21.
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
--exclude stringArray Exclude files matching pattern
--exclude-from stringArray Read exclude patterns from file (use - to read from stdin)
@@ -53,6 +53,8 @@ These flags are available for every command.
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file (use - to read from stdin)
--fs-cache-expire-duration duration cache remotes for this long (0 to disable caching) (default 5m0s)
--fs-cache-expire-interval duration interval to check for expired remotes (default 1m0s)
--header stringArray Set HTTP header for all transactions
--header-download stringArray Set HTTP header for download transactions
--header-upload stringArray Set HTTP header for upload transactions
@@ -151,7 +153,7 @@ These flags are available for every command.
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.55.0")
-v, --verbose count Print lots more stuff (repeat for more)
```
@@ -184,6 +186,7 @@ and may be set in the config file.
--azureblob-msi-client-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_object_id or msi_mi_res_id specified.
--azureblob-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_object_id specified.
--azureblob-msi-object-id string Object ID of the user-assigned MSI to use, if any. Leave blank if msi_client_id or msi_mi_res_id specified.
--azureblob-public-access string Public access level of a container: blob, container.
--azureblob-sas-url string SAS URL for container level access only
--azureblob-service-principal-file string Path to file containing credentials for use with a service principal.
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256MB). (Deprecated)
@@ -247,6 +250,7 @@ and may be set in the config file.
-L, --copy-links Follow symlinks and copy the pointed to item.
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
--crypt-filename-encryption string How to encrypt the filenames. (default "standard")
--crypt-no-data-encryption Option to either encrypt file data or leave it unencrypted.
--crypt-password string Password or pass phrase for encryption. (obscured)
--crypt-password2 string Password or pass phrase for salt. Optional but recommended. (obscured)
--crypt-remote string Remote to encrypt/decrypt.
@@ -282,7 +286,7 @@ and may be set in the config file.
--drive-starred-only Only show files that are starred.
--drive-stop-on-download-limit Make download limit errors be fatal
--drive-stop-on-upload-limit Make upload limit errors be fatal
--drive-team-drive string ID of the Shared Drive (Team Drive)
--drive-token string OAuth Access Token as a JSON blob.
--drive-token-url string Token server url.
--drive-trashed-only Only show files that are in the trash.
@@ -311,12 +315,14 @@ and may be set in the config file.
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
--ftp-close-timeout Duration Maximum time to wait for a response to close. (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-pass string FTP password (obscured)
--ftp-port string FTP port, leave blank to use default (21)
@@ -378,6 +384,7 @@ and may be set in the config file.
--local-case-sensitive Force the filesystem to report itself as case sensitive.
--local-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
--local-no-sparse Disable sparse files for multi-thread downloads
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
@@ -408,6 +415,7 @@ and may be set in the config file.
--onedrive-link-password string Set the password for links created by the link command.
--onedrive-link-scope string Set the scope of the links created by the link command. (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command. (default "view")
--onedrive-list-chunk int Size of listing chunk. (default 1000)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive. (default "global")
--onedrive-server-side-across-configs Allow server-side operations (e.g. copy) to work across different onedrive configs.
@@ -482,8 +490,10 @@ and may be set in the config file.
--seafile-url string URL of seafile host to connect to
--seafile-user string User name (usually email address)
--sftp-ask-password Allow asking for SFTP password when needed.
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
--sftp-host string SSH host to connect to
--sftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
--sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file. (obscured)
--sftp-key-pem string Raw PEM-encoded private key, If specified, will override key_file parameter.
@@ -553,9 +563,10 @@ and may be set in the config file.
--union-upstreams string List of space separated upstreams.
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-encoding string This sets the encoding for the backend.
--webdav-pass string Password. (obscured)
--webdav-url string URL of http host to connect to
--webdav-user string User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
--webdav-vendor string Name of the Webdav site/service/software you are using
--yandex-auth-url string Auth server URL.
--yandex-client-id string OAuth Client Id
@@ -563,6 +574,11 @@ and may be set in the config file.
--yandex-encoding MultiEncoder This sets the encoding for the backend. (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-token string OAuth Access Token as a JSON blob.
--yandex-token-url string Token server url.
--zoho-auth-url string Auth server URL.
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-encoding MultiEncoder This sets the encoding for the backend. (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to. You'll have to use the region your organization is registered in.
--zoho-token string OAuth Access Token as a JSON blob.
--zoho-token-url string Token server url.
```

View File

@@ -223,6 +223,30 @@ Disable using MLSD even if server advertises support
- Type: bool
- Default: false
#### --ftp-idle-timeout
Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
- Config: idle_timeout
- Env Var: RCLONE_FTP_IDLE_TIMEOUT
- Type: Duration
- Default: 1m0s
#### --ftp-close-timeout
Maximum time to wait for a response to close.
- Config: close_timeout
- Env Var: RCLONE_FTP_CLOSE_TIMEOUT
- Type: Duration
- Default: 1m0s
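
For example, to drop pooled connections more aggressively (the values are illustrative):

```
rclone lsd remote: --ftp-idle-timeout 30s --ftp-close-timeout 2m
```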
#### --ftp-encoding
This sets the encoding for the backend.

View File

@@ -12,6 +12,7 @@ Rclone is a Go program and comes as a single binary file.
* [Download](/downloads/) the relevant binary.
* Extract the `rclone` or `rclone.exe` binary from the archive
* Run `rclone config` to setup. See [rclone config docs](/docs/) for more details.
* Optionally configure [automatic execution](#autostart).
See below for some expanded Linux / macOS instructions.
@@ -226,3 +227,147 @@ Instructions
roles:
- rclone
```
# Autostart #
After installing and configuring rclone, as described above, you are ready to use rclone
as an interactive command line utility. If your goal is to perform *periodic* operations,
such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will probably want
to configure your rclone command in your operating system's scheduler. If you need to
expose *service*-like features, such as [remote control](https://rclone.org/rc/),
[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
command always running in the background, and configuring it to run in a service infrastructure
may be a better option. Below are some alternatives on how to achieve this on
different operating systems.
NOTE: Before setting up autorun it is highly recommended that you have tested your command
manually from a Command Prompt first.
## Autostart on Windows ##
The most relevant alternatives for autostart on Windows are:
- Run at user log on using the Startup folder
- Run at user log on, at system startup or at schedule using Task Scheduler
- Run at system startup using Windows service
### Running in background
Rclone is a console application, so if not starting from an existing Command Prompt,
e.g. when starting rclone.exe from a shortcut, it will open a Command Prompt window.
When configuring rclone to run from Task Scheduler or as a Windows service, you are able
to set it to run hidden in the background. From rclone version 1.54 you can also make it
run hidden from anywhere by adding option `--no-console` (it may still flash briefly
when the program starts). Since rclone normally writes information and any error
messages to the console, you must redirect this to a file to be able to see it.
Rclone has a built-in option `--log-file` for that.
Example command to run a sync in background:
```
c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt
```
### User account
As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even the
account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on
the system. Both scheduled task and Windows service can be used to achieve this.
NOTE: Remember that when rclone runs as the `SYSTEM` user, the user profile
that it sees will not be yours. This means that if you normally run rclone with
configuration file in the default location, to be able to use the same configuration
when running as the system user you must explicitly tell rclone where to find
it with the [`--config`](https://rclone.org/docs/#config-config-file) option,
or else it will look in the system user's profile path (`C:\Windows\System32\config\systemprofile`).
To test your command manually from a Command Prompt, you can run it with
the [PsExec](https://docs.microsoft.com/en-us/sysinternals/downloads/psexec)
utility from Microsoft's Sysinternals suite, which takes option `-s` to
execute commands as the `SYSTEM` user.
### Start from Startup folder ###
To quickly execute an rclone command you can simply create a standard
Windows Explorer shortcut for the complete rclone command you want to run. If you
store this shortcut in the special "Startup" start-menu folder, Windows will
automatically run it at login. To open this folder in Windows Explorer,
enter path `%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup`,
or `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp` if you want
the command to start for *every* user that logs in.
This is the easiest approach to autostarting of rclone, but it offers no
functionality to set it to run as a different user, or to set conditions or
actions on certain events. Setting up a scheduled task as described below
will often give you better results.
### Start from Task Scheduler ###
Task Scheduler is an administrative tool built into Windows, and it can be used to
configure rclone to be started automatically in a highly configurable way, e.g.
periodically on a schedule, on user log on, or at system startup. It can
be configured to run as the current user, or, for a mount command that needs to
be available to all users, it can run as the `SYSTEM` user.
For technical information, see
https://docs.microsoft.com/windows/win32/taskschd/task-scheduler-start-page.
### Run as service ###
For running rclone at system startup, you can create a Windows service that executes
your rclone command, as an alternative to a scheduled task configured to run at startup.
#### Mount command built-in service integration ####
For mount commands, Rclone has a built-in Windows service integration via the third party
WinFsp library it uses. Registering as a regular Windows service is easy, as you just have to
execute the built-in PowerShell command `New-Service` (requires administrative privileges).
Example of a PowerShell command that creates a Windows service for mounting
some `remote:/files` as drive letter `X:`, for *all* users (service will be running as the
local system account):
```
New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt'
```
The [WinFsp service infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Architecture)
supports incorporating services for file system implementations, such as rclone,
into its own launcher service, as a kind of "child services". This has the additional
advantage that it also implements a network provider that integrates into
Windows standard methods for managing network drives. This is currently not
officially supported by Rclone, but with WinFsp version 2019.3 B2 / v1.5B2 or later
it should be possible through path rewriting as described [here](https://github.com/rclone/rclone/issues/3340).
#### Third party service integration ####
To create a Windows service running any rclone command, the excellent third party utility
[NSSM](http://nssm.cc), the "Non-Sucking Service Manager", can be used.
It includes some advanced features such as adjusting process priority, defining
process environment variables, redirecting anything written to stdout to a file, and
customizing the response to different exit codes, with a GUI to configure everything
(although it can also be used from the command line).
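
A hypothetical sketch of registering such a service with NSSM (the service name, paths and remote are placeholders; see the NSSM documentation for the exact syntax):

```
nssm install Rclone c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf
```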
There are also several other alternatives. To mention one more,
[WinSW](https://github.com/winsw/winsw), "Windows Service Wrapper", is worth checking out.
It requires .NET Framework, but it is preinstalled on newer versions of Windows, and it
also provides alternative standalone distributions which include the necessary runtime (.NET 5).
WinSW is a command-line only utility, where you have to manually create an XML file with
the service configuration. This may be a drawback for some, but it can also be an advantage
as it is easy to back up and re-use the configuration
settings, without having to go through manual steps in a GUI. One thing to note is that
by default it does not restart the service on error; you have to explicitly enable this
in the configuration file (via the "onfailure" parameter).
## Autostart on Linux
### Start as a service
To always run rclone in the background, relevant for mount commands etc.,
you can use systemd to set up rclone as a system or user service. Running as a
system service ensures that it is run at startup even if the user it is running as
has no active session. Running rclone as a user service ensures that it only
starts after the configured user has logged into the system.
### Run periodically from cron
To run a periodic command, such as a copy/sync, you can set up a cron job.
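
For example, a crontab entry along these lines (the paths and remote are placeholders) would run a sync every hour, logging to a file:

```
0 * * * * /usr/bin/rclone sync /home/user/files remote:backup --log-file /home/user/rclone-sync.log
```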

View File

@@ -53,9 +53,9 @@ export XDG_CONFIG_HOME=config
#check installed version of rclone to determine if update is necessary
version=$(rclone --version 2>>errors | head -n 1)
if [ -z "$install_beta" ]; then
current_version=$(curl -fsS https://downloads.rclone.org/version.txt)
else
current_version=$(curl -fsS https://beta.rclone.org/version.txt)
fi
if [ "$version" = "$current_version" ]; then
@@ -101,12 +101,12 @@ case "$OS_type" in
i?86|x86)
OS_type='386'
;;
aarch64|arm64)
OS_type='arm64'
;;
arm*)
OS_type='arm'
;;
*)
echo 'OS type not supported'
exit 2
@@ -123,7 +123,7 @@ else
rclone_zip="rclone-beta-latest-${OS}-${OS_type}.zip"
fi
curl -OfsS "$download_link"
unzip_dir="tmp_unzip_dir_for_rclone"
# there should be an entry in this switch for each element of unzip_tools_list
case "$unzip_tool" in

View File

@@ -172,7 +172,7 @@ like symlinks under Windows).
If you supply `--copy-links` or `-L` then rclone will follow the
symlink and copy the pointed to file or directory. Note that this
flag is incompatible with `--links` / `-l`.
This flag applies to all commands.
@@ -320,9 +320,9 @@ filesystem.
where it isn't supported (e.g. Windows) it will be ignored.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/local/local.go then run make backenddocs" >}}
### Advanced Options
Here are the advanced options specific to local (Local Disk).
#### --local-nounc
@@ -336,10 +336,6 @@ Disable UNC (long path names) conversion on Windows
- "true"
- Disables long file names
#### --copy-links / -L
Follow symlinks and copy the pointed to item.

View File

@@ -325,6 +325,15 @@ fall back to normal copy (which will be slightly slower).
- Type: bool
- Default: false
#### --onedrive-list-chunk
Size of listing chunk.
- Config: list_chunk
- Env Var: RCLONE_ONEDRIVE_LIST_CHUNK
- Type: int
- Default: 1000
#### --onedrive-no-versions
Remove all versions on modifying operations

View File

@@ -48,6 +48,7 @@ Here is an overview of the major features of each cloud storage system.
| SFTP | MD5, SHA1 ² | Yes | Depends | No | - |
| SugarSync | - | No | No | No | - |
| Tardigrade | - | Yes | No | No | - |
| Uptobox | - | No | No | Yes | - |
| WebDAV | MD5, SHA1 ³ | Yes ⁴ | Depends | No | - |
| Yandex Disk | MD5 | Yes | No | No | R |
| Zoho WorkDrive | - | No | No | No | - |
@@ -361,6 +362,7 @@ upon backend specific capabilities.
| SFTP | No | No | Yes | Yes | No | No | Yes | No | Yes | Yes |
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes |
| Tardigrade | Yes † | No | No | No | No | Yes | Yes | No | No | No |
| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No |
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | Yes | Yes |
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes | Yes |
| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | Yes | Yes |


@@ -378,6 +378,55 @@ call and taken by the [options/set](#options-set) calls as well as the
- `BandwidthSpec` - this will be set and returned as a string, eg
"1M".
## Specifying remotes to work on
Remotes are specified with the `fs=`, `srcFs=`, `dstFs=`
parameters depending on the command being used.
The parameters can be a string as per the rest of rclone, eg
`s3:bucket/path` or `:sftp:/my/dir`. They can also be specified as
JSON blobs.
If specifying a JSON blob it should be an object mapping strings to
strings. These values will be used to configure the remote. There are
3 special values which may be set:
- `type` - set to `type` to specify a remote called `:type:`
- `_name` - set to `name` to specify a remote called `name:`
- `_root` - sets the root of the remote - may be empty
One of `_name` or `type` should normally be set. If the `local`
backend is desired then `type` should be set to `local`. If `_root`
isn't specified then it defaults to the root of the remote.
For example, this JSON is equivalent to `remote:/tmp`
```
{
"_name": "remote",
"_path": "/tmp"
}
```
And this is equivalent to `:sftp,host='example.com':/tmp`
```
{
"type": "sftp",
"host": "example.com",
"_path": "/tmp"
}
```
And this is equivalent to `/tmp/dir`
```
{
type = "local",
_ path = "/tmp/dir"
}
```
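As a sketch of how such a blob might be supplied (assuming a running rc server and valid credentials), the `fs` parameter can be sent as an object inside a JSON request body:

```
rclone rc operations/list --json \
    '{"fs": {"type": "sftp", "host": "example.com", "_root": "/my/dir"}, "remote": ""}'
```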
## Supported commands
{{< rem autogenerated start "- run make rcdocs - don't edit here" >}}
### backend/command: Runs a backend command. {#backend-command}
@@ -716,18 +765,22 @@ Returns the following values:
```
{
"speed": average speed in bytes/sec since start of the process,
"bytes": total transferred bytes since the start of the process,
"bytes": total transferred bytes since the start of the group,
"checks": number of files checked,
"deletes" : number of files deleted,
"elapsedTime": time in floating point seconds since rclone was started,
"errors": number of errors,
"fatalError": whether there has been at least one FatalError,
"retryError": whether there has been at least one non-NoRetryError,
"checks": number of checked files,
"transfers": number of transferred files,
"deletes" : number of deleted files,
"renames" : number of renamed files,
"eta": estimated time in seconds until the group completes,
"fatalError": boolean whether there has been at least one fatal error,
"lastError": last error string,
"renames" : number of files renamed,
"retryError": boolean showing whether there has been at least one non-NoRetryError,
"speed": average speed in bytes/sec since start of the group,
"totalBytes": total number of bytes in the group,
"totalChecks": total number of checks in the group,
"totalTransfers": total number of transfers in the group,
"transferTime" : total time spent on running jobs,
"elapsedTime": time in seconds since the start of the process,
"lastError": last occurred error,
"transfers": number of transferred files,
"transferring": an array of currently active file transfers:
[
{
@@ -808,6 +861,8 @@ This shows the current version of go and the go runtime
- os - OS in use as according to Go
- arch - cpu architecture in use according to Go
- goVersion - version of Go runtime in use
- linking - type of rclone executable (static or dynamic)
- goTags - space separated build tags or "none"
### debug/set-block-profile-rate: Set runtime.SetBlockProfileRate for blocking profiling. {#debug-set-block-profile-rate}
@@ -847,6 +902,26 @@ Results
- previousRate - int
### fscache/clear: Clear the Fs cache. {#fscache-clear}
This clears the fs cache. This is where remotes created from backends
are cached for a short while to make repeated rc calls more efficient.
If you change the parameters of a backend then you may want to call
this to clear an existing remote out of the cache before re-creating
it.
**Authentication is required for this call.**
### fscache/entries: Returns the number of entries in the fs cache. {#fscache-entries}
This returns the number of entries in the fs cache.
Returns
- entries - number of items in the cache
**Authentication is required for this call.**
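A typical interaction with these two calls might look like this sketch (the output shown in comments is illustrative):

```
rclone rc fscache/entries
# {"entries": 2}
rclone rc fscache/clear
rclone rc fscache/entries
# {"entries": 0}
```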
### job/list: Lists the IDs of the running jobs {#job-list}
Parameters - None
@@ -1207,6 +1282,7 @@ This takes the following parameters
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "dir"
- each part in body represents a file to be uploaded
See the [uploadfile command](/commands/rclone_uploadfile/) for more information on the above.
**Authentication is required for this call.**
@@ -1215,11 +1291,31 @@ This takes the following parameters
Returns
- options - a list of the options block names
### options/get: Get all the options {#options-get}
### options/get: Get all the global options {#options-get}
Returns an object where keys are option block names and values are objects
containing the current option values.
Note that these are the global options which are unaffected by use of
the _config and _filter parameters. If you wish to read the parameters
set in _config then use options/config and for _filter use options/filter.
This shows the internal names of the options within rclone, which should
map to the external options easily, with a few exceptions.
### options/local: Get the currently active config for this call {#options-local}
Returns an object with the keys "config" and "filter".
The "config" key contains the local config and the "filter" key contains
the local filters.
Note that these are the local options specific to this rc call. If
_config was not supplied then they will be the global options.
Likewise with "_filter".
This call is mostly useful for seeing if _config and _filter passing
is working.
This shows the internal names of the options within rclone, which should
map to the external options easily, with a few exceptions.
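For example, the following sketch (the `DryRun` override is just an illustration) should echo the supplied value back in the "config" key:

```
rclone rc options/local --json '{"_config": {"DryRun": true}}'
```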
@@ -1372,6 +1468,7 @@ This takes the following parameters
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
See the [copy command](/commands/rclone_copy/) for more information on the above.
@@ -1384,6 +1481,7 @@ This takes the following parameters
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
- deleteEmptySrcDirs - delete empty src directories if set
@@ -1397,6 +1495,7 @@ This takes the following parameters
- srcFs - a remote name string e.g. "drive:src" for the source
- dstFs - a remote name string e.g. "drive:dst" for the destination
- createEmptySrcDirs - create empty src directories on destination if set
See the [sync command](/commands/rclone_sync/) for more information on the above.


@@ -21,7 +21,10 @@ SSH installations.
Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
`remote:` refers to the user's home directory.
`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
would list the home directory of the user configured in the rclone remote config
(e.g. `/home/sftpuser`), whereas `rclone lsd remote:/` would list the root
directory of the remote machine (i.e. `/`).
"Note that some SFTP servers will need the leading / - Synology is a
good example of this. rsync.net, on the other hand, requires users to
@@ -84,6 +87,10 @@ See all directories in the home directory
rclone lsd remote:
See all directories in the root directory
rclone lsd remote:/
Make a new directory
rclone mkdir remote:path/to/directory
@@ -97,6 +104,11 @@ excess files in the directory.
rclone sync -i /home/local/directory remote:directory
Mount the remote path `/srv/www-data/` to the local path
`/mnt/www-data`
rclone mount remote:/srv/www-data/ /mnt/www-data
### SSH Authentication ###
The SFTP remote supports three authentication methods:
@@ -496,6 +508,44 @@ any given time.
- Type: bool
- Default: false
#### --sftp-disable-concurrent-reads
If set don't use concurrent reads
Normally concurrent reads are safe to use and not using them will
degrade performance, so this option is disabled by default.
Some servers limit the number of times a file can be
downloaded. Using concurrent reads can trigger this limit, so if you
have a server which returns
Failed to copy: file does not exist
Then you may need to enable this flag.
If concurrent reads are disabled, the use_fstat option is ignored.
- Config: disable_concurrent_reads
- Env Var: RCLONE_SFTP_DISABLE_CONCURRENT_READS
- Type: bool
- Default: false
#### --sftp-idle-timeout
Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
- Config: idle_timeout
- Env Var: RCLONE_SFTP_IDLE_TIMEOUT
- Type: Duration
- Default: 1m0s
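Both of the above are ordinary backend options, so they can be set on the command line; a sketch with an assumed remote name and values:

```
rclone copy remote:path /local/path \
    --sftp-disable-concurrent-reads \
    --sftp-idle-timeout 30s
```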
{{< rem autogenerated options stop >}}
### Limitations ###

docs/content/uptobox.md

@@ -0,0 +1,141 @@
---
title: "Uptobox"
description: "Rclone docs for Uptobox"
---
{{< icon "fa fa-archive" >}} Uptobox
-----------------------------------------
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional
cloud storage provider and is therefore not suitable for long-term storage.
Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Setup
To configure an Uptobox backend you'll need your personal API token. You'll find it in your
[account settings](https://uptobox.com/my_account).
### Example
Here is an example of how to make a remote called `remote` with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
```
Current remotes:
Name Type
==== ====
TestUptobox uptobox
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> uptobox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
37 / Uptobox
\ "uptobox"
[...]
Storage> uptobox
** See help for uptobox backend at: https://rclone.org/uptobox/ **
Your API Key, get it from https://uptobox.com/my_account
Enter a string value. Press Enter for the default ("").
api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[uptobox]
type = uptobox
api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
```
Once configured you can then use `rclone` like this,
List directories in top level of your Uptobox
rclone lsd remote:
List all the files in your Uptobox
rclone ls remote:
To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
### Modified time and hashes
Uptobox supports neither modified times nor checksums.
#### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| " | 0x22 | |
| ` | 0x41 | |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go then run make backenddocs" >}}
### Standard Options
Here are the standard options specific to uptobox (Uptobox).
#### --uptobox-api-key
Your API Key, get it from https://uptobox.com/my_account
- Config: api_key
- Env Var: RCLONE_UPTOBOX_API_KEY
- Type: string
- Default: ""
### Advanced Options
Here are the advanced options specific to uptobox (Uptobox).
#### --uptobox-encoding
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
- Type: MultiEncoder
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
### Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
`rclone about` is not supported by this backend. An overview of used space can,
however, be seen in the Uptobox web interface.


@@ -137,23 +137,21 @@ Name of the Webdav site/service/software you are using
- "owncloud"
- Owncloud
- "sharepoint"
- Sharepoint
- Sharepoint Online, authenticated by Microsoft account.
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication
- Sharepoint with NTLM authentication. Usually self-hosted or on-premises.
- "other"
- Other site/service or software
#### --webdav-user
User name
User name. In case NTLM authentication is used, the username should be in the format 'Domain\User'.
- Config: user
- Env Var: RCLONE_WEBDAV_USER
- Type: string
- Default: ""
In case vendor mode `sharepoint-ntlm` is used, the user name is in the form `DOMAIN\user`
#### --webdav-pass
Password.
@@ -187,6 +185,19 @@ Command to run to get a bearer token
- Type: string
- Default: ""
#### --webdav-encoding
This sets the encoding for the backend.
See: the [encoding section in the overview](/overview/#encoding) for more info.
Default encoding is Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8 for sharepoint-ntlm or identity otherwise.
- Config: encoding
- Env Var: RCLONE_WEBDAV_ENCODING
- Type: string
- Default: ""
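Putting the `sharepoint-ntlm` notes above together, a finished config section might look like this sketch (URL, domain and user name are placeholders):

```
[sharepoint]
type = webdav
url = https://sharepoint.example.com/sites/mysite
vendor = sharepoint-ntlm
user = MYDOMAIN\myuser
pass = *** ENCRYPTED ***
```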
{{< rem autogenerated options stop >}}
## Provider notes ##


@@ -128,6 +128,26 @@ from filenames during upload.
Here are the standard options specific to zoho (Zoho).
#### --zoho-client-id
OAuth Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_ZOHO_CLIENT_ID
- Type: string
- Default: ""
#### --zoho-client-secret
OAuth Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_ZOHO_CLIENT_SECRET
- Type: string
- Default: ""
#### --zoho-region
Zoho region to connect to. You'll have to use the region your organization is registered in.
@@ -150,6 +170,35 @@ Zoho region to connect to. You'll have to use the region you organization is reg
Here are the advanced options specific to zoho (Zoho).
#### --zoho-token
OAuth Access Token as a JSON blob.
- Config: token
- Env Var: RCLONE_ZOHO_TOKEN
- Type: string
- Default: ""
#### --zoho-auth-url
Auth server URL.
Leave blank to use the provider defaults.
- Config: auth_url
- Env Var: RCLONE_ZOHO_AUTH_URL
- Type: string
- Default: ""
#### --zoho-token-url
Token server URL.
Leave blank to use the provider defaults.
- Config: token_url
- Env Var: RCLONE_ZOHO_TOKEN_URL
- Type: string
- Default: ""
#### --zoho-encoding
This sets the encoding for the backend.


@@ -98,6 +98,7 @@
<a class="dropdown-item" href="/sftp/"><i class="fa fa-server"></i> SFTP</a>
<a class="dropdown-item" href="/sugarsync/"><i class="fas fa-dove"></i> SugarSync</a>
<a class="dropdown-item" href="/tardigrade/"><i class="fas fa-dove"></i> Tardigrade</a>
<a class="dropdown-item" href="/uptobox/"><i class="fa fa-archive"></i> Uptobox</a>
<a class="dropdown-item" href="/union/"><i class="fa fa-link"></i> Union (merge backends)</a>
<a class="dropdown-item" href="/webdav/"><i class="fa fa-server"></i> WebDAV</a>
<a class="dropdown-item" href="/yandex/"><i class="fa fa-space-shuttle"></i> Yandex Disk</a>


@@ -1 +1 @@
v1.55.0
v1.56.0

fs/cache/cache.go

@@ -12,14 +12,26 @@ import (
)
var (
c = cache.New()
once sync.Once // creation
c *cache.Cache
mu sync.Mutex // mutex to protect remap
remap = map[string]string{} // map user supplied names to canonical names
)
// Create the cache just once
func createOnFirstUse() {
once.Do(func() {
ci := fs.GetConfig(context.Background())
c = cache.New()
c.SetExpireDuration(ci.FsCacheExpireDuration)
c.SetExpireInterval(ci.FsCacheExpireInterval)
})
}
// Canonicalize looks up fsString in the mapping from user supplied
// names to canonical names and return the canonical form
func Canonicalize(fsString string) string {
createOnFirstUse()
mu.Lock()
canonicalName, ok := remap[fsString]
mu.Unlock()
@@ -43,10 +55,11 @@ func addMapping(fsString, canonicalName string) {
// GetFn gets an fs.Fs named fsString either from the cache or creates
// it afresh with the create function
func GetFn(ctx context.Context, fsString string, create func(ctx context.Context, fsString string) (fs.Fs, error)) (f fs.Fs, err error) {
fsString = Canonicalize(fsString)
createOnFirstUse()
canonicalFsString := Canonicalize(fsString)
created := false
value, err := c.Get(fsString, func(fsString string) (f interface{}, ok bool, err error) {
f, err = create(ctx, fsString)
value, err := c.Get(canonicalFsString, func(canonicalFsString string) (f interface{}, ok bool, err error) {
f, err = create(ctx, fsString) // always create the backend with the original non-canonicalised string
ok = err == nil || err == fs.ErrorIsFile
created = ok
return f, ok, err
@@ -58,19 +71,19 @@ func GetFn(ctx context.Context, fsString string, create func(ctx context.Context
// Check we stored the Fs at the canonical name
if created {
canonicalName := fs.ConfigString(f)
if canonicalName != fsString {
if canonicalName != canonicalFsString {
// Note that if err == fs.ErrorIsFile at this moment
// then we can't rename the remote as it will have the
// wrong error status, we need to add a new one.
if err == nil {
fs.Debugf(nil, "fs cache: renaming cache item %q to be canonical %q", fsString, canonicalName)
value, found := c.Rename(fsString, canonicalName)
fs.Debugf(nil, "fs cache: renaming cache item %q to be canonical %q", canonicalFsString, canonicalName)
value, found := c.Rename(canonicalFsString, canonicalName)
if found {
f = value.(fs.Fs)
}
addMapping(fsString, canonicalName)
addMapping(canonicalFsString, canonicalName)
} else {
fs.Debugf(nil, "fs cache: adding new entry for parent of %q, %q", fsString, canonicalName)
fs.Debugf(nil, "fs cache: adding new entry for parent of %q, %q", canonicalFsString, canonicalName)
Put(canonicalName, f)
}
}
@@ -80,6 +93,7 @@ func GetFn(ctx context.Context, fsString string, create func(ctx context.Context
// Pin f into the cache until Unpin is called
func Pin(f fs.Fs) {
createOnFirstUse()
c.Pin(fs.ConfigString(f))
}
@@ -97,6 +111,7 @@ func PinUntilFinalized(f fs.Fs, x interface{}) {
// Unpin f from the cache
func Unpin(f fs.Fs) {
createOnFirstUse()
c.Pin(fs.ConfigString(f))
}
@@ -127,6 +142,7 @@ func GetArr(ctx context.Context, fsStrings []string) (f []fs.Fs, err error) {
// Put puts an fs.Fs named fsString into the cache
func Put(fsString string, f fs.Fs) {
createOnFirstUse()
canonicalName := fs.ConfigString(f)
c.Put(canonicalName, f)
addMapping(fsString, canonicalName)
@@ -136,15 +152,18 @@ func Put(fsString string, f fs.Fs) {
//
// Returns number of entries deleted
func ClearConfig(name string) (deleted int) {
createOnFirstUse()
return c.DeletePrefix(name + ":")
}
// Clear removes everything from the cache
func Clear() {
createOnFirstUse()
c.Clear()
}
// Entries returns the number of entries in the cache
func Entries() int {
createOnFirstUse()
return c.Entries()
}


@@ -33,7 +33,7 @@ func mockNewFs(t *testing.T) (func(), func(ctx context.Context, path string) (fs
panic("unreachable")
}
cleanup := func() {
c.Clear()
Clear()
}
return cleanup, create
}
@@ -42,12 +42,12 @@ func TestGet(t *testing.T) {
cleanup, create := mockNewFs(t)
defer cleanup()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
f, err := GetFn(context.Background(), "mock:/", create)
require.NoError(t, err)
assert.Equal(t, 1, c.Entries())
assert.Equal(t, 1, Entries())
f2, err := GetFn(context.Background(), "mock:/", create)
require.NoError(t, err)
@@ -59,13 +59,13 @@ func TestGetFile(t *testing.T) {
cleanup, create := mockNewFs(t)
defer cleanup()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
f, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, f)
assert.Equal(t, 2, c.Entries())
assert.Equal(t, 2, Entries())
f2, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
@@ -85,13 +85,13 @@ func TestGetFile2(t *testing.T) {
cleanup, create := mockNewFs(t)
defer cleanup()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
f, err := GetFn(context.Background(), "mock:file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, f)
assert.Equal(t, 2, c.Entries())
assert.Equal(t, 2, Entries())
f2, err := GetFn(context.Background(), "mock:file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
@@ -111,13 +111,13 @@ func TestGetError(t *testing.T) {
cleanup, create := mockNewFs(t)
defer cleanup()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
f, err := GetFn(context.Background(), "mock:/error", create)
require.Equal(t, errSentinel, err)
require.Equal(t, nil, f)
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
}
func TestPut(t *testing.T) {
@@ -126,17 +126,17 @@ func TestPut(t *testing.T) {
f := mockfs.NewFs(context.Background(), "mock", "/alien")
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
Put("mock:/alien", f)
assert.Equal(t, 1, c.Entries())
assert.Equal(t, 1, Entries())
fNew, err := GetFn(context.Background(), "mock:/alien", create)
require.NoError(t, err)
require.Equal(t, f, fNew)
assert.Equal(t, 1, c.Entries())
assert.Equal(t, 1, Entries())
// Check canonicalisation
@@ -146,7 +146,7 @@ func TestPut(t *testing.T) {
require.NoError(t, err)
require.Equal(t, f, fNew)
assert.Equal(t, 1, c.Entries())
assert.Equal(t, 1, Entries())
}
@@ -170,7 +170,7 @@ func TestClearConfig(t *testing.T) {
cleanup, create := mockNewFs(t)
defer cleanup()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
_, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
@@ -190,11 +190,11 @@ func TestClear(t *testing.T) {
_, err := GetFn(context.Background(), "mock:/", create)
require.NoError(t, err)
assert.Equal(t, 1, c.Entries())
assert.Equal(t, 1, Entries())
Clear()
assert.Equal(t, 0, c.Entries())
assert.Equal(t, 0, Entries())
}
func TestEntries(t *testing.T) {


@@ -123,6 +123,8 @@ type ConfigInfo struct {
RefreshTimes bool
NoConsole bool
TrafficClass uint8
FsCacheExpireDuration time.Duration
FsCacheExpireInterval time.Duration
}
// NewConfig creates a new config with everything set to the default
@@ -160,6 +162,8 @@ func NewConfig() *ConfigInfo {
c.MultiThreadStreams = 4
c.TrackRenamesStrategy = "hash"
c.FsCacheExpireDuration = 300 * time.Second
c.FsCacheExpireInterval = 60 * time.Second
return c
}
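Assuming these fields are wired up to flags following rclone's usual field-to-flag naming, the fs cache expiry could then be tuned like this sketch:

```
rclone rcd --fs-cache-expire-duration 10m --fs-cache-expire-interval 2m
```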


@@ -2,9 +2,11 @@ package config
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
)
// Authorize is for remote authorization of headless machines.
@@ -16,33 +18,61 @@ import (
func Authorize(ctx context.Context, args []string, noAutoBrowser bool) error {
ctx = suppressConfirm(ctx)
switch len(args) {
case 1, 3:
case 1, 2, 3:
default:
return errors.Errorf("invalid number of arguments: %d", len(args))
}
newType := args[0]
f := fs.MustFind(newType)
if f.Config == nil {
return errors.Errorf("can't authorize fs %q", newType)
Type := args[0] // FIXME could read this from input
ri, err := fs.Find(Type)
if err != nil {
return err
}
if ri.Config == nil {
return errors.Errorf("can't authorize fs %q", Type)
}
// Name used for temporary fs
name := "**temp-fs**"
// Make sure we delete it
defer DeleteRemote(name)
// Config map for remote
inM := configmap.Simple{}
// Indicate that we are running rclone authorize
Data.SetValue(name, ConfigAuthorize, "true")
inM[ConfigAuthorize] = "true"
if noAutoBrowser {
Data.SetValue(name, ConfigAuthNoBrowser, "true")
inM[ConfigAuthNoBrowser] = "true"
}
if len(args) == 3 {
Data.SetValue(name, ConfigClientID, args[1])
Data.SetValue(name, ConfigClientSecret, args[2])
// Add extra parameters if supplied
if len(args) == 2 {
err := inM.Decode(args[1])
if err != nil {
return err
}
} else if len(args) == 3 {
inM[ConfigClientID] = args[1]
inM[ConfigClientSecret] = args[2]
}
m := fs.ConfigMap(f, name, nil)
f.Config(ctx, name, m)
// Name used for temporary remote
name := "**temp-fs**"
m := fs.ConfigMap(ri, name, inM)
outM := configmap.Simple{}
m.ClearSetters()
m.AddSetter(outM)
m.AddGetter(outM, configmap.PriorityNormal)
ri.Config(ctx, name, m)
// Print the code for the user to paste
out := outM["token"]
// If received a config blob, then return one
if len(args) == 2 {
out, err = outM.Encode()
if err != nil {
return err
}
}
fmt.Printf("Paste the following into your remote machine --->\n%s\n<---End paste\n", out)
return nil
}
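With this change `rclone authorize` accepts one, two or three arguments; a sketch of the three forms (the base64 blob is a placeholder for the encoded config map produced on the remote end):

```
# type only
rclone authorize "onedrive"

# type plus client_id and client_secret
rclone authorize "onedrive" "client_id_xxx" "client_secret_xxx"

# type plus an encoded config blob (the new two-argument form)
rclone authorize "onedrive" "eyJjbGllbnRfaWQiOiJ4eHgifQ=="
```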
