mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits

123 Commits

Author SHA1 Message Date
Nick Craig-Wood
f60476e30a cmount: make work under OpenBSD - fixes #1727 2020-09-01 13:50:10 +01:00
Nick Craig-Wood
a910ec398d vfs: make mount tests run on OpenBSD 2020-09-01 13:49:53 +01:00
Aaron Gokaslan
7dcbebf9bc jottacloud: rename unused variable to _ in jottacloud.go 2020-08-31 18:11:36 +01:00
Nick Craig-Wood
c31defbbd3 fs: add debug to show when a backend is being created
See: https://forum.rclone.org/t/rclone-rc-backend-command-not-working-as-expected/18834/
2020-08-31 14:51:06 +01:00
Nick Craig-Wood
e54ce35019 build: change beta numbering to be semver compatible - Fixes #4516
From now on the betas will be numbered for the version that they will
become, so:

v1.53.0-beta.NNNN.CCCCC

Where N is the commit number and C is the commit hash. When released
this will become v1.53.0 and the beta will become v1.54.0-beta.NNNN.CCCCC.

The commit number is the count of the commits since the root of the
tree, since we can no longer use the git version numbers since the
last tag.

This will simplify building the stable branch but that release
procedure hasn't been revised yet.

This commit also injects the name of the branch for the beta builds
into the download path.
2020-08-31 13:55:04 +01:00
Nick Craig-Wood
75d54d720c version: replace internal code with github.com/coreos/go-semver
We were already importing go-semver so it makes sense to remove the
duplicated semver parsing code and just use go-semver
2020-08-31 13:55:04 +01:00
Nick Craig-Wood
cc0421cb9e rc/webgui: skip AddPlugin and RemovePlugin tests if download fails 2020-08-31 13:45:06 +01:00
Nick Craig-Wood
9c01ac9894 rc/webgui: improve error handling on web fetches 2020-08-31 13:45:06 +01:00
Chaitanya Bankanhal
20300d1f61 plugins: Change failing plugin test to new repo rclone/rclone-test-plugin 2020-08-31 13:45:06 +01:00
Chaitanya Bankanhal
6231beefc5 webui: Fix broken webui because of plugins redirection 2020-08-31 13:45:06 +01:00
Nick Craig-Wood
068cfdaa00 drive: fix "panic: send on closed channel" when recycling dir entries
In this commit:

cbf3d43561 drive: fix missing items when listing using --fast-list / ListR

We introduced a bug where under specific circumstances it could cause
a "panic: send on closed channel".

This was caused by:

- rclone engaging the workaround from the commit above
- one of the listing routines returning an error
- this caused the `in` channel to be closed to stop the readers
- however the workaround was recycling stuff into the `in` channel at the time
- hence the panic on closed channel

This fix factors out the sending to the `in` channel into `sendJob`
and calls this both from the master go routine and the list
runners. `sendJob` detects the `in` channel being closed properly and
also deals correctly with contention on the `in` channel.

Fixes #4511
2020-08-31 11:41:15 +01:00
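
The general shape of the fix, as a minimal sketch (the job type, channel names and error are illustrative, not rclone's actual code):

    package main

    import "errors"

    type listJob struct{ dir string }

    // sendJob sends a job to in, but gives up cleanly if the listing
    // has been aborted. Selecting on a done channel means we never
    // send on a channel the master goroutine has closed.
    func sendJob(in chan<- listJob, job listJob, done <-chan struct{}) error {
        select {
        case in <- job:
            return nil
        case <-done:
            return errors.New("listing aborted")
        }
    }

    func main() {
        in := make(chan listJob, 1)
        done := make(chan struct{})
        _ = sendJob(in, listJob{dir: "a"}, done) // succeeds
        close(done)
        _ = sendJob(in, listJob{dir: "b"}, done) // in is full: aborts via done
    }
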
Nick Craig-Wood
7d62d1fc97 Add aus to contributors 2020-08-31 11:41:15 +01:00
Nick Craig-Wood
e13ac28b8d Add Leo Luan to contributors 2020-08-31 11:41:15 +01:00
Lucas Kanashiro
b30ee57cd9 backend/local/aaaa: remove this unneeded file
This file was introduced as part of f39655093 probably by
mistake. There is no reference for this file in the local
backend directory.

Fixes #4536
2020-08-30 22:35:58 +01:00
Egor Margineanu
921e384c4d s3: update IBM COS endpoints - fixes #4522 2020-08-30 17:21:11 +01:00
Aaron Gokaslan
bf685f600e webgui: fixes previously unhandled error in JSON marshal in fs/rc/webgui/plugins.go:writeToFile 2020-08-30 17:15:03 +01:00
aus
b6d3cad70e sftp: add options for subsystem and server_command - fixes #1801 2020-08-25 21:38:13 +01:00
Leo Luan
c665201b85 vfs: support synchronous cache space recovery upon ENOSPC
This patch adds support for synchronous cache space recovery, allowing
read threads to recover from ENOSPC errors when cache space can be
reclaimed from cache items that are not in use or are safe to be
reset/emptied.

The patch complements the existing cache cleaning process in two ways.

Firstly, the existing cache cleaning process is time-driven and runs
periodically. The cache space can run out while the cache cleaner
thread is still waiting for its next scheduled run, in which case the
IO threads encountering ENOSPC return an internal error to the
applications even though cache space could be recovered. This patch
addresses the problem by having the read threads kick the cache
cleaner thread in this condition to recover cache space, preventing
unnecessary ENOSPC errors from being seen by the applications.

Secondly, this patch enhances the cache cleaner to support cache
item reset. Currently the cache purge process removes cache
items that are not in use. This may not be sufficient when the
total size of the working set exceeds the cache directory's
capacity. Like in the current code, this patch starts the purge
process by removing cache files that are not in use. Cache items
whose access times are older than vfs-cache-max-age are removed first.
After that, other not-in-use items are removed in LRU order until the
total size falls below vfs-cache-max-size. If the total is still above
the quota (vfs-cache-max-size) at this point, this patch adds a cache
reset step to reset/empty cache files that are still in use but not
dirty. This enables application processes to continue without
seeing an error even when the working set depletes the cache space
as long as there is not a large write working set hoarding the
entire cache space.

By design this patch does not add ENOSPC error recovery for write
IOs. Rclone does not empty a write cache item until the file data
is written back to the backend upon close. Allowing more cache
space to be consumed by dirty cache items when the cache space is
already running low would increase the risk of exhausting the cache
space in a way that the vfs mount becomes unreadable.
2020-08-25 21:12:06 +01:00
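
An illustrative outline of the resulting cleaning order (all names here are assumptions for the sketch, not the VFS cache's actual types):

    package main

    import (
        "sort"
        "time"
    )

    type item struct {
        atime time.Time
        size  int64
        inUse bool
        dirty bool
    }

    // purge sketches the three phases described above.
    func purge(items []*item, used *int64, maxAge time.Duration, quota int64) {
        remove := func(it *item) { *used -= it.size; it.size = 0 }
        // 1. drop not-in-use items older than vfs-cache-max-age
        for _, it := range items {
            if !it.inUse && time.Since(it.atime) > maxAge {
                remove(it)
            }
        }
        // 2. drop remaining not-in-use items, oldest first, until under quota
        sort.Slice(items, func(i, j int) bool {
            return items[i].atime.Before(items[j].atime)
        })
        for _, it := range items {
            if *used <= quota {
                return
            }
            if !it.inUse {
                remove(it)
            }
        }
        // 3. still over quota: reset/empty items in use but not dirty
        for _, it := range items {
            if *used <= quota {
                return
            }
            if it.inUse && !it.dirty {
                remove(it)
            }
        }
    }

    func main() {}
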
Chaitanya Bankanhal
d6996e3347 plugins: Add url query params to regex for referrer path 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
dffcc99373 plugins: Create availablePlugins config file if it does not exist. 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
09b79679cd plugins: restructure and add tests for pluginsctl/* calls 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
cf68e61f40 Add redirection for plugin urls 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
22674d1146 plugins: Add reverse proxy pluginsHandler for serving plugins 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
f9ee0dc3f2 plugins: allow installation and use of plugins and test plugins with rclone-webui 2020-08-24 10:56:04 +01:00
Chaitanya Bankanhal
65fa6a946a webui: Expose webui downloader and other utility for use with plugins 2020-08-24 10:56:04 +01:00
Chaitanya
4cf82118d9 rc: add plugins support 2020-08-24 10:56:04 +01:00
Chaitanya
5f56611a76 webgui: Move to new package fs/rc/webgui. 2020-08-24 10:56:04 +01:00
Nick Craig-Wood
0f7a2f0f3c fichier: Detect Flood detected: IP Locked error and sleep for 30s
This is in an attempt to make the integration tests pass.
2020-08-23 18:01:22 +01:00
Nick Craig-Wood
be2b310ace Add Jay McEntire to contributors 2020-08-23 18:01:22 +01:00
Jay McEntire
45afe97e8e drive: Added --drive-starred-only to only show starred files - fixes #3928 2020-08-21 17:30:41 +01:00
Nick Craig-Wood
fee8f21ce1 pcloud: Add example hostnames to configurator and more docs - Fixes #4493
When using `rclone authorize` the hostname doesn't get set in the
config file.

This commit allows it to be set in the configurator and gives the user
a hint that it needs setting.
2020-08-21 16:14:02 +01:00
Nick Craig-Wood
1abc252ed3 onedrive: document refresh token expiry - fixes #4512 2020-08-21 15:56:41 +01:00
Nick Craig-Wood
801a820c54 s3: fix detection of bucket existing
This reverts part of

151f03378f s3: fix upload of single files into buckets without create permission

This erroneously assumed that a HEAD request on a non-existent object
would return "NotFound" only if the bucket was found. In fact it returns
"NotFound" when the bucket isn't found as well.

This will break the fix for #4297 - however that can be made to work
using the new --s3-assume-bucket-exists flag
2020-08-21 13:28:08 +01:00
Nick Craig-Wood
2bcc66c805 drive: fix duplication of Google docs on server side copy #4517
Before this change, rclone was looking for the file without the
extension to see if it existed which meant that it never did.

This change checks that the destination file exists first, before
removing the extension.
2020-08-20 20:19:33 +01:00
Nick Craig-Wood
b5ba077a2f drive: work around drive bug which didn't set modtime of copied docs
Google drive appears to no longer be copying the modification time of
google docs.

Setting the mod time immediately after the copy doesn't work either,
so this patch copies the object, waits for 1 second and then sets the
modtime.

Fixes #4517
2020-08-20 20:19:33 +01:00
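
The shape of the workaround, as a hedged sketch (Object and doCopy are stand-ins, not rclone's actual types):

    package main

    import (
        "context"
        "time"
    )

    // Object is a stand-in for the backend object type.
    type Object interface {
        SetModTime(ctx context.Context, t time.Time) error
    }

    // copyWithModTime copies, waits a second for drive to settle, then
    // sets the modtime explicitly.
    func copyWithModTime(ctx context.Context, doCopy func() (Object, error),
        modTime time.Time) (Object, error) {
        obj, err := doCopy()
        if err != nil {
            return nil, err
        }
        time.Sleep(time.Second) // setting it immediately doesn't stick
        if err := obj.SetModTime(ctx, modTime); err != nil {
            return nil, err
        }
        return obj, nil
    }

    func main() {}
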
Nick Craig-Wood
0931b84940 pcloud: Fix rclone link for files
This was only working for files in the root directory and wasn't
looking at the encoding.

This is fixed to use NewObject which takes both things into account
and it makes the share by ID instead of by path.

This problem was spotted by the integration tests.
2020-08-20 20:09:55 +01:00
Nick Craig-Wood
94a0991584 vfs: set the modtime of the cache file immediately
Before this change we set the modtime of the cache file when all
writers had finished.

This has the unfortunate effect that the file is uploaded with the
wrong modtime, which means that on backends which can't set modtimes
except when uploading files, the modtime ends up wrong.

This change sets the modtime of the cache file immediately in the
cache and in turn sets the modtime in the file info.
2020-08-20 16:24:04 +01:00
Nick Craig-Wood
9d3d397f50 test_all: disable chunker + mailru tests while mailru is broken #4376 2020-08-20 12:50:20 +01:00
Nick Craig-Wood
38e8415e77 test_all: remove Digital Ocean s3 integration tests due to excessive rate limiting
This is what I wrote to Digital Ocean support on July 10, 2020 - alas
it didn't result in the rate limits dropping, so reluctantly I'm going
to remove DO from the integration tests since they never pass and have
no hope of ever passing while this rate limit is in effect.

----

Somewhere towards the end of June 2020 or the start of July 2020 my
integration tests between rclone ( https://rclone.org ) and Digital
Ocean started failing.

I tried moving the tests to different regions (currently they are
using AMS1 because I'm in Europe) with no improvement.

Rclone seems to be hitting this rate limit as documented here:
https://www.digitalocean.com/docs/spaces/#limits

- 2 COPYs per 5 minutes on any individual object in a Space

Rclone creates small objects about 100 bytes in size and renames them
a few times - this involves using the COPY call as S3 does not have a
rename API. The tests do this more than twice per object so hit the 5
minute limit I think. Rclone does exponential backoff but fails after
10 retries without ever having reached a 5 minute delay.

Having a 5 minute lockout on an S3 compatible API is surprising!

Rclone runs integration tests against about 30 other providers, none
of which has a rate limit like this.

I understand the need for a COPY rate limit as server side copying
large files can be resource intensive. However a 5 minute lockout for
copying 100 byte files seems excessive!

Might I humbly suggest that you reduce or eliminate this rate limit
for small files?

----

This was the reply

Unfortunately it is not possible to raise this limit or remove it
currently on our platform. I do see how this would interfere with type
of applications that need to copy many small files and will be happy
to take the feedback to our engineering team to see how we can improve
the spaces system in the future
2020-08-20 12:50:04 +01:00
Nick Craig-Wood
fb9edbe34e test_all: export more internal variables to index.json for analysis 2020-08-20 12:23:33 +01:00
Nick Craig-Wood
85f9bd1abf union: fix tests by looking for fs.ErrorDirNotFound found in Purge and About
Before this change we errored out if one upstream errored in Purge or
About.

This change checks for fs.ErrorDirNotFound and skips that backend in
this case.
2020-08-19 18:04:16 +01:00
Nick Craig-Wood
63e4d2952b fstests: Suggest that Purge on a nonexistent dir should return fs.ErrorDirNotFound 2020-08-19 18:03:42 +01:00
Nick Craig-Wood
52247e9a9f local: return fs.ErrorDirNotFound from About and Purge
Before this a stat error was returned which wasn't very helpful.
2020-08-19 18:02:21 +01:00
Nick Craig-Wood
d2ad293fae vfs: fix rename tests by waiting for writes to complete
Before this change the background writing of the file was racing with
the test of the object on the remote.

This meant that the tests passed locally but failed on a lot of the
remotes.
2020-08-19 17:04:17 +01:00
Nick Craig-Wood
6082096f7e vfs: check file exists in cache before renaming/setmodtime/deleting
Before this change we didn't check the file exists before renaming it,
setting its modification time or deleting it. If the file isn't in the
cache we don't need to do the action since it has been done on the
actual object, so these errors were producing unnecessary log messages.

This change checks to see if the file exists first before doing those
actions.
2020-08-19 17:01:59 +01:00
Nick Craig-Wood
9a6fcd035b vfs: file: fix some locking issues reading f.d without the lock
Before this change we were reading File.d without the lock. This
isn't allowed as d can change when the file is renamed.
2020-08-19 17:01:33 +01:00
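
The fix applies the usual pattern of reading the field under the lock, roughly like this (a reduced stand-in, not the real vfs.File):

    package main

    import "sync"

    type Dir struct{ path string }

    type File struct {
        mu sync.RWMutex
        d  *Dir // protected by mu; changes when the file is renamed
    }

    // Dir returns f.d while holding the read lock.
    func (f *File) Dir() *Dir {
        f.mu.RLock()
        defer f.mu.RUnlock()
        return f.d
    }

    func main() {}
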
Nick Craig-Wood
47d08ac1f1 vfs: recommend --vfs-cache-modes writes on backends which can't stream 2020-08-18 17:33:27 +01:00
Nick Craig-Wood
c4c6a1ee7d Add Kaloyan Raev to contributors 2020-08-18 17:33:27 +01:00
Anagh Kumar Baranwal
29d6358f34 docs: Updated docker docs regarding usage of the RC API from outside the container

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2020-08-17 17:11:22 +01:00
Chaitanya Bankanhal
6308153ae7 rc: pass response writer when needsResponse is set instead of pointer
rc: Fix failing tests for *http.ResponseWriter
2020-08-17 17:09:31 +01:00
Chaitanya Bankanhal
a9713cd0ed core/command: Add streaming output for long running commands. 2020-08-17 17:09:31 +01:00
Chaitanya Bankanhal
1cae4152f9 rc: add NeedsResponse for rc calls 2020-08-17 17:09:31 +01:00
Nick Craig-Wood
4884bee8ba core/command: pretend to be "rclone version" to make tests pass 2020-08-17 17:09:31 +01:00
Chaitanya Bankanhal
54fc2821cd core/command: Add version command instead of ls 2020-08-17 17:09:31 +01:00
Chaitanya Bankanhal
5549fd25fc core/command: Allow rc to execute rclone terminal commands.
Allow command parameter to be skipped.
2020-08-17 17:09:31 +01:00
Kaloyan Raev
3d5a63607e backend/tardigrade: Upgrade to uplink v1.2.0
Uplink v1.2.0 comes with two improvements related to rclone:
* Fix for resource leak in uploads.
* The socket dialer comes with better congestion control in some
environments. On Linux environments, if a congestion controller named
'ledbat' is installed, it will be used. Consider installing
https://github.com/silviov/TCP-LEDBAT
2020-08-13 16:32:18 +01:00
Klaus Post
cb7534dcdf lib: Add file name compression
Allows compressing short arbitrary strings, returning a string using base64 URL encoding.

A generator for tables is included and a few samples have been added. Add more to init.go.

Tested with fuzzing for crash resistance and symmetry, see fuzz.go
2020-08-13 16:14:11 +01:00
Nick Craig-Wood
770a6f2cad build: build with go1.15.x by default now that it is released 2020-08-12 09:51:22 +01:00
Nick Craig-Wood
aab9aa8a2e js: add experimental interface for integrating rclone into browsers
This works by compiling rclone to wasm and exporting the RC api to
javascript.
2020-08-10 17:32:21 +01:00
Nick Craig-Wood
3a14b1d5a9 build: make rclone build with wasm
Needed to drop
- azureblob backend
- cache backend
- qingstor backend
- cachestats command
- ncdu command
2020-08-10 17:32:21 +01:00
Nick Craig-Wood
ac044b1c54 Add Tim Gallant to contributors 2020-08-10 17:32:21 +01:00
Chaitanya Bankanhal
61c7ea4085 rc: fix rc/uploadfile only working for root of the fs 2020-08-10 17:09:46 +01:00
Nick Craig-Wood
01280798e9 build: drop macOS 386 build as it is no longer supported by go1.15
The go team made the decision to drop support for 32 bit macOS as 32
bit apps are no longer supported by macOS and 32 bit hardware hasn't
been produced by Apple for over 10 years.
2020-08-09 12:59:17 +01:00
Nick Craig-Wood
db56d30078 build: build with go1.15-rc2 2020-08-09 10:38:02 +01:00
Nick Craig-Wood
a00274d2ab build: update test builder to go1.15-rc2 2020-08-08 17:15:43 +01:00
Nick Craig-Wood
82975109af Start v1.52.3-DEV development 2020-08-08 10:35:06 +01:00
Tim Gallant
30eb094f28 oauthutil: adds SharedOptions for OAuth backends
1. adds SharedOptions data structure to oauthutil
2. adds config.ConfigToken option to oauthutil.SharedOptions
3. updates the backends that have oauth functionality

Fixes #2849
2020-08-07 16:32:01 +01:00
Nick Craig-Wood
b401a727f7 onedrive: add --onedrive-no-versions flag to remove old versions - fixes #4106 2020-08-07 15:58:30 +01:00
Nick Craig-Wood
8eb16ce89c onedrive: implement rclone cleanup #4106 2020-08-07 15:58:30 +01:00
Nick Craig-Wood
8e7eb37456 drive: implement backend command untrash
rclone backend untrash drive:directory

This was based on: https://gitlab.com/B4dM4n/drive-untrash

See: https://forum.rclone.org/t/rclone-teamdrive-undelete/18278/3
2020-08-07 11:10:37 +01:00
Nick Craig-Wood
4d7f91309b vfs: fix download threads timing out
Before this fix, download threads would fill up the buffer and then
time out even though data was still being read from them. If the client
was streaming slower than network speed this caused the downloader to
stop and be restarted continuously. This caused more potential for
skips in the download and unnecessary network transactions.

This patch fixes that behaviour - as long as a downloader is being
read from more often than once every 5 seconds, it won't time out.

This was done by:

- kicking the downloader whenever ensureDownloader is called
- making the downloader loop if it has already downloaded past the maxOffset
- making setRange() always kick the downloader
2020-08-06 17:26:18 +01:00
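
A reduced sketch of the keep-alive idea behind the kicking (illustrative only; the real downloader does much more):

    package main

    import "time"

    type downloader struct {
        kicked chan struct{}
    }

    // kick records, without blocking, that a reader still wants data.
    func (d *downloader) kick() {
        select {
        case d.kicked <- struct{}{}:
        default:
        }
    }

    // run keeps downloading while kicks arrive and only times out
    // after 5 seconds with no kicks.
    func (d *downloader) run(download func()) {
        timer := time.NewTimer(5 * time.Second)
        defer timer.Stop()
        for {
            select {
            case <-d.kicked:
                if !timer.Stop() {
                    <-timer.C
                }
                timer.Reset(5 * time.Second)
                download()
            case <-timer.C:
                return // not read from for 5s: stop this downloader
            }
        }
    }

    func main() {}
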
Nick Craig-Wood
109b695621 vfs: add --vfs-read-ahead parameter for use with --vfs-cache-mode full
This parameter causes extra read-ahead beyond --buffer-size; the extra
data is buffered on disk rather than in memory.
2020-08-06 17:26:18 +01:00
Nick Craig-Wood
177d2f2f79 build: add script for torturing the VFS 2020-08-06 17:26:18 +01:00
Nick Craig-Wood
f5439ddc54 accounting: fix deadlock in stats printing
The deadlock was caused in transfermap.go by calling mu.RLock() in one
function then calling it again in a sub function. Normally this is
fine, however this leaves a window where mu.Lock() can be called. When
mu.Lock() is called it doesn't allow the second mu.RLock() and
deadlocks.

    Thread 1                   Thread 2
    String():mu.RLock()
                               del():mu.Lock()
    sortedSlice():mu.RLock()                     - DEADLOCK

Lesson learnt: don't try using locks recursively ever!

This patch fixes the problem by removing the second mu.RLock(). This
was done by factoring the code that was calling it into the
transfermap.go file so all the locking can be seen at once which was
ultimately the cause of the problem - the code which used the locks
was too far away from the rest of the code using the lock.

This problem was introduced in:

bfa5715017 fs/accounting: sort transfers by start time

Which hasn't been released in a stable version yet
2020-08-05 17:13:00 +01:00
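
A minimal reproduction of the hazard (a sketch, not the accounting code):

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        mu        sync.RWMutex
        transfers []string
    )

    func sortedSlice() []string {
        mu.RLock() // second read lock in the same goroutine - the bug
        defer mu.RUnlock()
        return append([]string(nil), transfers...)
    }

    // String holds the read lock and then calls sortedSlice, which takes
    // it again. If a writer's mu.Lock() lands between the two RLocks, the
    // second RLock queues behind the writer while the writer waits for
    // the first RLock to be released: deadlock.
    func String() string {
        mu.RLock()
        defer mu.RUnlock()
        return fmt.Sprint(sortedSlice())
    }

    func main() { fmt.Println(String()) }
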
Nick Craig-Wood
324077fb48 swift: fix update multipart object removing all of its own parts
After uploading a multipart object, rclone deletes any unused parts.

Probably as part of the listing unification, the detection of the
parts belonging to the current upload was failing, so calling Update
was deleting the parts for the current object.

This change fixes the detection so that all the old parts are deleted
but none of the new ones.

Fixes #4075
2020-08-03 14:45:03 +01:00
Nick Craig-Wood
f50ab981f7 drive: stop using root_folder_id as a cache #4419
Previous to this change rclone cached the looked up root_folder_id in
the root_folder_id config variable.

This has caused a lot of confusion and a few attempts at workarounds
and ultimately was a mistake.

This reverts rclone attempting to cache anything in root_folder_id and
returns that variable to be entirely user modified.

It gives a little hint in the debug that rclone could be sped up
slightly by setting it, but it is up to the user to think about
whether that would be OK or not.

    Google drive root '': root_folder_id = "XXX" - save this in the config to speed up startup

It does not change root_folder_id itself, leaving this to the user.

See: https://forum.rclone.org/t/rclone-gdrive-no-longer-returning-anything/17215
2020-08-02 11:47:07 +01:00
Nick Craig-Wood
0c620ad076 Add David Ibarra to contributors 2020-08-02 11:47:07 +01:00
David Ibarra
49cf2eb7e4 cmd/obscure: Allow obscure command to accept password on STDIN
`rclone obscure` currently only accepts a command line argument of `password` to generate
an obfuscated password. This is an issue since generating obfuscated passwords programmatically
requires sending the plain text password as a shell argument, which can cause problems if the
password contains shell characters, or if the password is from an untrusted source.

This patch opens up STDIN, allowing developers to pipe a password
directly to `rclone obscure`, which can increase safety and convenience.
2020-08-02 11:32:47 +01:00
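
For example, piping the password in (assuming the conventional "-"
argument to mean "read from STDIN"):

    echo "secretpassword" | rclone obscure -
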
Nick Craig-Wood
a2afa9aadd fs: Add directory to optional Purge interface - fixes #1891
- add a directory to the optional Purge interface
- fix up all the backends
- add an additional integration test to test for the feature
- use the new feature in operations.Purge

Many of the backends had been prepared in advance for this so the
change was trivial for them.
2020-07-31 17:43:17 +01:00
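
The resulting shape of the optional interface, which the backend diffs below implement (sketched here; the placement and doc comment are illustrative):

    package fs

    import "context"

    // Purger is implemented by backends that can delete a directory and
    // all its contents faster than removing each object individually.
    // After this change it takes the directory to purge ("" for the root).
    type Purger interface {
        Purge(ctx context.Context, dir string) error
    }
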
Nick Craig-Wood
c2f3949ded Add tyhuber1 to contributors 2020-07-30 16:44:13 +01:00
tyhuber1
bf355c4527 local: Add --local-no-set-modtime option to prevent modtime changes
If this option is enabled, rclone will not set modtime of uploaded files and
the backend will return ModTimeNotSupported as its Precision.

Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
rclone is copying to a CIFS mount where the user rclone is
running as does not own the file uploaded. If this option is enabled,
rclone will no longer update the modtime after copying a file.

See: https://forum.rclone.org/t/chtimes-error-on-local-mounted-copy/17784
2020-07-30 16:43:17 +01:00
Nick Craig-Wood
3daa63cae8 mount: fix volume name broken in recent refactor 2020-07-29 14:23:00 +01:00
Nick Craig-Wood
4441e012cf vfs: fix saving from chrome without --vfs-cache-mode writes #4293
Due to Chrome's rather complicated use of file handles when saving
files from the downloads window, rclone was attempting to truncate a
closed file.

The file appeared closed due to the handling of 0 length files.

This patch removes the check for the file being closed in the
WriteFileHandle.Truncate call. This is safe because the only action
this method takes is to emit an error message if the file is the wrong
size.

See: https://forum.rclone.org/t/google-drive-cannot-save-files-directly-from-browser-to-gdrive-mounted-path/17992/
2020-07-28 17:18:31 +01:00
Nick Craig-Wood
122a47fba6 accounting: Allow transfers to be canceled with context #3257
This makes all transfers cancelable even if the backend doesn't
support context, as all transfers are done using the Accounting
framework.
2020-07-28 16:41:17 +01:00
Nick Craig-Wood
421585dd72 accounting: add context to Account and propagate changes #3257
This is preparation for getting the Accounting to check the context,
but first we need to get it in place. Since this is one of those
changes that makes lots of noise, this is in a separate commit.
2020-07-28 16:41:17 +01:00
Nick Craig-Wood
0bab9903ee drive: factor creation of the Fs so it can be re-used in team drive listing 2020-07-28 16:24:00 +01:00
Nick Craig-Wood
700deb0a81 drive: add rclone backend drives to list shared drives (teamdrives)
See: https://forum.rclone.org/t/google-drive-remotes-team-drive-list-commend/17595
2020-07-28 16:24:00 +01:00
Nick Craig-Wood
1222b78ec4 cmount: add support for reading unknown length files using direct IO
This means that, on Linux and OSX at least, reading a google doc from a
mount will behave sensibly.
2020-07-28 16:23:11 +01:00
Nick Craig-Wood
0ee16b51c4 mount: On Windows don't add -o uid/gid=-1 if user supplies -o uid/gid.
Before this change if the user supplied `-o uid=XXX` then rclone would
write `-o uid=-1 -o uid=XXX` so duplicating the uid value.

After this change rclone doesn't write the default `-1` version.

This fix affects `uid` and `gid`.

See: https://forum.rclone.org/t/issue-with-rclone-mount-and-resilio-sync/14730/27
2020-07-28 16:22:29 +01:00
Nick Craig-Wood
26001d520a fs: add --bwlimit-file flag to limit speeds of individual file transfers 2020-07-28 11:46:24 +01:00
David
8bf265c775 box: allow authentication with access token - fixes #4114 2020-07-28 11:43:44 +01:00
Nick Craig-Wood
62f0bbb598 dedupe: Make it obey the --size-only flag for duplicate detection #4321 2020-07-28 11:40:37 +01:00
Nick Craig-Wood
d5f4c74697 s3: implement cleanup and backend command to list & remove multipart uploads
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.

See #4302
2020-07-28 11:37:46 +01:00
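
For example (the -o max-age option follows the backend command's
documented style; treat the exact value as illustrative):

    rclone backend list-multipart-uploads s3:bucket
    rclone backend cleanup -o max-age=24h s3:bucket
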
Nick Craig-Wood
8f42532b6d sync: add --track-renames-strategy leaf
See: https://forum.rclone.org/t/how-to-minimize-bandwith-w-r-t-renames-during-sync/16928/22
2020-07-28 11:34:27 +01:00
Nick Craig-Wood
2288a5c617 s3: implement profile and shared_credentials_file options
Previously it was impossible to use two different profiles at the same
time - these config vars make that possible.

See: https://forum.rclone.org/t/s3-source-destination-named-profile/17417
2020-07-28 11:32:32 +01:00
Nick Craig-Wood
957311f479 b2: fix transfers when using download_url
Before this fix, if an object had ID set and download_url was in use,
downloading the object would give this error:

    failed to open for download: bucket example_bucket does not have file: /b2api/v1/b2_download_file_by_id (404 not_found)

After this fix we only download by ID if download_url is not set.

See: https://forum.rclone.org/t/correct-format-for-rclone-b2-download-url-variable/15498
2020-07-28 11:30:01 +01:00
Nick Craig-Wood
2cc381b91d build: disable lib/plugin under gccgo to make rclone build with gccgo 2020-07-28 09:56:31 +01:00
Nick Craig-Wood
f406dbbb4d s3: add --s3-no-check-bucket for minimising rclone transactions and perms
Fixes #4449
2020-07-27 17:49:40 +01:00
Nick Craig-Wood
3b2322285a Add kcris to contributors 2020-07-27 17:49:40 +01:00
kcris
47d093e863 drive: update docs to show use of sharing with a user instead of impersonate 2020-07-27 17:10:28 +01:00
Nick Craig-Wood
b2ae94de5b mount: fix mount flags not working
This was broken in the recent refactor.

See: https://forum.rclone.org/t/issue-with-allow-other-in-beta/18133
2020-07-27 15:24:28 +01:00
Nick Craig-Wood
4afea1ebaf docs: update install from source instructions
This has changed post Go modules.

In particular it recommends against the go get `-u` flag.

See: https://forum.rclone.org/t/install-from-source-go-get-errors/18114
2020-07-27 11:47:46 +01:00
Nick Craig-Wood
711736054f Add Jack to contributors 2020-07-26 12:07:04 +01:00
Jack
d64212d902 serve/restic: expose interfaces so that rclone can be used as a library from within restic
This patch enables rclone to be used as a library from within restic

- exposes NewServer
- exposes Server
- implements http.RoundTripper

Co-authored-by: Jack Deng <jackdeng@gmail.com>
2020-07-26 12:06:47 +01:00
Chaitanya Bankanhal
8913679d88 accounting: Fix elapsed time not showing actual time since beginning
This fixes the elapsed time display in the statistics output in the rc and in the log messages.
2020-07-26 11:59:50 +01:00
Nick Craig-Wood
4f9a80e2d3 build: actions update, cache, go1.15-rc1 build
- Use cache to store package versions
- Update actions/setup-go to v2
- Add go1.15-rc1 build
- Make separate build step
- stop downloading code into special path
- leave adding ~/go/bin to PATH to actions/setup-go
- remove docker build from xgo as we are building rclone anyway
- remove modules setting since it is now always on
- use ./... instead of listing files in tests
2020-07-25 18:52:33 +01:00
Nick Craig-Wood
aa93b39d9b build: fix tests on go1.15
go1.15 introduced a stricter policy for what you can convert with
`string()` and now `go vet` warns if you try to do `string(int)`.

See: https://github.com/golang/go/issues/32479
2020-07-25 18:51:28 +01:00
Nick Craig-Wood
101f82c6b3 drive: drop "Disabling ListR" messages down to debug
This was causing unnecessary anguish for users since these messages are
harmless and really only interesting for debugging.

See: https://forum.rclone.org/t/rclone-gdrive-error/18098
2020-07-25 16:50:55 +01:00
Nick Craig-Wood
d35673efc6 webdav: fix directory creation with 4shared - fixes #4428
When we run MKCOL on 4shared on a directory that already exists, this
returns a 409/Conflict error. However this error code usually means
that the intermediate collections need creating.

The actual error code to return when trying to create a directory that
already exists isn't specified in the RFC, only that an error MUST be
returned and there are already 3 statuses checked in the code.

However using 409 makes rclone's usual strategy for making directories
fail and return the 409 error.

This patch tries the MKCOL and if it returns an unrecognised error
code, then calls PROPFIND on the directory to discover whether the
directory really exists or not.

This should also cover other WebDAV servers returning other error
messages we haven't accounted for in the code yet.
2020-07-24 17:26:42 +01:00
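
A sketch of the fallback logic (mkcol, propfindExists and isRecognised are placeholders, not rclone's webdav code):

    package main

    import "errors"

    var errUnrecognised = errors.New("409 Conflict")

    // Stubs standing in for the real WebDAV calls.
    func mkcol(path string) error                  { return errUnrecognised }
    func isRecognised(err error) bool              { return false }
    func propfindExists(path string) (bool, error) { return true, nil }

    // mkdir tries MKCOL and, on an unrecognised status such as 4shared's
    // 409, asks PROPFIND whether the directory already exists.
    func mkdir(path string) error {
        err := mkcol(path)
        if err == nil || isRecognised(err) {
            return err
        }
        exists, perr := propfindExists(path)
        if perr == nil && exists {
            return nil // already there: treat the MKCOL failure as success
        }
        return err
    }

    func main() { _ = mkdir("dir") }
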
Nick Craig-Wood
3286d1992b mount: warn macOS users that mount implementation is changing #4393 2020-07-24 15:41:31 +01:00
Nick Craig-Wood
4ac662d144 cmount: fix macOS losing directory contents #4393
Before this change when reading directories we would use the directory
handle and the Readdir(-1) call on the directory handle. This worked
fine for the first read, but if the directory was read again on the
same handle Readdir(-1) returns nothing (as per its design).

It turns out that macOS leaves the directory handle open and just
re-reads the data from it, so this problem causes directories to start
out full then subsequently appear empty.

macOS/OSXFUSE is passing an offset of 0 to the Readdir call telling
rclone to seek in the directory, but we've told FUSE that we can't
seek by always returning ofst=0 in the fill function.

This fix works around the problem by reading the directory from the
path each time, ignoring the actual handle. This should be no less
efficient.

We will return an ESPIPE if offset is ever non-zero.

There are possible corner cases reading deleted directories which this
ignores.
2020-07-24 15:38:08 +01:00
Nick Craig-Wood
d73a418a55 cmount: always supply stat information in Readdir
It is cheap to make the stat information here - we give FUSE at least
a file type to look at.
2020-07-24 15:12:05 +01:00
Nick Craig-Wood
306a3e0cd7 cmount: catch panics in initialization and turn into error messages 2020-07-24 15:12:05 +01:00
Nick Craig-Wood
975a53c9e3 build: enable cmount on macOS #4393 2020-07-24 15:12:05 +01:00
Nick Craig-Wood
78fdc5805b vendor: Update github.com/billziss-gh/cgofuse to v1.4.0 #4393 2020-07-24 15:12:05 +01:00
Nick Craig-Wood
8f9d5af26d cache: remove mount tests as they aren't being run and cause maintenance issues
Before this change the cache backend contained its own routines for
mounting and testing on that mount.

These tests are never run on the CI and cause a maintenance burden.

This commit removes the tests.
2020-07-24 11:57:49 +01:00
Nick Craig-Wood
6ff5787b40 mount: add VFS and Mount options to mount/listmounts 2020-07-24 10:48:51 +01:00
Nick Craig-Wood
3c1c6d2f01 mount: add mountOpt to mount/mount rc 2020-07-24 10:48:51 +01:00
Nick Craig-Wood
0272a7f405 mount: change interface of mount commands to take mount options
This is in preparation for being able to pass mount options to the rc
command "mount/mount"
2020-07-24 10:48:51 +01:00
Nick Craig-Wood
e1d34ef427 mount: factor Mount into mountlib and tidy signal handling
This factors common code from mount, cmount and mount2 into mountlib.

It also uses atexit for unregistering the mount.
2020-07-23 13:08:38 +01:00
Nick Craig-Wood
26b4698212 mount: make mount/mount remote control take vfsOpt option
See: https://forum.rclone.org/t/passing-mount-options-like-vfs-cache-mode-when-using-rclone-rc-mount-mount/17863
2020-07-23 12:30:41 +01:00
Nick Craig-Wood
2871268505 mount: change interface of mount commands to take VFS
This is in preparation for being able to pass options to the rc command
"mount/mount"
2020-07-23 12:30:41 +01:00
Nick Craig-Wood
744828a4de rc: allow JSON parameters to simplify command line usage
If the parameter being passed is an object then it can be passed as a
JSON string rather than using the `--json` flag which simplifies the
command line.

rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'

Rather than

rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
2020-07-22 18:40:52 +01:00
188 changed files with 5625 additions and 2271 deletions

View File

@@ -19,24 +19,23 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'race', 'go1.11', 'go1.12', 'go1.13']
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'go1.11', 'go1.12', 'go1.13', 'go1.14']
include:
- job_name: linux
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
go: '1.15.x'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
quicktest: true
racequicktest: true
deploy: true
- job_name: mac
os: macOS-latest
go: '1.14.x'
modules: 'on'
gotags: '' # cmount doesn't work on osx travis for some reason
go: '1.15.x'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
racequicktest: true
@@ -44,8 +43,7 @@ jobs:
- job_name: windows_amd64
os: windows-latest
go: '1.14.x'
modules: 'on'
go: '1.15.x'
gotags: cmount
build_flags: '-include "^windows/amd64" -cgo'
quicktest: true
@@ -54,8 +52,7 @@ jobs:
- job_name: windows_386
os: windows-latest
go: '1.14.x'
modules: 'on'
go: '1.15.x'
gotags: cmount
goarch: '386'
cgo: '1'
@@ -65,59 +62,51 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
go: '1.15.x'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
compile_all: true
deploy: true
- job_name: race
os: ubuntu-latest
go: '1.14.x'
modules: 'on'
quicktest: true
racequicktest: true
- job_name: go1.11
os: ubuntu-latest
go: '1.11.x'
modules: 'on'
quicktest: true
- job_name: go1.12
os: ubuntu-latest
go: '1.12.x'
modules: 'on'
quicktest: true
- job_name: go1.13
os: ubuntu-latest
go: '1.13.x'
modules: 'on'
quicktest: true
- job_name: go1.14
os: ubuntu-latest
go: '1.14.x'
quicktest: true
racequicktest: true
name: ${{ matrix.job_name }}
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@v1
uses: actions/checkout@v2
with:
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v1
uses: actions/setup-go@v2
with:
stable: 'false'
go-version: ${{ matrix.go }}
- name: Set environment variables
shell: bash
run: |
echo '::set-env name=GOPATH::${{ runner.workspace }}'
echo '::add-path::${{ runner.workspace }}/bin'
echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
@@ -167,10 +156,22 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Run tests
- name: Go module cache
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
make
- name: Run tests
shell: bash
run: |
make quicktest
if: matrix.quicktest
@@ -231,7 +232,7 @@ jobs:
GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod
# xgo \
# -image=billziss/xgo-cgofuse \
# -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
# -targets=darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
# -tags cmount \
# -dest build \
# .
@@ -242,9 +243,9 @@ jobs:
.
- name: Build rclone
shell: bash
run: |
docker pull golang
docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=mod -v
make
- name: Upload artifacts
run: |

.gitignore
View File

@@ -9,3 +9,4 @@ rclone.iml
*.test
*.log
*.iml
fuzz-build.zip

View File

@@ -7,27 +7,28 @@ RELEASE_TAG := $(shell git tag -l --points-at HEAD)
VERSION := $(shell cat VERSION)
# Last tag on this branch
LAST_TAG := $(shell git describe --tags --abbrev=0)
# Next version
NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
# If we are working on a release, override branch to master
ifdef RELEASE_TAG
BRANCH := master
LAST_TAG := $(shell git describe --abbrev=0 --tags $(VERSION)^)
endif
TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/
TAG_BRANCH := .$(BRANCH)
BRANCH_PATH := branch/$(BRANCH)/
# If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH :=
BRANCH_PATH :=
endif
# Make version suffix -DDD-gCCCCCCCC (D=commits since last relase, C=Commit) or blank
VERSION_SUFFIX := $(shell git describe --abbrev=8 --tags | perl -lpe 's/^v\d+\.\d+\.\d+//; s/^-(\d+)/"-".sprintf("%03d",$$1)/e;')
# TAG is current version + number of commits since last release + branch
# Make version suffix -beta.NNNN.CCCCCCCC (N=Commit number, C=Commit)
VERSION_SUFFIX := -beta.$(shell git rev-list --count HEAD).$(shell git show --no-patch --no-notes --pretty='%h' HEAD)
# TAG is current version + commit number + commit + branch
TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH)
NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
ifndef RELEASE_TAG
TAG := $(TAG)-beta
ifdef RELEASE_TAG
TAG := $(RELEASE_TAG)
endif
GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... )
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
@@ -57,7 +58,6 @@ vars:
@echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'"
@echo VERSION="'$(VERSION)'"
@echo NEXT_VERSION="'$(NEXT_VERSION)'"
@echo GO_VERSION="'$(GO_VERSION)'"
@echo BETA_URL="'$(BETA_URL)'"
@@ -75,10 +75,10 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
# Do source code quality checks
check: rclone
@@ -221,25 +221,24 @@ fetch_binaries:
serve: website
cd docs && hugo server -v -w --disableFastRender
tag: doc
@echo "Old tag is $(VERSION)"
@echo "New tag is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git tag -s -m "Version $(NEXT_VERSION)" $(NEXT_VERSION)
bin/make_changelog.py $(LAST_TAG) $(NEXT_VERSION) > docs/content/changelog.md.new
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md"
@echo "Then commit all the changes"
@echo git commit -m \"Version $(NEXT_VERSION)\" -a -v
@echo git commit -m \"Version $(VERSION)\" -a -v
@echo "And finally run make retag before make cross etc"
retag:
@echo "Version is $(VERSION)"
git tag -f -s -m "Version $(VERSION)" $(VERSION)
startdev:
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(VERSION)-DEV\"\n" | gofmt > fs/version.go
@echo "Version is $(VERSION)"
@echo "Next version is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)-DEV\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git commit -m "Start $(VERSION)-DEV development" fs/version.go
winzip:

View File

@@ -69,6 +69,8 @@ this will be done already.
Now
* FIXME this is now broken with new semver layout - needs fixing
* FIXME the TAG=${NEW_TAG} shouldn't be necessary any more
* git co ${BASE_TAG}-stable
* git cherry-pick any fixes
* Test (see above)

View File

@@ -1 +1 @@
v1.52.2
v1.53.0

View File

@@ -76,23 +76,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Amazon Application Client ID.",
Required: true,
}, {
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret.",
Required: true,
}, {
Name: config.ConfigAuthURL,
Help: "Auth server URL.\nLeave blank to use Amazon's.",
Advanced: true,
}, {
Name: config.ConfigTokenURL,
Help: "Token server url.\nleave blank to use Amazon's.",
Advanced: true,
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "checkpoint",
Help: "Checkpoint for internal polling (debug).",
Hide: fs.OptionHideBoth,
@@ -143,7 +127,7 @@ underlying S3 storage.`,
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}
@@ -937,8 +921,8 @@ func (f *Fs) Hashes() hash.Set {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// ------------------------------------------------------------

View File

@@ -1,6 +1,6 @@
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
// +build !plan9,!solaris,go1.13
// +build !plan9,!solaris,!js,go1.13
package azureblob
@@ -967,8 +967,7 @@ func (f *Fs) Hashes() hash.Set {
}
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context) error {
dir := "" // forward compat!
func (f *Fs) Purge(ctx context.Context, dir string) error {
container, directory := f.split(dir)
if container == "" || directory != "" {
// Delegate to caller if not root of a container

View File

@@ -1,4 +1,4 @@
// +build !plan9,!solaris,go1.13
// +build !plan9,!solaris,!js,go1.13
package azureblob

View File

@@ -1,6 +1,6 @@
// Test AzureBlob filesystem interface
// +build !plan9,!solaris,go1.13
// +build !plan9,!solaris,!js,go1.13
package azureblob

View File

@@ -1,6 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris !go1.13
// +build plan9 solaris js !go1.13
package azureblob

View File

@@ -1143,7 +1143,8 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool) error {
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
bucket, directory := f.split(dir)
if bucket == "" {
return errors.New("can't purge from root")
}
@@ -1218,19 +1219,19 @@ func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool)
wg.Wait()
if !oldOnly {
checkErr(f.Rmdir(ctx, ""))
checkErr(f.Rmdir(ctx, dir))
}
return errReturn
}
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context) error {
return f.purge(ctx, f.rootBucket, f.rootDirectory, false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purge(ctx, dir, false)
}
// CleanUp deletes all the hidden files.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, f.rootBucket, f.rootDirectory, true)
return f.purge(ctx, "", true)
}
// copy does a server side copy from dstObj <- srcObj
@@ -1672,8 +1673,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
opts.RootURL = o.fs.opt.DownloadURL
}
// Download by id if set otherwise by name
if o.id != "" {
// Download by id if set and not using DownloadURL otherwise by name
if o.id != "" && o.fs.opt.DownloadURL == "" {
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
} else {
bucket, bucketPath := o.split()

View File

@@ -87,26 +87,23 @@ func init() {
Config: func(name string, m configmap.Mapper) {
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token")
var err error
// If using box config.json, use JWT auth
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
err = refreshJWTToken(jsonFile, boxSubType, name, m)
if err != nil {
log.Fatalf("Failed to configure token with jwt authentication: %v", err)
}
} else {
// Else, if not using an access token, use oauth2
} else if boxAccessToken == "" || !boxAccessTokenOk {
err = oauthutil.Config("box", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token with oauth authentication: %v", err)
}
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Box App Client Id.\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "root_folder_id",
Help: "Fill in for rclone to use a non root folder as its starting point.",
Default: "0",
@@ -114,6 +111,9 @@ func init() {
}, {
Name: "box_config_file",
Help: "Box App config.json location\nLeave blank normally." + env.ShellExpandHelp,
}, {
Name: "access_token",
Help: "Box App Primary Access Token\nLeave blank normally.",
}, {
Name: "box_sub_type",
Default: "user",
@@ -149,7 +149,7 @@ func init() {
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}
@@ -247,6 +247,7 @@ type Options struct {
CommitRetries int `config:"commit_retries"`
Enc encoder.MultiEncoder `config:"encoding"`
RootFolderID string `config:"root_folder_id"`
AccessToken string `config:"access_token"`
}
// Fs represents a remote box
@@ -385,16 +386,22 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}
root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Box")
client := fshttp.NewClient(fs.Config)
var ts *oauthutil.TokenSource
// If not using an accessToken, create an oauth client and tokensource
if opt.AccessToken == "" {
client, ts, err = oauthutil.NewClient(name, m, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure Box")
}
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
srv: rest.NewClient(client).SetRoot(rootURL),
pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
}
@@ -404,23 +411,30 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
}).Fill(f)
f.srv.SetErrorHandler(errorHandler)
// If using an accessToken, set the Authorization header
if f.opt.AccessToken != "" {
f.srv.SetHeader("Authorization", "Bearer "+f.opt.AccessToken)
}
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
// If using box config.json and JWT, renewing should just refresh the token and
// should do so whether there are uploads pending or not.
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
err := refreshJWTToken(jsonFile, boxSubType, name, m)
return err
})
f.tokenRenewer.Start()
} else {
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
return err
})
if ts != nil {
// If using box config.json and JWT, renewing should just refresh the token and
// should do so whether there are uploads pending or not.
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
err := refreshJWTToken(jsonFile, boxSubType, name, m)
return err
})
f.tokenRenewer.Start()
} else {
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.readMetaDataForPath(ctx, "")
return err
})
}
}
// Get rootFolderID
@@ -842,8 +856,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
@@ -1258,8 +1272,10 @@ func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID str
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
o.fs.tokenRenewer.Start()
defer o.fs.tokenRenewer.Stop()
if o.fs.tokenRenewer != nil {
o.fs.tokenRenewer.Start()
defer o.fs.tokenRenewer.Stop()
}
size := src.Size()
modTime := src.ModTime(ctx)

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache
@@ -1702,17 +1702,20 @@ func (f *Fs) Hashes() hash.Set {
return f.Fs.Hashes()
}
// Purge all files in the root and the root directory
func (f *Fs) Purge(ctx context.Context) error {
fs.Infof(f, "purging cache")
f.cache.Purge()
// Purge all files in the directory
func (f *Fs) Purge(ctx context.Context, dir string) error {
if dir == "" {
// FIXME this isn't quite right as it should purge the dir prefix
fs.Infof(f, "purging cache")
f.cache.Purge()
}
do := f.Fs.Features().Purge
if do == nil {
return nil
return fs.ErrorCantPurge
}
err := do(ctx)
err := do(ctx, dir)
if err != nil {
return err
}

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
// +build !race
package cache_test
@@ -16,7 +16,6 @@ import (
"os"
"path"
"path/filepath"
"runtime"
"runtime/debug"
"strings"
"testing"
@@ -31,13 +30,10 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -53,9 +49,7 @@ const (
var (
remoteName string
mountDir string
uploadDir string
useMount bool
runInstance *run
errNotSupported = errors.New("not supported")
decryptedToEncryptedRemotes = map[string]string{
@@ -91,9 +85,7 @@ var (
func init() {
goflag.StringVar(&remoteName, "remote-internal", "TestInternalCache", "Remote to test with, defaults to local filesystem")
goflag.StringVar(&mountDir, "mount-dir-internal", "", "")
goflag.StringVar(&uploadDir, "upload-dir-internal", "", "")
goflag.BoolVar(&useMount, "cache-use-mount", false, "Test only with mount")
}
// TestMain drives the tests
@@ -101,7 +93,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v, \n mount: %v", remoteName, useMount)
log.Printf("Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@@ -274,31 +266,6 @@ func TestInternalObjNotFound(t *testing.T) {
require.Nil(t, obj)
}
func TestInternalRemoteWrittenFileFoundInMount(t *testing.T) {
if !runInstance.useMount {
t.Skip("test needs mount mode")
}
id := fmt.Sprintf("tirwffim%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
var testData []byte
if runInstance.rootIsCrypt {
testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64)
require.NoError(t, err)
} else {
testData = []byte("test content")
}
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test"), testData)
data, err := runInstance.readDataFromRemote(t, rootFs, "test", 0, int64(len([]byte("test content"))), false)
require.NoError(t, err)
require.Equal(t, "test content", string(data))
}
func TestInternalCachedWrittenContentMatches(t *testing.T) {
testy.SkipUnreliable(t)
id := fmt.Sprintf("ticwcm%v", time.Now().Unix())
@@ -694,79 +661,6 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
}
func TestInternalChangeSeenAfterRc(t *testing.T) {
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.useMount {
t.Skipf("needs mount")
}
if !runInstance.wrappedIsExternal {
t.Skipf("needs drive")
}
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
wrappedTime := time.Now().Add(-1 * time.Hour)
err = o.SetModTime(context.Background(), wrappedTime)
require.NoError(t, err)
// get a new instance from the cache
co, err := rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.NotEqual(t, o.ModTime(context.Background()).String(), co.ModTime(context.Background()).String())
// Call the rc function
m, err := cacheExpire.Fn(context.Background(), rc.Params{"remote": "data.bin"})
require.NoError(t, err)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached file cleared")
// get a new instance from the cache
co, err = rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
_, err = runInstance.list(t, rootFs, "")
require.NoError(t, err)
// create some rand test data
testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only
li1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li1, 1)
// Call the rc function
m, err = cacheExpire.Fn(context.Background(), rc.Params{"remote": "/"})
require.NoError(t, err)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached directory cleared")
// list should have 2 items now
li2, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li2, 2)
}
func TestInternalCacheWrites(t *testing.T) {
id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
@@ -914,15 +808,9 @@ func TestInternalBug2117(t *testing.T) {
type run struct {
okDiff time.Duration
runDefaultCfgMap configmap.Simple
mntDir string
tmpUploadDir string
useMount bool
isMounted bool
rootIsCrypt bool
wrappedIsExternal bool
unmountFn func() error
unmountRes chan error
vfs *vfs.VFS
tempFiles []*os.File
dbPath string
chunkPath string
@@ -932,9 +820,7 @@ type run struct {
func newRun() *run {
var err error
r := &run{
okDiff: time.Second * 9, // really big diff here but the build machines seem to be slow. need a different way for this
useMount: useMount,
isMounted: false,
okDiff: time.Second * 9, // really big diff here but the build machines seem to be slow. need a different way for this
}
// Read in all the defaults for all the options
@@ -947,32 +833,6 @@ func newRun() *run {
r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
}
if mountDir == "" {
if runtime.GOOS != "windows" {
r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
if err != nil {
log.Fatalf("Failed to create mount dir: %v", err)
return nil
}
} else {
// Find a free drive letter
drive := ""
for letter := 'E'; letter <= 'Z'; letter++ {
drive = string(letter) + ":"
_, err := os.Stat(drive + "\\")
if os.IsNotExist(err) {
goto found
}
}
log.Print("Couldn't find free drive letter for test")
found:
r.mntDir = drive
}
} else {
r.mntDir = mountDir
}
log.Printf("Mount Dir: %v", r.mntDir)
if uploadDir == "" {
r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
if err != nil {
@@ -1086,33 +946,21 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = f.Features().Purge(context.Background())
_ = f.Features().Purge(context.Background(), "")
require.NoError(t, err)
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
if r.useMount && !r.isMounted {
r.mountFs(t, f)
}
return f, boltDb
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
if r.useMount && r.isMounted {
r.unmountFs(t, f)
}
err := f.Features().Purge(context.Background())
err := f.Features().Purge(context.Background(), "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
cfs.StopBackgroundRunners()
if r.useMount && runtime.GOOS != "windows" {
err = os.RemoveAll(r.mntDir)
require.NoError(t, err)
}
err = os.RemoveAll(r.tmpUploadDir)
require.NoError(t, err)
@@ -1152,37 +1000,11 @@ func (r *run) writeObjectString(t *testing.T, f fs.Fs, remote, content string) f
}
func (r *run) writeRemoteBytes(t *testing.T, f fs.Fs, remote string, data []byte) {
var err error
if r.useMount {
err = r.retryBlock(func() error {
return ioutil.WriteFile(path.Join(r.mntDir, remote), data, 0600)
}, 3, time.Second*3)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
} else {
r.writeObjectBytes(t, f, remote, data)
}
r.writeObjectBytes(t, f, remote, data)
}
func (r *run) writeRemoteReader(t *testing.T, f fs.Fs, remote string, in io.ReadCloser) {
defer func() {
_ = in.Close()
}()
if r.useMount {
out, err := os.Create(path.Join(r.mntDir, remote))
require.NoError(t, err)
defer func() {
_ = out.Close()
}()
_, err = io.Copy(out, in)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
} else {
r.writeObjectReader(t, f, remote, in)
}
r.writeObjectReader(t, f, remote, in)
}
func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object {
@@ -1199,10 +1021,6 @@ func (r *run) writeObjectReader(t *testing.T, f fs.Fs, remote string, in io.Read
objInfo := object.NewStaticObjectInfo(remote, modTime, -1, true, nil, f)
obj, err := f.Put(context.Background(), in, objInfo)
require.NoError(t, err)
if r.useMount {
r.vfs.WaitForWriters(10 * time.Second)
}
return obj
}
@@ -1210,26 +1028,16 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
var err error
var obj fs.Object
if r.useMount {
err = ioutil.WriteFile(path.Join(r.mntDir, remote), data1, 0600)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
err = ioutil.WriteFile(path.Join(r.mntDir, remote), data2, 0600)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
obj, err = f.NewObject(context.Background(), remote)
} else {
in1 := bytes.NewReader(data1)
in2 := bytes.NewReader(data2)
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
in1 := bytes.NewReader(data1)
in2 := bytes.NewReader(data2)
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
obj, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err)
obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err)
err = obj.Update(context.Background(), in2, objInfo2)
}
obj, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err)
obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err)
err = obj.Update(context.Background(), in2, objInfo2)
require.NoError(t, err)
return obj
@@ -1239,30 +1047,12 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
size := end - offset
checkSample := make([]byte, size)
if r.useMount {
f, err := os.Open(path.Join(r.mntDir, remote))
defer func() {
_ = f.Close()
}()
if err != nil {
return checkSample, err
}
_, _ = f.Seek(offset, io.SeekStart)
totalRead, err := io.ReadFull(f, checkSample)
checkSample = checkSample[:totalRead]
if err == io.EOF || err == io.ErrUnexpectedEOF {
err = nil
}
if err != nil {
return checkSample, err
}
} else {
co, err := f.NewObject(context.Background(), remote)
if err != nil {
return checkSample, err
}
checkSample = r.readDataFromObj(t, co, offset, end, noLengthCheck)
co, err := f.NewObject(context.Background(), remote)
if err != nil {
return checkSample, err
}
checkSample = r.readDataFromObj(t, co, offset, end, noLengthCheck)
if !noLengthCheck && size != int64(len(checkSample)) {
return checkSample, errors.Errorf("read size doesn't match expected: %v <> %v", len(checkSample), size)
}
@@ -1285,28 +1075,19 @@ func (r *run) readDataFromObj(t *testing.T, o fs.Object, offset, end int64, noLe
}
func (r *run) mkdir(t *testing.T, f fs.Fs, remote string) {
var err error
if r.useMount {
err = os.Mkdir(path.Join(r.mntDir, remote), 0700)
} else {
err = f.Mkdir(context.Background(), remote)
}
err := f.Mkdir(context.Background(), remote)
require.NoError(t, err)
}
func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
var err error
if r.useMount {
err = os.Remove(path.Join(r.mntDir, remote))
var obj fs.Object
obj, err = f.NewObject(context.Background(), remote)
if err != nil {
err = f.Rmdir(context.Background(), remote)
} else {
var obj fs.Object
obj, err = f.NewObject(context.Background(), remote)
if err != nil {
err = f.Rmdir(context.Background(), remote)
} else {
err = obj.Remove(context.Background())
}
err = obj.Remove(context.Background())
}
return err
@@ -1315,18 +1096,10 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) {
var err error
var l []interface{}
if r.useMount {
var list []os.FileInfo
list, err = ioutil.ReadDir(path.Join(r.mntDir, remote))
for _, ll := range list {
l = append(l, ll)
}
} else {
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll)
}
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll)
}
return l, err
}
@@ -1355,13 +1128,7 @@ func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error {
func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error
if runInstance.useMount {
err = os.Rename(path.Join(runInstance.mntDir, src), path.Join(runInstance.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().DirMove != nil {
if rootFs.Features().DirMove != nil {
err = rootFs.Features().DirMove(context.Background(), rootFs, src, dst)
if err != nil {
return err
@@ -1377,13 +1144,7 @@ func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error
if runInstance.useMount {
err = os.Rename(path.Join(runInstance.mntDir, src), path.Join(runInstance.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Move != nil {
if rootFs.Features().Move != nil {
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return err
@@ -1403,13 +1164,7 @@ func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error
if r.useMount {
err = r.copyFile(t, rootFs, path.Join(r.mntDir, src), path.Join(r.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Copy != nil {
if rootFs.Features().Copy != nil {
obj, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return err
@@ -1429,13 +1184,6 @@ func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error) {
var err error
if r.useMount {
fi, err := os.Stat(path.Join(runInstance.mntDir, src))
if err != nil {
return time.Time{}, err
}
return fi.ModTime(), nil
}
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return time.Time{}, err
@@ -1446,13 +1194,6 @@ func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error)
func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
var err error
if r.useMount {
fi, err := os.Stat(path.Join(runInstance.mntDir, src))
if err != nil {
return int64(0), err
}
return fi.Size(), nil
}
obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil {
return int64(0), err
@@ -1463,28 +1204,15 @@ func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) error {
var err error
if r.useMount {
var f *os.File
f, err = os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
return err
}
defer func() {
_ = f.Close()
r.vfs.WaitForWriters(10 * time.Second)
}()
_, err = f.WriteString(data + append)
} else {
var obj1 fs.Object
obj1, err = rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
data1 := []byte(data + append)
r := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(context.Background(), r, objInfo1)
var obj1 fs.Object
obj1, err = rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
data1 := []byte(data + append)
reader := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(context.Background(), reader, objInfo1)
return err
}

View File

@@ -1,21 +0,0 @@
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
// +build !windows
// +build !race
package cache_test
import (
"testing"
"github.com/rclone/rclone/fs"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
panic("mountFs not defined for this platform")
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
panic("unmountFs not defined for this platform")
}

View File

@@ -1,79 +0,0 @@
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
// +build !race
package cache_test
import (
"os"
"testing"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/rclone/rclone/cmd/mount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
device := f.Name() + ":" + f.Root()
var options = []fuse.MountOption{
fuse.MaxReadahead(uint32(mountlib.MaxReadAhead)),
fuse.Subtype("rclone"),
fuse.FSName(device), fuse.VolumeName(device),
fuse.NoAppleDouble(),
fuse.NoAppleXattr(),
//fuse.AllowOther(),
}
err := os.MkdirAll(r.mntDir, os.ModePerm)
require.NoError(t, err)
c, err := fuse.Mount(r.mntDir, options...)
require.NoError(t, err)
filesys := mount.NewFS(f)
server := fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
err := server.Serve(filesys)
closeErr := c.Close()
if err == nil {
err = closeErr
}
r.unmountRes <- err
}()
// check if the mount process has an error to report
<-c.Ready
require.NoError(t, c.MountError)
r.unmountFn = func() error {
// Shutdown the VFS
filesys.VFS.Shutdown()
return fuse.Unmount(r.mntDir)
}
r.vfs = filesys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}

View File

@@ -1,125 +0,0 @@
// +build windows
// +build !race
package cache_test
import (
"fmt"
"os"
"testing"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/cmount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
// waitFor runs fn() until it returns true or the timeout expires
func waitFor(fn func() bool) (ok bool) {
const totalWait = 10 * time.Second
const individualWait = 10 * time.Millisecond
for i := 0; i < int(totalWait/individualWait); i++ {
ok = fn()
if ok {
return ok
}
time.Sleep(individualWait)
}
return false
}
func (r *run) mountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
device := f.Name() + ":" + f.Root()
options := []string{
"-o", "fsname=" + device,
"-o", "subtype=rclone",
"-o", fmt.Sprintf("max_readahead=%d", mountlib.MaxReadAhead),
"-o", "uid=-1",
"-o", "gid=-1",
"-o", "allow_other",
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
"-o", "atomic_o_trunc",
"--FileSystemName=rclone",
}
fsys := cmount.NewFS(f)
host := fuse.NewFileSystemHost(fsys)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
var err error
ok := host.Mount(r.mntDir, options)
if !ok {
err = errors.New("mount failed")
}
r.unmountRes <- err
}()
// unmount
r.unmountFn = func() error {
// Shutdown the VFS
fsys.VFS.Shutdown()
if host.Unmount() {
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err != nil
}) {
t.Fatalf("mountpoint %q didn't disappear after unmount - continuing anyway", r.mntDir)
}
return nil
}
return errors.New("host unmount failed")
}
// Wait for the filesystem to become ready, checking the file
// system didn't blow up before starting
select {
case err := <-r.unmountRes:
require.NoError(t, err)
case <-time.After(time.Second * 3):
}
// Wait for the mount point to be available on Windows
// On Windows the Init signal comes slightly before the mount is ready
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err == nil
}) {
t.Errorf("mountpoint %q didn't became available on mount", r.mntDir)
}
r.vfs = fsys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}

View File

@@ -1,6 +1,6 @@
// Test Cache filesystem interface
// +build !plan9
// +build !plan9,!js
// +build !race
package cache_test

View File

@@ -1,6 +1,6 @@
// Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9
// +build plan9 js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
// +build !race
package cache_test

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cache

View File

@@ -1333,7 +1333,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.base.Rmdir(ctx, dir)
}
// Purge all files in the root and the root directory
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
@@ -1344,12 +1344,12 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// As a result it removes not only composite chunker files with their
// active chunks but also all hidden temporary chunks in the directory.
//
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
do := f.base.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do(ctx)
return do(ctx, dir)
}
// Remove an object (chunks and metadata, if any)

View File

@@ -427,18 +427,18 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.Fs.Rmdir(ctx, f.cipher.EncryptDirName(dir))
}
// Purge all files in the root and the root directory
// Purge all files in the directory specified
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
do := f.Fs.Features().Purge
if do == nil {
return fs.ErrorCantPurge
}
return do(ctx)
return do(ctx, dir)
}
// Copy src to this remote using server side copy operations.

View File

@@ -37,6 +37,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
@@ -69,7 +70,7 @@ const (
// 1<<18 is the minimum size supported by the Google uploader, and there is no maximum.
minChunkSize = 256 * fs.KibiByte
defaultChunkSize = 8 * fs.MebiByte
partialFields = "id,name,size,md5Checksum,trashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails"
partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails"
listRGrouping = 50 // number of IDs to search at once when using ListR
listRInputBuffer = 1000 // size of input buffer when using ListR
)
@@ -192,13 +193,7 @@ func init() {
log.Fatalf("Failed to configure team drive: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nSetting your own is recommended.\nSee https://rclone.org/drive/#making-your-own-client-id for how to create your own.\nIf you leave this blank, it will use an internal key which is low performance.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nSetting your own is recommended.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.",
Examples: []fs.OptionExample{{
@@ -224,9 +219,6 @@ Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
Note that if this is blank, the first time rclone runs it will fill it
in with the ID of the root folder.
`,
}, {
Name: "service_account_file",
@@ -289,6 +281,11 @@ commands (copy, sync, etc), and with all other commands too.`,
Default: false,
Help: "Only show files that are in the trash.\nThis will show trashed files in their original directory structure.",
Advanced: true,
}, {
Name: "starred_only",
Default: false,
Help: "Only show files that are starred.",
Advanced: true,
}, {
Name: "formats",
Default: "",
@@ -350,12 +347,9 @@ date is used.`,
Help: "Size of listing chunk 100-1000. 0 to disable.",
Advanced: true,
}, {
Name: "impersonate",
Default: "",
Help: `Impersonate this user when using a service account.
Note that if this is used then "root_folder_id" will be ignored.
`,
Name: "impersonate",
Default: "",
Help: `Impersonate this user when using a service account.`,
Advanced: true,
}, {
Name: "alternate_export",
@@ -494,7 +488,7 @@ If this flag is set then rclone will ignore shortcut files completely.
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// Don't encode / as it's a valid name character in drive.
Default: encoder.EncodeInvalidUtf8,
}},
}}...),
})
// register duplicate MIME types first
@@ -524,6 +518,7 @@ type Options struct {
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
StarredOnly bool `config:"starred_only"`
Extensions string `config:"formats"`
ExportExtensions string `config:"export_formats"`
ImportExtensions string `config:"import_formats"`
@@ -707,6 +702,7 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
}
query = append(query, q)
}
// Search with sharedWithMe will always return things listed in "Shared With Me" (without any parents)
// We must not filter with parent when we try list "ROOT" with drive-shared-with-me
// If we need to list file inside those shared folders, we must search it without sharedWithMe
@@ -718,8 +714,16 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
if parentsQuery.Len() > 1 {
_, _ = parentsQuery.WriteString(" or ")
}
if f.opt.SharedWithMe && dirID == f.rootFolderID {
_, _ = parentsQuery.WriteString("sharedWithMe=true")
if (f.opt.SharedWithMe || f.opt.StarredOnly) && dirID == f.rootFolderID {
if f.opt.SharedWithMe {
_, _ = parentsQuery.WriteString("sharedWithMe=true")
}
if f.opt.StarredOnly {
if f.opt.SharedWithMe {
_, _ = parentsQuery.WriteString(" and ")
}
_, _ = parentsQuery.WriteString("starred=true")
}
} else {
_, _ = fmt.Fprintf(parentsQuery, "'%s' in parents", dirID)
}
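To make the root-folder query construction above concrete, here is a standalone sketch (buildRootQuery is a hypothetical helper, not rclone code) of the filter fragment produced when the two flags are combined; with both set it yields "sharedWithMe=true and starred=true". Following rclone's usual backend flag naming, the new option should surface as --drive-starred-only.
package main
import (
	"bytes"
	"fmt"
)
// buildRootQuery mirrors the branch above for the root folder case,
// where the sharedWithMe/starred clauses replace the parent filter.
func buildRootQuery(sharedWithMe, starredOnly bool) string {
	var q bytes.Buffer
	if sharedWithMe {
		_, _ = q.WriteString("sharedWithMe=true")
	}
	if starredOnly {
		if sharedWithMe {
			_, _ = q.WriteString(" and ")
		}
		_, _ = q.WriteString("starred=true")
	}
	return q.String()
}
func main() {
	fmt.Println(buildRootQuery(true, true))
}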
@@ -929,55 +933,32 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
if !config.Confirm(false) {
return nil
}
client, err := createOAuthClient(opt, name, m)
f, err := newFs(name, "", m)
if err != nil {
return errors.Wrap(err, "config team drive failed to create oauth client")
}
svc, err := drive.New(client)
if err != nil {
return errors.Wrap(err, "config team drive failed to make drive client")
return errors.Wrap(err, "failed to make Fs to list teamdrives")
}
fmt.Printf("Fetching team drive list...\n")
var driveIDs, driveNames []string
listTeamDrives := svc.Teamdrives.List().PageSize(100)
listFailed := false
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
err = newPacer(opt).Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(err)
})
if err != nil {
fmt.Printf("Listing team drives failed: %v\n", err)
listFailed = true
break
}
for _, drive := range teamDrives.TeamDrives {
driveIDs = append(driveIDs, drive.Id)
driveNames = append(driveNames, drive.Name)
}
if teamDrives.NextPageToken == "" {
break
}
listTeamDrives.PageToken(teamDrives.NextPageToken)
teamDrives, err := f.listTeamDrives(ctx)
if err != nil {
return err
}
var driveID string
if !listFailed && len(driveIDs) == 0 {
if len(teamDrives) == 0 {
fmt.Printf("No team drives found in your account")
} else {
driveID = config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
return nil
}
var driveIDs, driveNames []string
for _, teamDrive := range teamDrives {
driveIDs = append(driveIDs, teamDrive.Id)
driveNames = append(driveNames, teamDrive.Name)
}
driveID := config.Choose("Enter a Team Drive ID", driveIDs, driveNames, true)
m.Set("team_drive", driveID)
m.Set("root_folder_id", "")
opt.TeamDriveID = driveID
opt.RootFolderID = ""
return nil
}
// newPacer makes a pacer configured for drive
func newPacer(opt *Options) *fs.Pacer {
return fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst)))
}
// getClient makes an http client according to the options
func getClient(opt *Options) *http.Client {
t := fshttp.NewTransportCustom(fs.Config, func(t *http.Transport) {
@@ -1060,9 +1041,11 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// newFs partially constructs Fs from the path
//
// It constructs a valid Fs but doesn't attempt to figure out whether
// it is a file or a directory.
func newFs(name, path string, m configmap.Mapper) (*Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -1092,7 +1075,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
name: name,
root: root,
opt: *opt,
pacer: newPacer(opt),
pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(opt.PacerMinSleep), pacer.Burst(opt.PacerBurst))),
m: m,
grouping: listRGrouping,
listRmu: new(sync.Mutex),
@@ -1122,25 +1105,26 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
}
}
// If impersonating warn about root_folder_id if set and unset it
//
// This is because rclone v1.51 and v1.52 cached root_folder_id when
// using impersonate which they shouldn't have done. It is possible
// someone is using impersonate and root_folder_id in which case this
// breaks their workflow. There isn't an easy way around that.
if opt.RootFolderID != "" && opt.RootFolderID != "appDataFolder" && opt.Impersonate != "" {
fs.Logf(f, "Ignoring cached root_folder_id when using --drive-impersonate")
opt.RootFolderID = ""
return f, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
f, err := newFs(name, path, m)
if err != nil {
return nil, err
}
// set root folder for a team drive or query the user root folder
if opt.RootFolderID != "" {
// override root folder if set or cached in the config and not impersonating
f.rootFolderID = opt.RootFolderID
// Set the root folder ID
if f.opt.RootFolderID != "" {
// use root_folder ID if set
f.rootFolderID = f.opt.RootFolderID
} else if f.isTeamDrive {
// otherwise use team_drive if set
f.rootFolderID = f.opt.TeamDriveID
} else {
// Look up the root ID and cache it in the config
// otherwise look up the actual root ID
rootID, err := f.getRootID()
if err != nil {
if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 {
@@ -1152,27 +1136,24 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
}
}
f.rootFolderID = rootID
// Don't cache the root folder ID if impersonating
if opt.Impersonate == "" {
m.Set("root_folder_id", rootID)
}
fs.Debugf(f, "root_folder_id = %q - save this in the config to speed up startup", rootID)
}
f.dirCache = dircache.New(root, f.rootFolderID, f)
f.dirCache = dircache.New(f.root, f.rootFolderID, f)
// Parse extensions
if opt.Extensions != "" {
if opt.ExportExtensions != defaultExportExtensions {
if f.opt.Extensions != "" {
if f.opt.ExportExtensions != defaultExportExtensions {
return nil, errors.New("only one of 'formats' and 'export_formats' can be specified")
}
opt.Extensions, opt.ExportExtensions = "", opt.Extensions
f.opt.Extensions, f.opt.ExportExtensions = "", f.opt.Extensions
}
f.exportExtensions, _, err = parseExtensions(opt.ExportExtensions, defaultExportExtensions)
f.exportExtensions, _, err = parseExtensions(f.opt.ExportExtensions, defaultExportExtensions)
if err != nil {
return nil, err
}
_, f.importMimeTypes, err = parseExtensions(opt.ImportExtensions)
_, f.importMimeTypes, err = parseExtensions(f.opt.ImportExtensions)
if err != nil {
return nil, err
}
@@ -1181,7 +1162,7 @@ func NewFs(name, path string, m configmap.Mapper) (fs.Fs, error) {
err = f.dirCache.FindRoot(ctx, false)
if err != nil {
// Assume it is a file
newRoot, remote := dircache.SplitPath(root)
newRoot, remote := dircache.SplitPath(f.root)
tempF := *f
tempF.dirCache = dircache.New(newRoot, f.rootFolderID, &tempF)
tempF.root = newRoot
@@ -1662,7 +1643,7 @@ func (s listRSlices) Less(i, j int) bool {
// In each cycle it will read up to grouping entries from the in channel without blocking.
// If an error occurs it will be sent to the out channel and the function will return. Once the in channel is closed,
// nil is sent to the out channel and the function returns.
func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listREntry, out chan<- error, cb func(fs.DirEntry) error) {
func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listREntry, out chan<- error, cb func(fs.DirEntry) error, sendJob func(listREntry)) {
var dirs []string
var paths []string
var grouping int32
@@ -1741,26 +1722,19 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE
// https://issuetracker.google.com/issues/149522397
if len(dirs) > 1 && !foundItems {
if atomic.SwapInt32(&f.grouping, 1) != 1 {
fs.Logf(f, "Disabling ListR to work around bug in drive as multi listing (%d) returned no entries", len(dirs))
fs.Debugf(f, "Disabling ListR to work around bug in drive as multi listing (%d) returned no entries", len(dirs))
}
var recycled = make([]listREntry, len(dirs))
f.listRmu.Lock()
for i := range dirs {
recycled[i] = listREntry{id: dirs[i], path: paths[i]}
// Requeue the jobs
job := listREntry{id: dirs[i], path: paths[i]}
sendJob(job)
// Make a note of these dirs - if they all turn
// out to be empty then we can re-enable grouping
f.listRempties[dirs[i]] = struct{}{}
}
f.listRmu.Unlock()
// recycle these in the background so we don't deadlock
// the listR runners if they all get here
wg.Add(len(recycled))
go func() {
for _, entry := range recycled {
in <- entry
}
fs.Debugf(f, "Recycled %d entries", len(recycled))
}()
fs.Debugf(f, "Recycled %d entries", len(dirs))
}
// If using a grouping of 1 and dir was empty then check to see if it
// is part of the group that caused grouping to be disabled.
@@ -1774,7 +1748,7 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE
// empty so must have made a mistake
if len(f.listRempties) == 0 {
if atomic.SwapInt32(&f.grouping, listRGrouping) != listRGrouping {
fs.Logf(f, "Re-enabling ListR as previous detection was in error")
fs.Debugf(f, "Re-enabling ListR as previous detection was in error")
}
}
}
@@ -1829,21 +1803,33 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
overflow := []listREntry{}
listed := 0
cb := func(entry fs.DirEntry) error {
// Send a job to the input channel if not closed. If the job
// won't fit then queue it in the overflow slice.
//
// This will not block if the channel is full.
sendJob := func(job listREntry) {
mu.Lock()
defer mu.Unlock()
if d, isDir := entry.(*fs.Dir); isDir && in != nil {
job := listREntry{actualID(d.ID()), d.Remote()}
select {
case in <- job:
// Adding the wg after we've entered the item is
// safe here because we know when the callback
// is called we are holding a waitgroup.
wg.Add(1)
default:
overflow = append(overflow, job)
}
if in == nil {
return
}
wg.Add(1)
select {
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
}
}
// Send the entry to the caller, queueing any directories as new jobs
cb := func(entry fs.DirEntry) error {
if d, isDir := entry.(*fs.Dir); isDir {
job := listREntry{actualID(d.ID()), d.Remote()}
sendJob(job)
}
mu.Lock()
defer mu.Unlock()
listed++
return list.Add(entry)
}
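The shape of sendJob is what fixes the "panic: send on closed channel" described above: the WaitGroup slot is reserved before the non-blocking send, and a full channel spills the job into the overflow slice instead of blocking a runner. A stripped-down sketch of the same pattern, with hypothetical names:
package main
import (
	"fmt"
	"sync"
)
type job struct{ id int }
func main() {
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		in       = make(chan job, 2)
		overflow []job
	)
	sendJob := func(j job) {
		mu.Lock()
		defer mu.Unlock()
		wg.Add(1) // count the job before trying to queue it
		select {
		case in <- j:
		default:
			// channel full: queue without blocking; the dispatcher
			// requeues overflow (and re-adds to wg) later
			overflow = append(overflow, j)
			wg.Add(-1)
		}
	}
	for i := 0; i < 5; i++ {
		sendJob(job{i})
	}
	fmt.Printf("%d queued, %d overflowed\n", len(in), len(overflow))
}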
@@ -1852,7 +1838,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
in <- listREntry{directoryID, dir}
for i := 0; i < fs.Config.Checkers; i++ {
go f.listRRunner(ctx, &wg, in, out, cb)
go f.listRRunner(ctx, &wg, in, out, cb, sendJob)
}
go func() {
// wait until the all directories are processed
@@ -2218,10 +2204,9 @@ func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
})
}
// Rmdir deletes a directory
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// purgeCheck removes the directory dir; if check is set then it
// refuses to do so if the directory has anything in it
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
root := path.Join(f.root, dir)
dc := f.dirCache
directoryID, err := dc.FindDir(ctx, dir, false)
@@ -2234,20 +2219,22 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.delete(ctx, shortcutID, f.opt.UseTrash)
}
var trashedFiles = false
found, err := f.list(ctx, []string{directoryID}, "", false, false, true, func(item *drive.File) bool {
if !item.Trashed {
fs.Debugf(dir, "Rmdir: contains file: %q", item.Name)
return true
if check {
found, err := f.list(ctx, []string{directoryID}, "", false, false, true, func(item *drive.File) bool {
if !item.Trashed {
fs.Debugf(dir, "Rmdir: contains file: %q", item.Name)
return true
}
fs.Debugf(dir, "Rmdir: contains trashed file: %q", item.Name)
trashedFiles = true
return false
})
if err != nil {
return err
}
if found {
return errors.Errorf("directory not empty")
}
fs.Debugf(dir, "Rmdir: contains trashed file: %q", item.Name)
trashedFiles = true
return false
})
if err != nil {
return err
}
if found {
return errors.Errorf("directory not empty")
}
if root != "" {
// trash the directory if it had trashed files
@@ -2257,6 +2244,8 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
if err != nil {
return err
}
} else if check {
return errors.New("can't purge root directory")
}
f.dirCache.FlushDir(dir)
if err != nil {
@@ -2265,6 +2254,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return nil
}
// Rmdir deletes a directory
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision of the object storage system
func (f *Fs) Precision() time.Duration {
return time.Millisecond
@@ -2282,13 +2278,13 @@ func (f *Fs) Precision() time.Duration {
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
var srcObj *baseObject
ext := ""
readDescription := false
isDoc := false
switch src := src.(type) {
case *Object:
srcObj = &src.baseObject
case *documentObject:
srcObj, ext = &src.baseObject, src.ext()
readDescription = true
isDoc = true
case *linkObject:
srcObj, ext = &src.baseObject, src.ext()
default:
@@ -2296,6 +2292,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, fs.ErrorCantCopy
}
// Look to see if there is an existing object before we remove
// the extension from the remote
existingObject, _ := f.NewObject(ctx, remote)
// Adjust the remote name to be without the extension if we
// are about to create a doc.
if ext != "" {
if !strings.HasSuffix(remote, ext) {
fs.Debugf(src, "Can't copy - not same document type")
@@ -2304,15 +2306,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
remote = remote[:len(remote)-len(ext)]
}
// Look to see if there is an existing object
existingObject, _ := f.NewObject(ctx, remote)
createInfo, err := f.createFileInfo(ctx, remote, src.ModTime(ctx))
if err != nil {
return nil, err
}
if readDescription {
if isDoc {
// preserve the description on copy for docs
info, err := f.getFile(actualID(srcObj.id), "description")
if err != nil {
@@ -2344,6 +2343,22 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, err
}
// Google docs aren't preserving their mod time after copy, so set them explicitly
// See: https://github.com/rclone/rclone/issues/4517
//
// FIXME remove this when google fixes the problem!
if isDoc {
// A short sleep is needed here in order to make the
// change effective, without it the change is ignored. This is
// probably some eventual consistency nastiness.
sleepTime := 2 * time.Second
fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime)
time.Sleep(sleepTime)
err = newObject.SetModTime(ctx, src.ModTime(ctx))
if err != nil {
return nil, err
}
}
if existingObject != nil {
err = existingObject.Remove(ctx)
if err != nil {
@@ -2358,23 +2373,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
if f.root == "" {
return errors.New("can't purge root directory")
}
func (f *Fs) Purge(ctx context.Context, dir string) error {
if f.opt.TrashedOnly {
return errors.New("Can't purge with --drive-trashed-only. Use delete if you want to selectively delete files")
}
rootID, err := f.dirCache.RootID(ctx, false)
if err != nil {
return err
}
err = f.delete(ctx, shortcutID(rootID), f.opt.UseTrash)
f.dirCache.ResetRoot()
if err != nil {
return err
}
return nil
return f.purgeCheck(ctx, dir, false)
}
// CleanUp empties the trash
@@ -2877,6 +2880,98 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
return dstFs.newObjectWithInfo(dstPath, info)
}
// List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) {
drives = []*drive.TeamDrive{}
listTeamDrives := f.svc.Teamdrives.List().PageSize(100)
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
err = f.pacer.Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(err)
})
if err != nil {
return drives, errors.Wrap(err, "listing team drives failed")
}
drives = append(drives, teamDrives.TeamDrives...)
if teamDrives.NextPageToken == "" {
break
}
listTeamDrives.PageToken(teamDrives.NextPageToken)
}
return drives, nil
}
type unTrashResult struct {
Untrashed int
Errors int
}
func (r unTrashResult) Error() string {
return fmt.Sprintf("%d errors while untrashing - see log", r.Errors)
}
// Restore the trashed files from dir, directoryID recursing if needed
func (f *Fs) unTrash(ctx context.Context, dir string, directoryID string, recurse bool) (r unTrashResult, err error) {
directoryID = actualID(directoryID)
fs.Debugf(dir, "finding trash to restore in directory %q", directoryID)
_, err = f.list(ctx, []string{directoryID}, "", false, false, true, func(item *drive.File) bool {
remote := path.Join(dir, item.Name)
if item.ExplicitlyTrashed {
fs.Infof(remote, "restoring %q", item.Id)
if operations.SkipDestructive(ctx, remote, "restore") {
return false
}
update := drive.File{
ForceSendFields: []string{"Trashed"}, // necessary to set false value
Trashed: false,
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Files.Update(item.Id, &update).
SupportsAllDrives(true).
Fields("trashed").
Do()
return f.shouldRetry(err)
})
if err != nil {
err = errors.Wrap(err, "failed to restore")
r.Errors++
fs.Errorf(remote, "%v", err)
} else {
r.Untrashed++
}
}
if recurse && item.MimeType == "application/vnd.google-apps.folder" {
if !isShortcutID(item.Id) {
rNew, _ := f.unTrash(ctx, remote, item.Id, recurse)
r.Untrashed += rNew.Untrashed
r.Errors += rNew.Errors
}
}
return false
})
if err != nil {
err = errors.Wrap(err, "failed to list directory")
r.Errors++
fs.Errorf(dir, "%v", err)
}
if r.Errors != 0 {
return r, r
}
return r, nil
}
// Untrash dir
func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTrashResult, err error) {
directoryID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
r.Errors++
return r, err
}
return f.unTrash(ctx, dir, directoryID, true)
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@@ -2928,6 +3023,55 @@ authenticated with "drive2:" can't read files from "drive:".
Opts: map[string]string{
"target": "optional target remote for the shortcut destination",
},
}, {
Name: "drives",
Short: "List the shared drives available to this account",
Long: `This command lists the shared drives (teamdrives) available to this
account.
Usage:
rclone backend drives drive:
This will return a JSON list of objects like this
[
{
"id": "0ABCDEF-01234567890",
"kind": "drive#teamDrive",
"name": "My Drive"
},
{
"id": "0ABCDEFabcdefghijkl",
"kind": "drive#teamDrive",
"name": "Test Drive"
}
]
`,
}, {
Name: "untrash",
Short: "Untrash files and directories",
Long: `This command untrashes all the files and directories in the directory
passed in recursively.
Usage:
This takes an optional directory to untrash, which makes it easier to
use via the API.
rclone backend untrash drive:directory
rclone backend -i untrash drive:directory subdir
Use the -i flag to see what would be restored before restoring it.
Result:
{
"Untrashed": 17,
"Errors": 0
}
`,
}}
// Command the backend to run a named command
@@ -2991,6 +3135,14 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
}
}
return f.makeShortcut(ctx, arg[0], dstFs, arg[1])
case "drives":
return f.listTeamDrives(ctx)
case "untrash":
dir := ""
if len(arg) > 0 {
dir = arg[0]
}
return f.unTrashDir(ctx, dir, true)
default:
return nil, fs.ErrorCommandNotFound
}

View File

@@ -10,13 +10,16 @@ import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/pkg/errors"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"google.golang.org/api/drive/v3"
@@ -361,6 +364,50 @@ func (f *Fs) InternalTestShortcuts(t *testing.T) {
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/UnTrash
func (f *Fs) InternalTestUnTrash(t *testing.T) {
ctx := context.Background()
// Make some objects, one in a subdir
contents := random.String(100)
file1 := fstest.NewItem("trashDir/toBeTrashed", contents, time.Now())
_, obj1 := fstests.PutTestContents(ctx, t, f, &file1, contents, false)
file2 := fstest.NewItem("trashDir/subdir/toBeTrashed", contents, time.Now())
_, _ = fstests.PutTestContents(ctx, t, f, &file2, contents, false)
// Check objects
checkObjects := func() {
fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{
file1,
file2,
}, []string{
"trashDir/subdir",
}, f.Precision())
}
checkObjects()
// Make sure we are using the trash
require.Equal(t, true, f.opt.UseTrash)
// Remove the object and the dir
require.NoError(t, obj1.Remove(ctx))
require.NoError(t, f.Purge(ctx, "trashDir/subdir"))
// Check objects gone
fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{}, []string{}, f.Precision())
// Restore the object and directory
r, err := f.unTrashDir(ctx, "trashDir", true)
require.NoError(t, err)
assert.Equal(t, unTrashResult{Errors: 0, Untrashed: 2}, r)
// Check objects restored
checkObjects()
// Remove the test dir
require.NoError(t, f.Purge(ctx, "trashDir"))
}
func (f *Fs) InternalTest(t *testing.T) {
// These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) {
@@ -376,6 +423,7 @@ func (f *Fs) InternalTest(t *testing.T) {
})
})
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -125,13 +125,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Dropbox App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Dropbox App Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "chunk_size",
Help: fmt.Sprintf(`Upload chunk size. (< %v).
@@ -161,7 +155,7 @@ memory. It can be set smaller if you are tight on memory.`, maxChunkSize),
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}
@@ -611,10 +605,9 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return err
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// purgeCheck removes the root directory; if check is set then it
// refuses to do so if the directory has anything in it
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error) {
root := path.Join(f.slashRoot, dir)
// can't remove root
@@ -622,31 +615,33 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return errors.New("can't remove root directory")
}
// check directory exists
_, err := f.getDirMetadata(root)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if check {
// check directory exists
_, err = f.getDirMetadata(root)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
root = f.opt.Enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
Recursive: false,
}
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
var res *files.ListFolderResult
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(res.Entries) != 0 {
return errors.New("directory not empty")
root = f.opt.Enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
Recursive: false,
}
if root == "/" {
arg.Path = "" // Specify root folder as empty string
}
var res *files.ListFolderResult
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(res.Entries) != 0 {
return errors.New("directory not empty")
}
}
// remove it
@@ -657,6 +652,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return err
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision returns the precision
func (f *Fs) Precision() time.Duration {
return time.Second
@@ -719,15 +721,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) (err error) {
// Let dropbox delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{
Path: f.opt.Enc.FromStandardPath(f.slashRoot),
})
return shouldRetry(err)
})
return err
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
return f.purgeCheck(ctx, dir, false)
}
// Move src to this remote using server side move operations.

View File

@@ -6,6 +6,7 @@ import (
"net/http"
"regexp"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
@@ -28,6 +29,20 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
// Detect this error which the integration tests provoke
// error HTTP error 403 (403 Forbidden) returned body: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}"
//
// https://1fichier.com/api.html
//
// file/ls.cgi is limited :
//
// Warning (can be changed in case of abuses) :
// List all files of the account is limited to 1 request per hour.
// List folders is limited to 5 000 results and 1 request per folder per 30s.
if err != nil && strings.Contains(err.Error(), "Flood detected") {
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}

View File

@@ -88,13 +88,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "project_number",
Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
}, {
@@ -261,7 +255,7 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Default: (encoder.Base |
encoder.EncodeCrLf |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}

View File

@@ -21,7 +21,6 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/googlephotos/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
@@ -110,13 +109,7 @@ func init() {
`)
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "read_only",
Default: false,
Help: `Set to make the Google Photos backend read only.
@@ -139,7 +132,7 @@ you want to read the media.`,
Default: 2000,
Help: `Year limits the photos to be downloaded to those which are uploaded after the given year`,
Advanced: true,
}},
}}...),
})
}

View File

@@ -20,7 +20,6 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/swift"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
@@ -63,13 +62,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: append([]fs.Option{{
Name: config.ConfigClientID,
Help: "Hubic Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Hubic Client Secret\nLeave blank normally.",
}}, swift.SharedOptions...),
Options: append(oauthutil.SharedOptions, swift.SharedOptions...),
})
}

View File

@@ -353,7 +353,7 @@ func doAuthV1(ctx context.Context, srv *rest.Client, username, password string)
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
_, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
}
}
}
@@ -1070,8 +1070,8 @@ func (f *Fs) Precision() time.Duration {
}
// Purge deletes all the files and the container
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// copyOrMoves copies or moves directories or files depending on the method parameter

View File

@@ -4,6 +4,7 @@ package local
import (
"context"
"os"
"syscall"
"github.com/pkg/errors"
@@ -15,6 +16,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var s syscall.Statfs_t
err := syscall.Statfs(f.root, &s)
if err != nil {
if os.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return nil, errors.Wrap(err, "failed to read disk usage")
}
bs := int64(s.Bsize) // nolint: unconvert

View File

@@ -1,4 +1,4 @@
// +build windows plan9
// +build windows plan9 js
package local

View File

@@ -1,4 +1,4 @@
// +build !windows,!plan9
// +build !windows,!plan9,!js
package local

View File

@@ -144,6 +144,17 @@ the OS zeros the file. However sparse files may be undesirable as they
cause disk fragmentation and can be slow to work with.`,
Default: false,
Advanced: true,
}, {
Name: "no_set_modtime",
Help: `Disable setting modtime
Normally rclone updates modification time of files after they are done
uploading. This can cause permissions issues on Linux platforms when
the user rclone is running as does not own the file uploaded, such as
when copying to a CIFS mount owned by another user. If this option is
enabled, rclone will no longer update the modtime after copying a file.`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -166,6 +177,7 @@ type Options struct {
CaseSensitive bool `config:"case_sensitive"`
CaseInsensitive bool `config:"case_insensitive"`
NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -542,6 +554,10 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Precision of the file system
func (f *Fs) Precision() (precision time.Duration) {
if f.opt.NoSetModTime {
return fs.ModTimeNotSupported
}
f.precisionOk.Do(func() {
f.precision = f.readPrecision()
})
@@ -600,20 +616,25 @@ func (f *Fs) readPrecision() (precision time.Duration) {
return
}
// Purge deletes all the files and directories
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
fi, err := f.lstat(f.root)
func (f *Fs) Purge(ctx context.Context, dir string) error {
dir = f.localPath(dir)
fi, err := f.lstat(dir)
if err != nil {
// already purged
if os.IsNotExist(err) {
return fs.ErrorDirNotFound
}
return err
}
if !fi.Mode().IsDir() {
return errors.Errorf("can't purge non directory: %q", f.root)
return errors.Errorf("can't purge non directory: %q", dir)
}
return os.RemoveAll(f.root)
return os.RemoveAll(dir)
}
// Move src to this remote using server side move operations.
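The same signature change (Purge(ctx) becoming Purge(ctx, dir)) recurs across chunker, crypt, drive, dropbox, jottacloud, mailru and onedrive in this comparison. A minimal sketch of the calling side under the new interface, assuming the fs package at this revision:
package example
import (
	"context"
	"github.com/rclone/rclone/fs"
)
// purgeDir looks up the optional Purge feature and passes the
// directory explicitly, as callers must after this change.
func purgeDir(ctx context.Context, f fs.Fs, dir string) error {
	do := f.Features().Purge
	if do == nil {
		return fs.ErrorCantPurge // caller would fall back to List+Remove
	}
	return do(ctx, dir)
}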
@@ -878,6 +899,9 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if o.fs.opt.NoSetModTime {
return nil
}
var err error
if o.translatedLink {
err = lChtimes(o.path, modTime, modTime)

View File

@@ -1162,12 +1162,12 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeWithCheck(ctx, dir, true, "rmdir")
}
// Purge deletes all the files and the root directory
// Purge deletes all the files in the directory
// Optional interface: Only implement this if you have a way of deleting
// all the files quicker than just running Remove() on the result of List()
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
// fs.Debugf(f, ">>> Purge")
return f.purgeWithCheck(ctx, "", false, "purge")
return f.purgeWithCheck(ctx, dir, false, "purge")
}
// purgeWithCheck() removes the root directory.

View File

@@ -669,13 +669,13 @@ func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck("", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(dir, false)
}
// move a file or folder (srcFs, srcRemote, info) to (f, dstRemote)

View File

@@ -410,3 +410,28 @@ func (i *Item) GetParentReference() *ItemReference {
func (i *Item) IsRemote() bool {
return i.RemoteItem != nil
}
// User details for each version
type User struct {
Email string `json:"email"`
ID string `json:"id"`
DisplayName string `json:"displayName"`
}
// LastModifiedBy for each version
type LastModifiedBy struct {
User User `json:"user"`
}
// Version info
type Version struct {
ID string `json:"id"`
LastModifiedDateTime time.Time `json:"lastModifiedDateTime"`
Size int `json:"size"`
LastModifiedBy LastModifiedBy `json:"lastModifiedBy"`
}
// VersionsResponse is returned from /versions
type VersionsResponse struct {
Versions []Version `json:"value"`
}

View File

@@ -14,6 +14,7 @@ import (
"path"
"strconv"
"strings"
"sync"
"time"
"github.com/pkg/errors"
@@ -26,6 +27,8 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
@@ -238,13 +241,7 @@ func init() {
m.Set(configDriveType, rootItem.ParentReference.DriveType)
config.SaveConfig()
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Microsoft App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Microsoft App Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "chunk_size",
Help: `Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
@@ -284,6 +281,23 @@ different Onedrives. Note that this isn't enabled by default
because it isn't easy to tell if it will work between any two
configurations.`,
Advanced: true,
}, {
Name: "no_versions",
Default: false,
Help: `Remove all versions on modifying operations
Onedrive for business creates versions when rclone uploads a new file
overwriting an existing one and when it sets the modification time.

These versions take up space out of the quota.
This flag checks for versions after file upload and setting
modification time and removes all but the last version.
**NB** Onedrive personal can't currently delete versions so don't use
this flag there.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -330,7 +344,7 @@ configurations.`,
encoder.EncodeRightSpace |
encoder.EncodeWin |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}
@@ -341,6 +355,7 @@ type Options struct {
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
NoVersions bool `config:"no_versions"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -1073,13 +1088,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// Move src to this remote using server side move operations.
@@ -1275,6 +1290,73 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return result.Link.WebURL, nil
}
// CleanUp deletes all the hidden files.
func (f *Fs) CleanUp(ctx context.Context) error {
token := make(chan struct{}, fs.Config.Checkers)
var wg sync.WaitGroup
err := walk.Walk(ctx, f, "", true, -1, func(path string, entries fs.DirEntries, err error) error {
err = entries.ForObjectError(func(obj fs.Object) error {
o, ok := obj.(*Object)
if !ok {
return errors.New("internal error: not a onedrive object")
}
wg.Add(1)
token <- struct{}{}
go func() {
defer func() {
<-token
wg.Done()
}()
err := o.deleteVersions(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove versions: %v", err)
}
}()
return nil
})
wg.Wait()
return err
})
return err
}
// Finds and removes any old versions for o
func (o *Object) deleteVersions(ctx context.Context) error {
opts := newOptsCall(o.id, "GET", "/versions")
var versions api.VersionsResponse
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &versions)
return shouldRetry(resp, err)
})
if err != nil {
return err
}
if len(versions.Versions) < 2 {
return nil
}
for _, version := range versions.Versions[1:] {
err = o.deleteVersion(ctx, version.ID)
if err != nil {
return err
}
}
return nil
}
// deleteVersion removes the version with the given ID for o
func (o *Object) deleteVersion(ctx context.Context, ID string) error {
if operations.SkipDestructive(ctx, fmt.Sprintf("%s of %s", ID, o.remote), "delete version") {
return nil
}
fs.Infof(o, "removing version %q", ID)
opts := newOptsCall(o.id, "DELETE", "/versions/"+ID)
opts.NoResponse = true
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
})
}
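CleanUp above caps concurrency with a buffered channel used as a counting semaphore. The same idiom in isolation (a sketch, with the worker count hard-coded instead of fs.Config.Checkers, and "sync" imported; ids and process stand in for the versioned objects and the deleteVersions call):

token := make(chan struct{}, 8) // at most 8 goroutines at once
var wg sync.WaitGroup
for _, id := range ids {
	wg.Add(1)
	token <- struct{}{} // acquire a slot, blocks when all slots are taken
	go func(id string) {
		defer func() { <-token; wg.Done() }() // release the slot
		process(id)
	}(id)
}
wg.Wait()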
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -1438,6 +1520,13 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
})
// Remove versions if required
if o.fs.opt.NoVersions {
err := o.deleteVersions(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove versions: %v", err)
}
}
return info, err
}
@@ -1744,6 +1833,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// If updating the file then remove versions
if o.fs.opt.NoVersions && o.hasMetaData {
err = o.deleteVersions(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove versions: %v", err)
}
}
return o.setMetaData(info)
}
@@ -1840,6 +1937,7 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = &Object{}
_ fs.IDer = &Object{}
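Since Fs now satisfies fs.CleanUpper, rclone cleanup remote: walks the remote and strips all but the newest version from every file; deleteVersion goes through operations.SkipDestructive, so -i/--dry-run only logs what would be removed.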

View File

@@ -506,13 +506,13 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// Return an Object from a path

View File

@@ -103,13 +103,7 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Pcloud App Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Pcloud App Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
@@ -128,10 +122,20 @@ func init() {
Name: "hostname",
Help: `Hostname to connect to.
This is normally set when rclone initially does the oauth connection.`,
This is normally set when rclone initially does the oauth connection;
however, you will need to set it by hand if you are using remote config
with rclone authorize.
`,
Default: defaultHostname,
Advanced: true,
}},
Examples: []fs.OptionExample{{
Value: defaultHostname,
Help: "Original/US region",
}, {
Value: "eapi.pcloud.com",
Help: "EU region",
}},
}}...),
})
}
@@ -671,13 +675,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// CleanUp empties the trash
@@ -820,14 +824,19 @@ func (f *Fs) linkDir(ctx context.Context, dirID string, expire fs.Duration) (str
}
func (f *Fs) linkFile(ctx context.Context, path string, expire fs.Duration) (string, error) {
obj, err := f.NewObject(ctx, path)
if err != nil {
return "", err
}
o := obj.(*Object)
opts := rest.Opts{
Method: "POST",
Path: "/getfilepublink",
Parameters: url.Values{},
}
var result api.PubLinkResult
opts.Parameters.Set("path", path)
err := f.pacer.Call(func() (bool, error) {
opts.Parameters.Set("fileid", fileIDtoNumber(o.id))
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
@@ -840,11 +849,6 @@ func (f *Fs) linkFile(ctx context.Context, path string, expire fs.Duration) (str
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
err := f.dirCache.FindRoot(ctx, false)
if err != nil {
return "", err
}
dirID, err := f.dirCache.FindDir(ctx, remote, false)
if err == fs.ErrorDirNotFound {
return f.linkFile(ctx, remote, expire)

View File

@@ -609,13 +609,13 @@ func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder

View File

@@ -458,10 +458,9 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
return err
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error) {
// defer log.Trace(f, "dir=%v", dir)("err=%v", &err)
root := strings.Trim(path.Join(f.root, dir), "/")
@@ -478,18 +477,20 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
}
dirID := atoi(directoryID)
// check directory empty
var children []putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files: %d", dirID)
children, _, err = f.client.Files.List(ctx, dirID)
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(children) != 0 {
return errors.New("directory not empty")
if check {
// check directory empty
var children []putio.File
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files: %d", dirID)
children, _, err = f.client.Files.List(ctx, dirID)
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
}
if len(children) != 0 {
return errors.New("directory not empty")
}
}
// remove it
@@ -502,35 +503,26 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return err
}
// Rmdir deletes the container
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return f.purgeCheck(ctx, dir, true)
}
// Precision returns the precision
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) (err error) {
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
// defer log.Trace(f, "")("err=%v", &err)
if f.root == "" {
return errors.New("can't purge root directory")
}
rootIDs, err := f.dirCache.RootID(ctx, false)
if err != nil {
return err
}
rootID := atoi(rootIDs)
// Let putio delete the filesystem tree
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "deleting file: %d", rootID)
err = f.client.Files.Delete(ctx, rootID)
return shouldRetry(err)
})
f.dirCache.ResetRoot()
return err
return f.purgeCheck(ctx, dir, false)
}
// Copy src to this remote using server side copy operations.

View File

@@ -1,7 +1,7 @@
// Package qingstor provides an interface to QingStor object storage
// Home: https://www.qingcloud.com/
// +build !plan9
// +build !plan9,!js
package qingstor

View File

@@ -1,6 +1,6 @@
// Test QingStor filesystem interface
// +build !plan9
// +build !plan9,!js
package qingstor

View File

@@ -1,6 +1,6 @@
// Build for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9
// +build plan9 js
package qingstor

View File

@@ -1,6 +1,6 @@
// Upload object to QingStor
// +build !plan9
// +build !plan9,!js
package qingstor

View File

@@ -1,18 +1,6 @@
// Package s3 provides an interface to Amazon S3 object storage
package s3
// FIXME need to prevent anything but ListDir working for s3://
/*
Progress of port to aws-sdk
* Don't really need o.meta at all?
What happens if you CTRL-C a multipart upload
* get an incomplete upload
* disappears when you delete the bucket
*/
import (
"bytes"
"context"
@@ -214,107 +202,191 @@ func init() {
Help: "Endpoint for IBM COS S3 API.\nSpecify if using an IBM COS On Premise.",
Provider: "IBMCOS",
Examples: []fs.OptionExample{{
Value: "s3-api.us-geo.objectstorage.softlayer.net",
Value: "s3.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Endpoint",
}, {
Value: "s3-api.dal.us-geo.objectstorage.softlayer.net",
Value: "s3.dal.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Dallas Endpoint",
}, {
Value: "s3-api.wdc-us-geo.objectstorage.softlayer.net",
Value: "s3.wdc.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Washington DC Endpoint",
}, {
Value: "s3-api.sjc-us-geo.objectstorage.softlayer.net",
Value: "s3.sjc.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region San Jose Endpoint",
}, {
Value: "s3-api.us-geo.objectstorage.service.networklayer.com",
Value: "s3.private.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Private Endpoint",
}, {
Value: "s3-api.dal-us-geo.objectstorage.service.networklayer.com",
Value: "s3.private.dal.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Dallas Private Endpoint",
}, {
Value: "s3-api.wdc-us-geo.objectstorage.service.networklayer.com",
Value: "s3.private.wdc.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region Washington DC Private Endpoint",
}, {
Value: "s3-api.sjc-us-geo.objectstorage.service.networklayer.com",
Value: "s3.private.sjc.us.cloud-object-storage.appdomain.cloud",
Help: "US Cross Region San Jose Private Endpoint",
}, {
Value: "s3.us-east.objectstorage.softlayer.net",
Value: "s3.us-east.cloud-object-storage.appdomain.cloud",
Help: "US Region East Endpoint",
}, {
Value: "s3.us-east.objectstorage.service.networklayer.com",
Value: "s3.private.us-east.cloud-object-storage.appdomain.cloud",
Help: "US Region East Private Endpoint",
}, {
Value: "s3.us-south.objectstorage.softlayer.net",
Value: "s3.us-south.cloud-object-storage.appdomain.cloud",
Help: "US Region South Endpoint",
}, {
Value: "s3.us-south.objectstorage.service.networklayer.com",
Value: "s3.private.us-south.cloud-object-storage.appdomain.cloud",
Help: "US Region South Private Endpoint",
}, {
Value: "s3.eu-geo.objectstorage.softlayer.net",
Value: "s3.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Endpoint",
}, {
Value: "s3.fra-eu-geo.objectstorage.softlayer.net",
Value: "s3.fra.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Frankfurt Endpoint",
}, {
Value: "s3.mil-eu-geo.objectstorage.softlayer.net",
Value: "s3.mil.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Milan Endpoint",
}, {
Value: "s3.ams-eu-geo.objectstorage.softlayer.net",
Value: "s3.ams.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Amsterdam Endpoint",
}, {
Value: "s3.eu-geo.objectstorage.service.networklayer.com",
Value: "s3.private.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Private Endpoint",
}, {
Value: "s3.fra-eu-geo.objectstorage.service.networklayer.com",
Value: "s3.private.fra.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Frankfurt Private Endpoint",
}, {
Value: "s3.mil-eu-geo.objectstorage.service.networklayer.com",
Value: "s3.private.mil.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Milan Private Endpoint",
}, {
Value: "s3.ams-eu-geo.objectstorage.service.networklayer.com",
Value: "s3.private.ams.eu.cloud-object-storage.appdomain.cloud",
Help: "EU Cross Region Amsterdam Private Endpoint",
}, {
Value: "s3.eu-gb.objectstorage.softlayer.net",
Value: "s3.eu-gb.cloud-object-storage.appdomain.cloud",
Help: "Great Britain Endpoint",
}, {
Value: "s3.eu-gb.objectstorage.service.networklayer.com",
Value: "s3.private.eu-gb.cloud-object-storage.appdomain.cloud",
Help: "Great Britain Private Endpoint",
}, {
Value: "s3.ap-geo.objectstorage.softlayer.net",
Value: "s3.eu-de.cloud-object-storage.appdomain.cloud",
Help: "EU Region DE Endpoint",
}, {
Value: "s3.private.eu-de.cloud-object-storage.appdomain.cloud",
Help: "EU Region DE Private Endpoint",
}, {
Value: "s3.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Endpoint",
}, {
Value: "s3.tok-ap-geo.objectstorage.softlayer.net",
Value: "s3.tok.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Tokyo Endpoint",
}, {
Value: "s3.hkg-ap-geo.objectstorage.softlayer.net",
Value: "s3.hkg.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional HongKong Endpoint",
}, {
Value: "s3.seo-ap-geo.objectstorage.softlayer.net",
Value: "s3.seo.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Seoul Endpoint",
}, {
Value: "s3.ap-geo.objectstorage.service.networklayer.com",
Value: "s3.private.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Private Endpoint",
}, {
Value: "s3.tok-ap-geo.objectstorage.service.networklayer.com",
Value: "s3.private.tok.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Tokyo Private Endpoint",
}, {
Value: "s3.hkg-ap-geo.objectstorage.service.networklayer.com",
Value: "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional HongKong Private Endpoint",
}, {
Value: "s3.seo-ap-geo.objectstorage.service.networklayer.com",
Value: "s3.private.seo.ap.cloud-object-storage.appdomain.cloud",
Help: "APAC Cross Regional Seoul Private Endpoint",
}, {
Value: "s3.mel01.objectstorage.softlayer.net",
Value: "s3.jp-tok.cloud-object-storage.appdomain.cloud",
Help: "APAC Region Japan Endpoint",
}, {
Value: "s3.private.jp-tok.cloud-object-storage.appdomain.cloud",
Help: "APAC Region Japan Private Endpoint",
}, {
Value: "s3.au-syd.cloud-object-storage.appdomain.cloud",
Help: "APAC Region Australia Endpoint",
}, {
Value: "s3.private.au-syd.cloud-object-storage.appdomain.cloud",
Help: "APAC Region Australia Private Endpoint",
}, {
Value: "s3.ams03.cloud-object-storage.appdomain.cloud",
Help: "Amsterdam Single Site Endpoint",
}, {
Value: "s3.private.ams03.cloud-object-storage.appdomain.cloud",
Help: "Amsterdam Single Site Private Endpoint",
}, {
Value: "s3.che01.cloud-object-storage.appdomain.cloud",
Help: "Chennai Single Site Endpoint",
}, {
Value: "s3.private.che01.cloud-object-storage.appdomain.cloud",
Help: "Chennai Single Site Private Endpoint",
}, {
Value: "s3.mel01.cloud-object-storage.appdomain.cloud",
Help: "Melbourne Single Site Endpoint",
}, {
Value: "s3.mel01.objectstorage.service.networklayer.com",
Value: "s3.private.mel01.cloud-object-storage.appdomain.cloud",
Help: "Melbourne Single Site Private Endpoint",
}, {
Value: "s3.tor01.objectstorage.softlayer.net",
Value: "s3.osl01.cloud-object-storage.appdomain.cloud",
Help: "Oslo Single Site Endpoint",
}, {
Value: "s3.private.osl01.cloud-object-storage.appdomain.cloud",
Help: "Oslo Single Site Private Endpoint",
}, {
Value: "s3.tor01.cloud-object-storage.appdomain.cloud",
Help: "Toronto Single Site Endpoint",
}, {
Value: "s3.tor01.objectstorage.service.networklayer.com",
Value: "s3.private.tor01.cloud-object-storage.appdomain.cloud",
Help: "Toronto Single Site Private Endpoint",
}, {
Value: "s3.seo01.cloud-object-storage.appdomain.cloud",
Help: "Seoul Single Site Endpoint",
}, {
Value: "s3.private.seo01.cloud-object-storage.appdomain.cloud",
Help: "Seoul Single Site Private Endpoint",
}, {
Value: "s3.mon01.cloud-object-storage.appdomain.cloud",
Help: "Montreal Single Site Endpoint",
}, {
Value: "s3.private.mon01.cloud-object-storage.appdomain.cloud",
Help: "Montreal Single Site Private Endpoint",
}, {
Value: "s3.mex01.cloud-object-storage.appdomain.cloud",
Help: "Mexico Single Site Endpoint",
}, {
Value: "s3.private.mex01.cloud-object-storage.appdomain.cloud",
Help: "Mexico Single Site Private Endpoint",
}, {
Value: "s3.sjc04.cloud-object-storage.appdomain.cloud",
Help: "San Jose Single Site Endpoint",
}, {
Value: "s3.private.sjc04.cloud-object-storage.appdomain.cloud",
Help: "San Jose Single Site Private Endpoint",
}, {
Value: "s3.mil01.cloud-object-storage.appdomain.cloud",
Help: "Milan Single Site Endpoint",
}, {
Value: "s3.private.mil01.cloud-object-storage.appdomain.cloud",
Help: "Milan Single Site Private Endpoint",
}, {
Value: "s3.hkg02.cloud-object-storage.appdomain.cloud",
Help: "Hong Kong Single Site Endpoint",
}, {
Value: "s3.private.hkg02.cloud-object-storage.appdomain.cloud",
Help: "Hong Kong Single Site Private Endpoint",
}, {
Value: "s3.par01.cloud-object-storage.appdomain.cloud",
Help: "Paris Single Site Endpoint",
}, {
Value: "s3.private.par01.cloud-object-storage.appdomain.cloud",
Help: "Paris Single Site Private Endpoint",
}, {
Value: "s3.sng01.cloud-object-storage.appdomain.cloud",
Help: "Singapore Single Site Endpoint",
}, {
Value: "s3.private.sng01.cloud-object-storage.appdomain.cloud",
Help: "Singapore Single Site Private Endpoint",
}},
}, {
// oss endpoints: https://help.aliyun.com/document_detail/31837.html
@@ -853,6 +925,31 @@ for data integrity checking but can cause long delays for large files
to start uploading.`,
Default: false,
Advanced: true,
}, {
Name: "shared_credentials_file",
Help: `Path to the shared credentials file
If env_auth = true then rclone can use a shared credentials file.
If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.
Linux/OSX: "$HOME/.aws/credentials"
Windows: "%USERPROFILE%\.aws\credentials"
`,
Advanced: true,
}, {
Name: "profile",
Help: `Profile to use in the shared credentials file
If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.
If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.
`,
Advanced: true,
}, {
Name: "session_token",
Help: "An AWS session token",
@@ -923,6 +1020,15 @@ In Ceph, this can be increased with the "rgw list buckets max chunk" option.
`,
Default: 1000,
Advanced: true,
}, {
Name: "no_check_bucket",
Help: `If set don't attempt to check the bucket exists or create it
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
`,
Default: false,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -994,6 +1100,8 @@ type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
MaxUploadParts int64 `config:"max_upload_parts"`
DisableChecksum bool `config:"disable_checksum"`
SharedCredentialsFile string `config:"shared_credentials_file"`
Profile string `config:"profile"`
SessionToken string `config:"session_token"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
@@ -1001,6 +1109,7 @@ type Options struct {
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int64 `config:"list_chunk"`
NoCheckBucket bool `config:"no_check_bucket"`
Enc encoder.MultiEncoder `config:"encoding"`
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
@@ -1156,7 +1265,10 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
// A SharedCredentialsProvider retrieves credentials
// from the current user's home directory. It checks
// AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE too.
&credentials.SharedCredentialsProvider{},
&credentials.SharedCredentialsProvider{
Filename: opt.SharedCredentialsFile, // If empty will look for "AWS_SHARED_CREDENTIALS_FILE" env variable.
Profile: opt.Profile, // If empty will look for "AWS_PROFILE" env var or "default" if not set.
},
// Pick up IAM role if we're in an ECS task
defaults.RemoteCredProvider(*def.Config, def.Handlers),
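A hedged sketch of what the two new options feed into, using the aws-sdk-go v1 credentials API directly (file path and profile name invented):

provider := &credentials.SharedCredentialsProvider{
	Filename: "/home/user/.aws/credentials", // shared_credentials_file
	Profile:  "work",                        // profile
}
creds := credentials.NewCredentials(provider)
value, err := creds.Get() // resolves the named profile from the file
fmt.Println(value.AccessKeyID, err)

Leaving either field empty falls back to the environment variables described in the option help.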
@@ -1786,6 +1898,9 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// makeBucket creates the bucket if it doesn't exist
func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
if f.opt.NoCheckBucket {
return nil
}
return f.cache.Create(bucket, func() error {
req := s3.CreateBucketInput{
Bucket: &bucket,
@@ -2084,6 +2199,58 @@ if not.
"lifetime": "Lifetime of the active copy in days",
"description": "The optional description for the job.",
},
}, {
Name: "list-multipart-uploads",
Short: "List the unfinished multipart uploads",
Long: `This command lists the unfinished multipart uploads in JSON format.
rclone backend list-multipart-uploads s3:bucket/path/to/object
It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.
You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.
{
"rclone": [
{
"Initiated": "2020-06-26T14:20:36Z",
"Initiator": {
"DisplayName": "XXX",
"ID": "arn:aws:iam::XXX:user/XXX"
},
"Key": "KEY",
"Owner": {
"DisplayName": null,
"ID": "XXX"
},
"StorageClass": "STANDARD",
"UploadId": "XXX"
}
],
"rclone-1000files": [],
"rclone-dst": []
}
`,
}, {
Name: "cleanup",
Short: "Remove unfinished multipart uploads.",
Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
Note that you can use -i/--dry-run with this command to see what it
would do.
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
},
}}
// Command the backend to run a named command
@@ -2158,11 +2325,137 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
return out, err
}
return out, nil
case "list-multipart-uploads":
return f.listMultipartUploadsAll(ctx)
case "cleanup":
maxAge := 24 * time.Hour
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
if err != nil {
return nil, errors.Wrap(err, "bad max-age")
}
}
return nil, f.cleanUp(ctx, maxAge)
default:
return nil, fs.ErrorCommandNotFound
}
}
// listMultipartUploads lists all outstanding multipart uploads for (bucket, key)
//
// Note that rather lazily we treat key as a prefix so it matches
// directories and objects. This could surprise the user if they ask
// for "dir" and it returns "dirKey"
func (f *Fs) listMultipartUploads(ctx context.Context, bucket, key string) (uploads []*s3.MultipartUpload, err error) {
var (
keyMarker *string
uploadIDMarker *string
)
uploads = []*s3.MultipartUpload{}
for {
req := s3.ListMultipartUploadsInput{
Bucket: &bucket,
MaxUploads: &f.opt.ListChunk,
KeyMarker: keyMarker,
UploadIdMarker: uploadIDMarker,
Prefix: &key,
}
var resp *s3.ListMultipartUploadsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListMultipartUploads(&req)
return f.shouldRetry(err)
})
if err != nil {
return nil, errors.Wrapf(err, "list multipart uploads bucket %q key %q", bucket, key)
}
uploads = append(uploads, resp.Uploads...)
if !aws.BoolValue(resp.IsTruncated) {
break
}
keyMarker = resp.NextKeyMarker
uploadIDMarker = resp.NextUploadIdMarker
}
return uploads, nil
}
func (f *Fs) listMultipartUploadsAll(ctx context.Context) (uploadsMap map[string][]*s3.MultipartUpload, err error) {
uploadsMap = make(map[string][]*s3.MultipartUpload)
bucket, directory := f.split("")
if bucket != "" {
uploads, err := f.listMultipartUploads(ctx, bucket, directory)
if err != nil {
return uploadsMap, err
}
uploadsMap[bucket] = uploads
return uploadsMap, nil
}
entries, err := f.listBuckets(ctx)
if err != nil {
return uploadsMap, err
}
for _, entry := range entries {
bucket := entry.Remote()
uploads, listErr := f.listMultipartUploads(ctx, bucket, "")
if listErr != nil {
err = listErr
fs.Errorf(f, "%v", err)
}
uploadsMap[bucket] = uploads
}
return uploadsMap, err
}
// cleanUpBucket removes all pending multipart uploads for a given bucket over the age of maxAge
func (f *Fs) cleanUpBucket(ctx context.Context, bucket string, maxAge time.Duration, uploads []*s3.MultipartUpload) (err error) {
fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge)
for _, upload := range uploads {
if upload.Initiated != nil && upload.Key != nil && upload.UploadId != nil {
age := time.Since(*upload.Initiated)
what := fmt.Sprintf("pending multipart upload for bucket %q key %q dated %v (%v ago)", bucket, *upload.Key, upload.Initiated, age)
if age > maxAge {
fs.Infof(f, "removing %s", what)
if operations.SkipDestructive(ctx, what, "remove pending upload") {
continue
}
req := s3.AbortMultipartUploadInput{
Bucket: &bucket,
UploadId: upload.UploadId,
Key: upload.Key,
}
_, abortErr := f.c.AbortMultipartUpload(&req)
if abortErr != nil {
err = errors.Wrapf(abortErr, "failed to remove %s", what)
fs.Errorf(f, "%v", err)
}
} else {
fs.Debugf(f, "ignoring %s", what)
}
}
}
return err
}
// cleanUp removes all pending multipart uploads older than maxAge
func (f *Fs) cleanUp(ctx context.Context, maxAge time.Duration) (err error) {
uploadsMap, err := f.listMultipartUploadsAll(ctx)
if err != nil {
return err
}
for bucket, uploads := range uploadsMap {
cleanErr := f.cleanUpBucket(ctx, bucket, maxAge, uploads)
if cleanErr != nil {
fs.Errorf(f, "Failed to cleanup bucket %q: %v", bucket, cleanErr)
err = cleanErr
}
}
return err
}
// CleanUp removes all pending multipart uploads older than 24 hours
func (f *Fs) CleanUp(ctx context.Context) (err error) {
return f.cleanUp(ctx, 24*time.Hour)
}
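With fs.CleanUpper in place, rclone cleanup s3:bucket aborts pending multipart uploads older than 24 hours, while rclone backend cleanup -o max-age=7d s3:bucket picks a different cutoff; operations.SkipDestructive means -i/--dry-run previews the removals without aborting anything.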
// ------------------------------------------------------------
// Fs returns the parent Fs
@@ -2237,11 +2530,6 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
if awsErr.StatusCode() == http.StatusNotFound {
// NotFound indicates bucket was OK
// NoSuchBucket would be returned if bucket was bad
if awsErr.Code() == "NotFound" {
o.fs.cache.MarkOK(bucket)
}
return fs.ErrorObjectNotFound
}
}
@@ -2781,6 +3069,7 @@ var (
_ fs.PutStreamer = &Fs{}
_ fs.ListRer = &Fs{}
_ fs.Commander = &Fs{}
_ fs.CleanUpper = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.GetTierer = &Object{}

View File

@@ -584,29 +584,38 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return nil
}
// Rmdir removes the directory or library if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
libraryName, dirPath := f.splitPath(dir)
libraryID, err := f.getLibraryID(ctx, libraryName)
if err != nil {
return err
}
directoryEntries, err := f.getDirectoryEntries(ctx, libraryID, dirPath, false)
if err != nil {
return err
}
if len(directoryEntries) > 0 {
return fs.ErrorDirectoryNotEmpty
if check {
directoryEntries, err := f.getDirectoryEntries(ctx, libraryID, dirPath, false)
if err != nil {
return err
}
if len(directoryEntries) > 0 {
return fs.ErrorDirectoryNotEmpty
}
}
if dirPath == "" || dirPath == "/" {
return f.deleteLibrary(ctx, libraryID)
}
return f.deleteDir(ctx, libraryID, dirPath)
}
// Rmdir removes the directory or library if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// ==================== Optional Interface fs.ListRer ====================
// ListR lists the objects and directories of the Fs starting
@@ -893,33 +902,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// ==================== Optional Interface fs.Purger ====================
// Purge all files in the root and the root directory
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context) error {
if f.libraryName == "" {
return errors.New("Cannot delete from the root of the server. Please select a library")
}
libraryID, err := f.getLibraryID(ctx, f.libraryName)
if err != nil {
return err
}
if f.rootDirectory == "" {
// Delete library
err = f.deleteLibrary(ctx, libraryID)
if err != nil {
return err
}
return nil
}
err = f.deleteDir(ctx, libraryID, f.rootDirectory)
if err != nil {
return err
}
return nil
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// ==================== Optional Interface fs.CleanUpper ====================

View File

@@ -164,6 +164,18 @@ Home directory can be found in a shared folder called "home"
Default: false,
Help: "Set to skip any symlinks and any other non regular files.",
Advanced: true,
}, {
Name: "subsystem",
Default: "sftp",
Help: "Specifies the SSH2 subsystem on the remote host.",
Advanced: true,
}, {
Name: "server_command",
Default: "",
Help: `Specifies the path or command to run an sftp server on the remote host.
The subsystem option is ignored when server_command is defined.`,
Advanced: true,
}},
}
fs.Register(fsi)
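A hedged example of the new options in an rclone.conf stanza (host, user and server path invented); note from the help above that subsystem is ignored once server_command is set:

[myserver]
type = sftp
host = sftp.example.com
user = me
server_command = sudo /usr/lib/openssh/sftp-server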
@@ -187,6 +199,8 @@ type Options struct {
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
}
// Fs stores the interface to the remote SFTP files
@@ -290,7 +304,7 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
if err != nil {
return nil, errors.Wrap(err, "couldn't connect SSH")
}
c.sftpClient, err = sftp.NewClient(c.sshClient)
c.sftpClient, err = f.newSftpClient(c.sshClient)
if err != nil {
_ = c.sshClient.Close()
return nil, errors.Wrap(err, "couldn't initialise SFTP")
@@ -299,6 +313,35 @@ func (f *Fs) sftpConnection() (c *conn, err error) {
return c, nil
}
// Creates a new SFTP client on conn, using the specified subsystem
// or sftp server, and zero or more option functions
func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.Client, error) {
s, err := conn.NewSession()
if err != nil {
return nil, err
}
pw, err := s.StdinPipe()
if err != nil {
return nil, err
}
pr, err := s.StdoutPipe()
if err != nil {
return nil, err
}
if f.opt.ServerCommand != "" {
if err := s.Start(f.opt.ServerCommand); err != nil {
return nil, err
}
} else {
if err := s.RequestSubsystem(f.opt.Subsystem); err != nil {
return nil, err
}
}
return sftp.NewClientPipe(pr, pw, opts...)
}
// Get an SFTP connection from the pool, or open a new one
func (f *Fs) getSftpConnection() (c *conn, err error) {
f.poolMu.Lock()

View File

@@ -853,8 +853,8 @@ func (f *Fs) Precision() time.Duration {
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// updateItem patches a file or folder

View File

@@ -895,12 +895,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
// Caution: Deleting a folder may orphan objects. It's important
// to remove the contents of the folder before you delete the
// folder. That's because removing a folder using DELETE does not
@@ -920,7 +920,7 @@ func (f *Fs) Purge(ctx context.Context) error {
if f.opt.HardDelete {
return fs.ErrorCantPurge
}
return f.purgeCheck(ctx, "", false)
return f.purgeCheck(ctx, dir, false)
}
// moveFile moves a file server side

View File

@@ -840,17 +840,21 @@ func (f *Fs) Precision() time.Duration {
return time.Nanosecond
}
// Purge deletes all the files and directories
// Purge deletes all the files in the directory
//
// Implemented here so we can make sure we delete directory markers
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
container, directory := f.split(dir)
if container == "" {
return fs.ErrorListBucketRequired
}
// Delete all the files including the directory markers
toBeDeleted := make(chan fs.Object, fs.Config.Transfers)
delErr := make(chan error, 1)
go func() {
delErr <- operations.DeleteFiles(ctx, toBeDeleted)
}()
err := f.list(f.rootContainer, f.rootDirectory, f.rootDirectory, f.rootContainer == "", true, true, func(entry fs.DirEntry) error {
err := f.list(container, directory, f.rootDirectory, false, true, true, func(entry fs.DirEntry) error {
if o, ok := entry.(*Object); ok {
toBeDeleted <- o
}
@@ -864,7 +868,7 @@ func (f *Fs) Purge(ctx context.Context) error {
if err != nil {
return err
}
return f.Rmdir(ctx, "")
return f.Rmdir(ctx, dir)
}
// Copy src to this remote using server side copy operations.
@@ -1111,13 +1115,18 @@ func min(x, y int64) int64 {
//
// if except is passed in then segments with that prefix won't be deleted
func (o *Object) removeSegments(except string) error {
segmentsContainer, prefix, err := o.getSegmentsDlo()
err = o.fs.listContainerRoot(segmentsContainer, prefix, "", false, true, true, func(remote string, object *swift.Object, isDirectory bool) error {
segmentsContainer, _, err := o.getSegmentsDlo()
if err != nil {
return err
}
except = path.Join(o.remote, except)
// fs.Debugf(o, "segmentsContainer %q prefix %q", segmentsContainer, prefix)
err = o.fs.listContainerRoot(segmentsContainer, o.remote, "", false, true, true, func(remote string, object *swift.Object, isDirectory bool) error {
if isDirectory {
return nil
}
if except != "" && strings.HasPrefix(remote, except) {
// fs.Debugf(o, "Ignoring current segment file %q in container %q", segmentsRoot+remote, segmentsContainer)
// fs.Debugf(o, "Ignoring current segment file %q in container %q", remote, segmentsContainer)
return nil
}
fs.Debugf(o, "Removing segment file %q in container %q", remote, segmentsContainer)

View File

@@ -162,13 +162,13 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return errs.Err()
}
// Purge all files in the root and the root directory
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context) error {
func (f *Fs) Purge(ctx context.Context, dir string) error {
for _, r := range f.upstreams {
if r.Features().Purge == nil {
return fs.ErrorCantPurge
@@ -180,7 +180,10 @@ func (f *Fs) Purge(ctx context.Context) error {
}
errs := Errors(make([]error, len(upstreams)))
multithread(len(upstreams), func(i int) {
err := upstreams[i].Features().Purge(ctx)
err := upstreams[i].Features().Purge(ctx, dir)
if errors.Cause(err) == fs.ErrorDirNotFound {
err = nil
}
errs[i] = errors.Wrap(err, upstreams[i].Name())
})
return errs.Err()
@@ -504,6 +507,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
}
for _, u := range f.upstreams {
usg, err := u.About(ctx)
if errors.Cause(err) == fs.ErrorDirNotFound {
continue
}
if err != nil {
return nil, err
}

View File

@@ -335,6 +335,9 @@ func (f *Fs) updateUsageCore(lock bool) error {
usage, err := f.RootFs.Features().About(ctx)
if err != nil {
f.cacheUpdate = false
if errors.Cause(err) == fs.ErrorDirNotFound {
err = nil
}
return err
}
if lock {

View File

@@ -686,8 +686,8 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// mkParentDir makes the parent of the native path dirPath if
// necessary and any directories above that
func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
// defer log.Trace(dirPath, "")("")
func (f *Fs) mkParentDir(ctx context.Context, dirPath string) (err error) {
// defer log.Trace(dirPath, "")("err=%v", &err)
// chop off trailing / if it exists
if strings.HasSuffix(dirPath, "/") {
dirPath = dirPath[:len(dirPath)-1]
@@ -699,6 +699,27 @@ func (f *Fs) mkParentDir(ctx context.Context, dirPath string) error {
return f.mkdir(ctx, parent)
}
// _dirExists - list dirPath to see if it exists
//
// dirPath should be a native path ending in a /
func (f *Fs) _dirExists(ctx context.Context, dirPath string) (exists bool) {
opts := rest.Opts{
Method: "PROPFIND",
Path: dirPath,
ExtraHeaders: map[string]string{
"Depth": "0",
},
}
var result api.Multistatus
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
})
return err == nil
}
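Depth: 0 asks the WebDAV server to report on the resource itself without enumerating its children, which keeps this existence probe cheap even on large directories.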
// low level mkdir, only makes the directory, doesn't attempt to create parents
func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
// We assume the root is already created
@@ -719,19 +740,29 @@ func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
return f.shouldRetry(resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// already exists
// Check if it already exists. The response code for this isn't
// defined in the RFC so the implementations vary wildly.
//
// owncloud returns 423/StatusLocked if the create is already in progress
if apiErr.StatusCode == http.StatusMethodNotAllowed || apiErr.StatusCode == http.StatusNotAcceptable || apiErr.StatusCode == http.StatusLocked {
return nil
}
// 4shared returns a 409/StatusConflict here which clashes
// horribly with the "intermediate paths don't exist" meaning. So
// check to see if the directory actually exists. This will correct other
// error codes too.
if f._dirExists(ctx, dirPath) {
return nil
}
}
return err
}
// mkdir makes the directory and parents using native paths
func (f *Fs) mkdir(ctx context.Context, dirPath string) error {
// defer log.Trace(dirPath, "")("")
err := f._mkdir(ctx, dirPath)
func (f *Fs) mkdir(ctx context.Context, dirPath string) (err error) {
// defer log.Trace(dirPath, "")("err=%v", &err)
err = f._mkdir(ctx, dirPath)
if apiErr, ok := err.(*api.Error); ok {
// parent does not exist so create it first then try again
if apiErr.StatusCode == http.StatusConflict {
@@ -868,13 +899,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return f.copyOrMove(ctx, src, remote, "COPY")
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// Move src to this remote using server side move operations.

View File

@@ -67,13 +67,7 @@ func init() {
return
}
},
Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Yandex Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Yandex Client Secret\nLeave blank normally.",
}, {
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
@@ -81,7 +75,7 @@ func init() {
// it doesn't seem worth making an exception for this
Default: (encoder.Display |
encoder.EncodeInvalidUtf8),
}},
}}...),
})
}
@@ -637,13 +631,13 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Purge deletes all the files and the container
// Purge deletes all the files in the directory
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, "", false)
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// copyOrMoves copies or moves directories or files depending on the method parameter

View File

@@ -45,7 +45,6 @@ var (
var osarches = []string{
"windows/386",
"windows/amd64",
"darwin/386",
"darwin/amd64",
"linux/386",
"linux/amd64",
@@ -67,6 +66,7 @@ var osarches = []string{
"plan9/386",
"plan9/amd64",
"solaris/amd64",
"js/wasm",
}
// Special environment flags for a given arch
@@ -321,14 +321,16 @@ func compileArch(version, goos, goarch, dir string) bool {
return false
}
if !*compileOnly {
artifacts := []string{buildZip(dir)}
// build a .deb and .rpm if appropriate
if goos == "linux" {
artifacts = append(artifacts, buildDebAndRpm(dir, version, goarch)...)
}
if *copyAs != "" {
for _, artifact := range artifacts {
run("ln", artifact, strings.Replace(artifact, "-"+version, "-"+*copyAs, 1))
if goos != "js" {
artifacts := []string{buildZip(dir)}
// build a .deb and .rpm if appropriate
if goos == "linux" {
artifacts = append(artifacts, buildDebAndRpm(dir, version, goarch)...)
}
if *copyAs != "" {
for _, artifact := range artifacts {
run("ln", artifact, strings.Replace(artifact, "-"+version, "-"+*copyAs, 1))
}
}
}
// tidy up

bin/test-repeat-vfs.sh Executable file
View File

@@ -0,0 +1,26 @@
#!/bin/bash
# Thrash the VFS tests
set -e
# Optionally set the iterations with the first parameter
iterations=${1:-100}
base=$(dirname $(dirname $(realpath "$0")))
echo ${base}
run=${base}/bin/test-repeat.sh
echo ${run}
testdirs="
vfs
vfs/vfscache
vfs/vfscache/writeback
vfs/vfscache/downloaders
cmd/cmount
"
for testdir in ${testdirs}; do
echo "Testing ${testdir} with ${iterations} iterations"
cd ${base}/${testdir}
${run} -i=${iterations} -race -tags=cmount
done
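Run it as bin/test-repeat-vfs.sh 50 to do 50 iterations per directory (it defaults to 100); since cmd/cmount is on the list, the cmount build tag and its cgofuse toolchain need to be available.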

View File

@@ -1,4 +1,4 @@
// +build !plan9
// +build !plan9,!js
package cachestats

View File

@@ -1,6 +1,6 @@
// Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9
// +build plan9 js
package cachestats

View File

@@ -1,6 +1,6 @@
// +build cmount
// +build cgo
// +build linux darwin freebsd windows
// +build linux darwin freebsd openbsd windows
package cmount
@@ -8,6 +8,7 @@ import (
"io"
"os"
"path"
"runtime"
"sync"
"time"
@@ -17,7 +18,6 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
const fhUnset = ^uint64(0)
@@ -32,10 +32,10 @@ type FS struct {
}
// NewFS makes a new FS
func NewFS(f fs.Fs) *FS {
func NewFS(VFS *vfs.VFS) *FS {
fsys := &FS{
VFS: vfs.New(f, &vfsflags.Opt),
f: f,
VFS: VFS,
f: VFS.Fs(),
ready: make(chan (struct{})),
}
return fsys
@@ -218,12 +218,24 @@ func (fsys *FS) Readdir(dirPath string,
itemsRead := -1
defer log.Trace(dirPath, "ofst=%d, fh=0x%X", ofst, fh)("items=%d, errc=%d", &itemsRead, &errc)
node, errc := fsys.getHandle(fh)
dir, errc := fsys.lookupDir(dirPath)
if errc != 0 {
return errc
}
items, err := node.Readdir(-1)
// We can't seek in directories and FUSE should know that so
// return an error if ofst is ever set.
if ofst > 0 {
// However openbsd doesn't seem to know this - perhaps a bug in its
// FUSE implementation or a bug in cgofuse?
// See: https://github.com/billziss-gh/cgofuse/issues/49
if runtime.GOOS == "openbsd" {
return 0
}
return -fuse.ESPIPE
}
nodes, err := dir.ReadDirAll()
if err != nil {
return translateError(err)
}
@@ -232,7 +244,7 @@ func (fsys *FS) Readdir(dirPath string,
// for getattr (but FUSE only looks at st_ino and the
// file-type bits of st_mode).
//
// FIXME If you call host.SetCapReaddirPlus() then WinFsp will
// We have called host.SetCapReaddirPlus() so WinFsp will
// use the full stat information - a useful optimization on
// Windows.
//
@@ -243,25 +255,19 @@ func (fsys *FS) Readdir(dirPath string,
// directory is read in a single readdir operation.
fill(".", nil, 0)
fill("..", nil, 0)
for _, item := range items {
node, ok := item.(vfs.Node)
if ok {
name := node.Name()
if len(name) > mountlib.MaxLeafSize {
fs.Errorf(dirPath, "Name too long (%d bytes) for FUSE, skipping: %s", len(name), name)
continue
}
if usingReaddirPlus {
// We have called host.SetCapReaddirPlus() so supply the stat information
var stat fuse.Stat_t
_ = fsys.stat(node, &stat) // not capable of returning an error
fill(name, &stat, 0)
} else {
fill(name, nil, 0)
}
for _, node := range nodes {
name := node.Name()
if len(name) > mountlib.MaxLeafSize {
fs.Errorf(dirPath, "Name too long (%d bytes) for FUSE, skipping: %s", len(name), name)
continue
}
// We have called host.SetCapReaddirPlus() so supply the stat information
// It is very cheap at this point so supply it regardless of OS capabilities
var stat fuse.Stat_t
_ = fsys.stat(node, &stat) // not capable of returning an error
fill(name, &stat, 0)
}
itemsRead = len(items)
itemsRead = len(nodes)
return 0
}
@@ -290,41 +296,65 @@ func (fsys *FS) Statfs(path string, stat *fuse.Statfs_t) (errc int) {
return 0
}
// Open opens a file
func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
defer log.Trace(path, "flags=0x%X", flags)("errc=%d, fh=0x%X", &errc, &fh)
// OpenEx opens a file
func (fsys *FS) OpenEx(path string, fi *fuse.FileInfo_t) (errc int) {
defer log.Trace(path, "flags=0x%X", fi.Flags)("errc=%d, fh=0x%X", &errc, &fi.Fh)
fi.Fh = fhUnset
// translate the fuse flags to os flags
flags = translateOpenFlags(flags)
flags := translateOpenFlags(fi.Flags)
handle, err := fsys.VFS.OpenFile(path, flags, 0777)
if err != nil {
return translateError(err), fhUnset
return translateError(err)
}
// FIXME add support for unknown length files setting direct_io
// See: https://github.com/billziss-gh/cgofuse/issues/38
// If size unknown then use direct io to read
if entry := handle.Node().DirEntry(); entry != nil && entry.Size() < 0 {
fi.DirectIo = true
}
return 0, fsys.openHandle(handle)
fi.Fh = fsys.openHandle(handle)
return 0
}
// Open opens a file
func (fsys *FS) Open(path string, flags int) (errc int, fh uint64) {
var fi = fuse.FileInfo_t{
Flags: flags,
}
errc = fsys.OpenEx(path, &fi)
return errc, fi.Fh
}
// CreateEx creates and opens a file.
func (fsys *FS) CreateEx(filePath string, mode uint32, fi *fuse.FileInfo_t) (errc int) {
defer log.Trace(filePath, "flags=0x%X, mode=0%o", fi.Flags, mode)("errc=%d, fh=0x%X", &errc, &fi.Fh)
fi.Fh = fhUnset
leaf, parentDir, errc := fsys.lookupParentDir(filePath)
if errc != 0 {
return errc
}
file, err := parentDir.Create(leaf, fi.Flags)
if err != nil {
return translateError(err)
}
// translate the fuse flags to os flags
flags := translateOpenFlags(fi.Flags) | os.O_CREATE
handle, err := file.Open(flags)
if err != nil {
return translateError(err)
}
fi.Fh = fsys.openHandle(handle)
return 0
}
// Create creates and opens a file.
func (fsys *FS) Create(filePath string, flags int, mode uint32) (errc int, fh uint64) {
defer log.Trace(filePath, "flags=0x%X, mode=0%o", flags, mode)("errc=%d, fh=0x%X", &errc, &fh)
leaf, parentDir, errc := fsys.lookupParentDir(filePath)
if errc != 0 {
return errc, fhUnset
var fi = fuse.FileInfo_t{
Flags: flags,
}
file, err := parentDir.Create(leaf, flags)
if err != nil {
return translateError(err), fhUnset
}
// translate the fuse flags to os flags
flags = translateOpenFlags(flags) | os.O_CREATE
handle, err := file.Open(flags)
if err != nil {
return translateError(err), fhUnset
}
return 0, fsys.openHandle(handle)
errc = fsys.CreateEx(filePath, mode, &fi)
return errc, fi.Fh
}
// Truncate truncates a file to size
@@ -595,3 +625,12 @@ func translateOpenFlags(inFlags int) (outFlags int) {
// NB O_SYNC isn't defined by fuse
return outFlags
}
// Make sure interfaces are satisfied
var (
_ fuse.FileSystemInterface = (*FS)(nil)
_ fuse.FileSystemOpenEx = (*FS)(nil)
//_ fuse.FileSystemChflags = (*FS)(nil)
//_ fuse.FileSystemSetcrtime = (*FS)(nil)
//_ fuse.FileSystemSetchgtime = (*FS)(nil)
)
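These blank-identifier assignments are compile-time assertions: if FS ever stops satisfying one of the listed cgofuse interfaces the package fails to build, rather than silently losing OpenEx/CreateEx dispatch at runtime.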

View File

@@ -4,33 +4,22 @@
// +build cmount
// +build cgo
// +build linux darwin freebsd windows
// +build linux darwin freebsd openbsd windows
package cmount
import (
"fmt"
"os"
"os/signal"
"runtime"
"syscall"
"strings"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
const (
// SetCapReaddirPlus informs the host that the hosted file system has the readdir-plus
// capability [Windows only]. A file system that has the readdir-plus capability can send
// full stat information during Readdir, thus avoiding extraneous Getattr calls.
usingReaddirPlus = runtime.GOOS == "windows"
)
func init() {
@@ -38,78 +27,95 @@ func init() {
if runtime.GOOS == "windows" {
name = "mount"
}
mountlib.NewMountCommand(name, false, Mount)
// Add mount to rc
mountlib.NewMountCommand(name, false, mount)
mountlib.AddRc("cmount", mount)
}
// mountOptions configures the options from the command line flags
func mountOptions(device string, mountpoint string) (options []string) {
func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.Options) (options []string) {
// Options
options = []string{
"-o", "fsname=" + device,
"-o", "subtype=rclone",
"-o", fmt.Sprintf("max_readahead=%d", mountlib.MaxReadAhead),
"-o", fmt.Sprintf("attr_timeout=%g", mountlib.AttrTimeout.Seconds()),
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
"-o", "atomic_o_trunc",
"-o", fmt.Sprintf("attr_timeout=%g", opt.AttrTimeout.Seconds()),
}
if mountlib.DebugFUSE {
if runtime.GOOS != "openbsd" {
options = append(options,
"-o", fmt.Sprintf("max_readahead=%d", opt.MaxReadAhead),
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
"-o", "atomic_o_trunc",
)
}
if opt.DebugFUSE {
options = append(options, "-o", "debug")
}
// OSX options
if runtime.GOOS == "darwin" {
if mountlib.NoAppleDouble {
if opt.NoAppleDouble {
options = append(options, "-o", "noappledouble")
}
if mountlib.NoAppleXattr {
if opt.NoAppleXattr {
options = append(options, "-o", "noapplexattr")
}
}
// determine if ExtraOptions already has an opt in
hasOption := func(optionName string) bool {
optionName += "="
for _, option := range opt.ExtraOptions {
if strings.HasPrefix(option, optionName) {
return true
}
}
return false
}
// Windows options
if runtime.GOOS == "windows" {
// These cause WinFsp to use the current user
options = append(options, "-o", "uid=-1")
options = append(options, "-o", "gid=-1")
if !hasOption("uid") {
options = append(options, "-o", "uid=-1")
}
if !hasOption("gid") {
options = append(options, "-o", "gid=-1")
}
options = append(options, "--FileSystemName=rclone")
}
if runtime.GOOS == "darwin" || runtime.GOOS == "windows" {
if mountlib.VolumeName != "" {
options = append(options, "-o", "volname="+mountlib.VolumeName)
if opt.VolumeName != "" {
options = append(options, "-o", "volname="+opt.VolumeName)
}
}
if mountlib.AllowNonEmpty {
if opt.AllowNonEmpty {
options = append(options, "-o", "nonempty")
}
if mountlib.AllowOther {
if opt.AllowOther {
options = append(options, "-o", "allow_other")
}
if mountlib.AllowRoot {
if opt.AllowRoot {
options = append(options, "-o", "allow_root")
}
if mountlib.DefaultPermissions {
if opt.DefaultPermissions {
options = append(options, "-o", "default_permissions")
}
if vfsflags.Opt.ReadOnly {
if VFS.Opt.ReadOnly {
options = append(options, "-o", "ro")
}
if mountlib.WritebackCache {
if opt.WritebackCache {
// FIXME? options = append(options, "-o", WritebackCache())
}
if mountlib.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(mountlib.DaemonTimeout.Seconds())))
if opt.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(opt.DaemonTimeout.Seconds())))
}
for _, option := range mountlib.ExtraOptions {
for _, option := range opt.ExtraOptions {
options = append(options, "-o", option)
}
for _, option := range mountlib.ExtraFlags {
for _, option := range opt.ExtraFlags {
options = append(options, option)
}
return options
@@ -135,35 +141,39 @@ func waitFor(fn func() bool) (ok bool) {
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) {
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
// Check the mountpoint - in Windows the mountpoint mustn't exist before the mount
if runtime.GOOS != "windows" {
fi, err := os.Stat(mountpoint)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "mountpoint")
return nil, nil, errors.Wrap(err, "mountpoint")
}
if !fi.IsDir() {
return nil, nil, nil, errors.New("mountpoint is not a directory")
return nil, nil, errors.New("mountpoint is not a directory")
}
}
// Create underlying FS
fsys := NewFS(f)
fsys := NewFS(VFS)
host := fuse.NewFileSystemHost(fsys)
if usingReaddirPlus {
host.SetCapReaddirPlus(true)
}
host.SetCapReaddirPlus(true) // only works on Windows
host.SetCapCaseInsensitive(f.Features().CaseInsensitive)
// Create options
options := mountOptions(f.Name()+":"+f.Root(), mountpoint)
options := mountOptions(VFS, f.Name()+":"+f.Root(), mountpoint, opt)
fs.Debugf(f, "Mounting with options: %q", options)
// Serve the mount point in the background returning error to errChan
errChan := make(chan error, 1)
go func() {
defer func() {
if r := recover(); r != nil {
errChan <- errors.Errorf("mount failed: %v", r)
}
}()
var err error
ok := host.Mount(mountpoint, options)
if !ok {
@@ -199,7 +209,7 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
select {
case err := <-errChan:
err = errors.Wrap(err, "mount stopped before calling Init")
return nil, nil, nil, err
return nil, nil, err
case <-fsys.ready:
}
@@ -214,53 +224,5 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
}
}
return fsys.VFS, errChan, unmount, nil
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error {
// Mount it
FS, errChan, unmount, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
// Note cgofuse unmounts the fs on SIGINT etc
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := FS.Root()
if err != nil {
fs.Errorf(f, "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
return errChan, unmount, nil
}
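The refactored entry point hands back the error channel and unmount function directly; signal handling and systemd notification move to mountlib. A minimal sketch of driving it from inside this package (assuming a configured fs.Fs in f; names as in the diff above):

    VFS := vfs.New(f, &vfsflags.Opt)
    errChan, unmount, err := mount(VFS, "/mnt/point", &mountlib.DefaultOpt)
    if err != nil {
        return errors.Wrap(err, "failed to mount FUSE fs")
    }
    defer func() { _ = unmount() }()
    err = <-errChan // blocks until the mount stops or fails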

View File

@@ -1,6 +1,6 @@
// +build cmount
// +build cgo
// +build linux darwin freebsd windows
// +build linux darwin freebsd openbsd windows
// +build !race !windows
// FIXME this doesn't work with the race detector under Windows either

View File

@@ -1,6 +1,6 @@
// Build for cmount for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build !linux,!darwin,!freebsd,!windows !cgo !cmount
// +build !linux,!darwin,!freebsd,!openbsd,!windows !cgo !cmount
package cmount

View File

@@ -22,19 +22,32 @@ func init() {
var commandDefinition = &cobra.Command{
Use: "dedupe [mode] remote:path",
Short: `Interactively find duplicate files and delete/rename them.`,
Short: `Interactively find duplicate filenames and delete/rename them.`,
Long: `
By default ` + "`" + `dedupe` + "`" + ` interactively finds duplicate files and offers to
delete all but one or rename them to be different. Only useful with
Google Drive which can have duplicate file names.
By default ` + "`dedupe`" + ` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different.
This is only useful with backends like Google Drive which can have
duplicate file names. It can be run on wrapping backends (eg crypt) if
they wrap a backend which supports duplicate file names.
In the first pass it will merge directories with the same name. It
will do this iteratively until all the identical directories have been
merged.
will do this iteratively until all the identically named directories
have been merged.
The ` + "`" + `dedupe` + "`" + ` command will delete all but one of any identical (same
md5sum) files it finds without confirmation. This means that for most
duplicated files the ` + "`" + `dedupe` + "`" + ` command will not be interactive.
In the second pass, for every group of duplicate file names, it will
delete all but one of the identical files it finds without
confirmation. This means that for most duplicated files the ` + "`dedupe`" + `
command will not be interactive.
` + "`dedupe`" + ` considers files to be identical if they have the
same hash. If the backend does not support hashes (eg crypt wrapping
Google Drive) then they will never be found to be identical. If you
use the ` + "`--size-only`" + ` flag then files will be considered
identical if they have the same size (any hash will be ignored). This
can be useful on crypt backends which do not support hashes.
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
@@ -52,26 +65,26 @@ Before - with duplicates
1744073 2016-03-05 16:22:38.104000000 two.txt
564374 2016-03-05 16:22:52.118000000 two.txt
Now the ` + "`" + `dedupe` + "`" + ` session
Now the ` + "`dedupe`" + ` session
$ rclone dedupe drive:dupes
2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
one.txt: Found 4 duplicates - deleting identical copies
one.txt: Deleting 2/3 identical duplicates (md5sum "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: Found 4 files with duplicate names
one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36")
one.txt: 2 duplicates remain
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
s/k/r> k
Enter the number of the file to keep> 1
one.txt: Deleted 1 extra copies
two.txt: Found 3 duplicates - deleting identical copies
two.txt: Found 3 files with duplicate names
two.txt: 3 duplicates remain
1: 564374 bytes, 2016-03-05 16:22:52.118000000, md5sum 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, md5sum 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, md5sum 851957f7fb6f0bc4ce76be966d336802
1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81
2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36
3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802
s) Skip and do nothing
k) Keep just one (choose which in next step)
r) Rename all to be different (by changing file.jpg to file-1.jpg)
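For example, to preview what deduplicating a crypt remote would do, matching on size only (a sketch using the flags described above):

    rclone dedupe --dry-run --size-only remote:path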

View File

@@ -19,6 +19,7 @@ import (
// Dir represents a directory entry
type Dir struct {
*vfs.Dir
fsys *FS
}
// Check interface satisfied
@@ -27,7 +28,7 @@ var _ fusefs.Node = (*Dir)(nil)
// Attr updates the attributes of a directory
func (d *Dir) Attr(ctx context.Context, a *fuse.Attr) (err error) {
defer log.Trace(d, "")("attr=%+v, err=%v", a, &err)
a.Valid = mountlib.AttrTimeout
a.Valid = d.fsys.opt.AttrTimeout
a.Gid = d.VFS().Opt.GID
a.Uid = d.VFS().Opt.UID
a.Mode = os.ModeDir | d.VFS().Opt.DirPerms
@@ -75,7 +76,7 @@ func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.Lo
if err != nil {
return nil, translateError(err)
}
resp.EntryValid = mountlib.AttrTimeout
resp.EntryValid = d.fsys.opt.AttrTimeout
// Check the mnode to see if it has a fuse Node cached
// We must return the same fuse nodes for vfs Nodes
node, ok := mnode.Sys().(fusefs.Node)
@@ -84,9 +85,9 @@ func (d *Dir) Lookup(ctx context.Context, req *fuse.LookupRequest, resp *fuse.Lo
}
switch x := mnode.(type) {
case *vfs.File:
node = &File{x}
node = &File{x, d.fsys}
case *vfs.Dir:
node = &Dir{x}
node = &Dir{x, d.fsys}
default:
panic("bad type")
}
@@ -139,7 +140,7 @@ func (d *Dir) Create(ctx context.Context, req *fuse.CreateRequest, resp *fuse.Cr
if err != nil {
return nil, nil, translateError(err)
}
node = &File{file}
node = &File{file, d.fsys}
file.SetSys(node) // cache the FUSE node for later
return node, &FileHandle{fh}, err
}
@@ -153,7 +154,7 @@ func (d *Dir) Mkdir(ctx context.Context, req *fuse.MkdirRequest) (node fusefs.No
if err != nil {
return nil, translateError(err)
}
node = &Dir{dir}
node = &Dir{dir, d.fsys}
dir.SetSys(node) // cache the FUSE node for later
return node, nil
}

View File

@@ -8,7 +8,6 @@ import (
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
)
@@ -16,6 +15,7 @@ import (
// File represents a file
type File struct {
*vfs.File
fsys *FS
}
// Check interface satisfied
@@ -24,7 +24,7 @@ var _ fusefs.Node = (*File)(nil)
// Attr fills out the attributes for the file
func (f *File) Attr(ctx context.Context, a *fuse.Attr) (err error) {
defer log.Trace(f, "")("a=%+v, err=%v", a, &err)
a.Valid = mountlib.AttrTimeout
a.Valid = f.fsys.opt.AttrTimeout
modTime := f.File.ModTime()
Size := uint64(f.File.Size())
Blocks := (Size + 511) / 512

View File

@@ -15,23 +15,24 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
// FS represents the top level filing system
type FS struct {
*vfs.VFS
f fs.Fs
f fs.Fs
opt *mountlib.Options
}
// Check interface satisfied
var _ fusefs.FS = (*FS)(nil)
// NewFS makes a new FS
func NewFS(f fs.Fs) *FS {
func NewFS(VFS *vfs.VFS, opt *mountlib.Options) *FS {
fsys := &FS{
VFS: vfs.New(f, &vfsflags.Opt),
f: f,
VFS: VFS,
f: VFS.Fs(),
opt: opt,
}
return fsys
}
@@ -43,7 +44,7 @@ func (f *FS) Root() (node fusefs.Node, err error) {
if err != nil {
return nil, translateError(err)
}
return &Dir{root}, nil
return &Dir{root, f}, nil
}
// Check interface satisfied

View File

@@ -6,74 +6,67 @@ package mount
import (
"fmt"
"os"
"os/signal"
"syscall"
"runtime"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
func init() {
mountlib.NewMountCommand("mount", false, Mount)
// Add mount to rc
mountlib.NewMountCommand("mount", false, mount)
mountlib.AddRc("mount", mount)
}
// mountOptions configures the options from the command line flags
func mountOptions(device string) (options []fuse.MountOption) {
func mountOptions(VFS *vfs.VFS, device string, opt *mountlib.Options) (options []fuse.MountOption) {
options = []fuse.MountOption{
fuse.MaxReadahead(uint32(mountlib.MaxReadAhead)),
fuse.MaxReadahead(uint32(opt.MaxReadAhead)),
fuse.Subtype("rclone"),
fuse.FSName(device),
fuse.VolumeName(mountlib.VolumeName),
fuse.VolumeName(opt.VolumeName),
// Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.WritebackCache(),
}
if mountlib.AsyncRead {
if opt.AsyncRead {
options = append(options, fuse.AsyncRead())
}
if mountlib.NoAppleDouble {
if opt.NoAppleDouble {
options = append(options, fuse.NoAppleDouble())
}
if mountlib.NoAppleXattr {
if opt.NoAppleXattr {
options = append(options, fuse.NoAppleXattr())
}
if mountlib.AllowNonEmpty {
if opt.AllowNonEmpty {
options = append(options, fuse.AllowNonEmptyMount())
}
if mountlib.AllowOther {
if opt.AllowOther {
options = append(options, fuse.AllowOther())
}
if mountlib.AllowRoot {
if opt.AllowRoot {
// options = append(options, fuse.AllowRoot())
fs.Errorf(nil, "Ignoring --allow-root. Support has been removed upstream - see https://github.com/bazil/fuse/issues/144 for more info")
}
if mountlib.DefaultPermissions {
if opt.DefaultPermissions {
options = append(options, fuse.DefaultPermissions())
}
if vfsflags.Opt.ReadOnly {
if VFS.Opt.ReadOnly {
options = append(options, fuse.ReadOnly())
}
if mountlib.WritebackCache {
if opt.WritebackCache {
options = append(options, fuse.WritebackCache())
}
if mountlib.DaemonTimeout != 0 {
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(mountlib.DaemonTimeout.Seconds()))))
if opt.DaemonTimeout != 0 {
options = append(options, fuse.DaemonTimeout(fmt.Sprint(int(opt.DaemonTimeout.Seconds()))))
}
if len(mountlib.ExtraOptions) > 0 {
if len(opt.ExtraOptions) > 0 {
fs.Errorf(nil, "-o/--option not supported with this FUSE backend")
}
if len(mountlib.ExtraFlags) > 0 {
if len(opt.ExtraFlags) > 0 {
fs.Errorf(nil, "--fuse-flag not supported with this FUSE backend")
}
return options
@@ -85,14 +78,25 @@ func mountOptions(device string) (options []fuse.MountOption) {
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(f.Name()+":"+f.Root())...)
if err != nil {
return nil, nil, nil, err
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) {
if runtime.GOOS == "darwin" {
fs.Logf(nil, "macOS users: please try \"rclone cmount\" as it will be the default in v1.54")
}
filesys := NewFS(f)
if opt.DebugFUSE {
fuse.Debug = func(msg interface{}) {
fs.Debugf("fuse", "%v", msg)
}
}
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
c, err := fuse.Mount(mountpoint, mountOptions(VFS, f.Name()+":"+f.Root(), opt)...)
if err != nil {
return nil, nil, err
}
filesys := NewFS(VFS, opt)
server := fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
@@ -109,7 +113,7 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
// check if the mount process has an error to report
<-c.Ready
if err := c.MountError; err != nil {
return nil, nil, nil, err
return nil, nil, err
}
unmount := func() error {
@@ -118,63 +122,5 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
return fuse.Unmount(mountpoint)
}
return filesys.VFS, errChan, unmount, nil
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error {
if mountlib.DebugFUSE {
fuse.Debug = func(msg interface{}) {
fs.Debugf("fuse", "%v", msg)
}
}
// Mount it
FS, errChan, unmount, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
sigInt := make(chan os.Signal, 1)
signal.Notify(sigInt, syscall.SIGINT, syscall.SIGTERM)
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.IgnoreSignals()
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// Program abort: umount
case <-sigInt:
err = unmount()
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := FS.Root()
if err != nil {
fs.Errorf(f, "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
return errChan, unmount, nil
}

View File

@@ -33,13 +33,15 @@ import (
// FOPEN_DIRECT_IO flag from their `Open` method. See directio_test.go
// for an example.
type FileHandle struct {
h vfs.Handle
h vfs.Handle
fsys *FS
}
// Create a new FileHandle
func newFileHandle(h vfs.Handle) *FileHandle {
func newFileHandle(h vfs.Handle, fsys *FS) *FileHandle {
return &FileHandle{
h: h,
h: h,
fsys: fsys,
}
}
@@ -115,7 +117,7 @@ var _ fusefs.FileFsyncer = (*FileHandle)(nil)
// is assumed, and the 'blocks' field is set accordingly.
func (f *FileHandle) Getattr(ctx context.Context, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "")("attr=%v, errno=%v", &out, &errno)
setAttrOut(f.h.Node(), out)
f.fsys.setAttrOut(f.h.Node(), out)
return 0
}
@@ -125,7 +127,7 @@ var _ fusefs.FileGetattrer = (*FileHandle)(nil)
func (f *FileHandle) Setattr(ctx context.Context, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(f, "in=%v", in)("attr=%v, errno=%v", &out, &errno)
var err error
setAttrOut(f.h.Node(), out)
f.fsys.setAttrOut(f.h.Node(), out)
size, ok := in.GetSize()
if ok {
err = f.h.Truncate(int64(size))

View File

@@ -14,20 +14,21 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
)
// FS represents the top level filing system
type FS struct {
VFS *vfs.VFS
f fs.Fs
opt *mountlib.Options
}
// NewFS creates a pathfs.FileSystem from the fs.Fs passed in
func NewFS(f fs.Fs) *FS {
func NewFS(VFS *vfs.VFS, opt *mountlib.Options) *FS {
fsys := &FS{
VFS: vfs.New(f, &vfsflags.Opt),
f: f,
VFS: VFS,
f: VFS.Fs(),
opt: opt,
}
return fsys
}
@@ -85,16 +86,16 @@ func setAttr(node vfs.Node, attr *fuse.Attr) {
}
// fill in AttrOut from node
func setAttrOut(node vfs.Node, out *fuse.AttrOut) {
func (f *FS) setAttrOut(node vfs.Node, out *fuse.AttrOut) {
setAttr(node, &out.Attr)
out.SetTimeout(mountlib.AttrTimeout)
out.SetTimeout(f.opt.AttrTimeout)
}
// fill in EntryOut from node
func setEntryOut(node vfs.Node, out *fuse.EntryOut) {
func (f *FS) setEntryOut(node vfs.Node, out *fuse.EntryOut) {
setAttr(node, &out.Attr)
out.SetEntryTimeout(mountlib.AttrTimeout)
out.SetAttrTimeout(mountlib.AttrTimeout)
out.SetEntryTimeout(f.opt.AttrTimeout)
out.SetAttrTimeout(f.opt.AttrTimeout)
}
// Translate errors from mountlib into Syscall error numbers

View File

@@ -7,27 +7,18 @@ package mount2
import (
"fmt"
"log"
"os"
"os/signal"
"runtime"
"syscall"
fusefs "github.com/hanwen/go-fuse/v2/fs"
"github.com/hanwen/go-fuse/v2/fuse"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
)
func init() {
mountlib.NewMountCommand("mount2", true, Mount)
// Add mount to rc
mountlib.NewMountCommand("mount2", true, mount)
mountlib.AddRc("mount2", mount)
}
// mountOptions configures the options from the command line flags
@@ -36,12 +27,12 @@ func init() {
func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
device := f.Name() + ":" + f.Root()
mountOpts = &fuse.MountOptions{
AllowOther: mountlib.AllowOther,
AllowOther: fsys.opt.AllowOther,
FsName: device,
Name: "rclone",
DisableXAttrs: true,
Debug: mountlib.DebugFUSE,
MaxReadAhead: int(mountlib.MaxReadAhead),
Debug: fsys.opt.DebugFUSE,
MaxReadAhead: int(fsys.opt.MaxReadAhead),
// RememberInodes: true,
// SingleThreaded: true,
@@ -105,22 +96,22 @@ func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
}
var opts []string
// FIXME doesn't work opts = append(opts, fmt.Sprintf("max_readahead=%d", maxReadAhead))
if mountlib.AllowNonEmpty {
if fsys.opt.AllowNonEmpty {
opts = append(opts, "nonempty")
}
if mountlib.AllowOther {
if fsys.opt.AllowOther {
opts = append(opts, "allow_other")
}
if mountlib.AllowRoot {
if fsys.opt.AllowRoot {
opts = append(opts, "allow_root")
}
if mountlib.DefaultPermissions {
if fsys.opt.DefaultPermissions {
opts = append(opts, "default_permissions")
}
if fsys.VFS.Opt.ReadOnly {
opts = append(opts, "ro")
}
if mountlib.WritebackCache {
if fsys.opt.WritebackCache {
log.Printf("FIXME --write-back-cache not supported")
// FIXME opts = append(opts,fuse.WritebackCache())
}
@@ -156,10 +147,11 @@ func mountOptions(fsys *FS, f fs.Fs) (mountOpts *fuse.MountOptions) {
//
// returns an error, and an error channel for the serve process to
// report an error when fusermount is called.
func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error) {
func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error, func() error, error) {
f := VFS.Fs()
fs.Debugf(f, "Mounting on %q", mountpoint)
fsys := NewFS(f)
fsys := NewFS(VFS, opt)
// nodeFsOpts := &fusefs.PathNodeFsOptions{
// ClientInodes: false,
// Debug: mountlib.DebugFUSE,
@@ -179,28 +171,28 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
// FIXME fill out
opts := fusefs.Options{
MountOptions: *mountOpts,
EntryTimeout: &mountlib.AttrTimeout,
AttrTimeout: &mountlib.AttrTimeout,
EntryTimeout: &opt.AttrTimeout,
AttrTimeout: &opt.AttrTimeout,
// UID
// GID
}
root, err := fsys.Root()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
rawFS := fusefs.NewNodeFS(root, &opts)
server, err := fuse.NewServer(rawFS, mountpoint, &opts.MountOptions)
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
//mountOpts := &fuse.MountOptions{}
//server, err := fusefs.Mount(mountpoint, fsys, &opts)
// server, err := fusefs.Mount(mountpoint, root, &opts)
// if err != nil {
// return nil, nil, nil, err
// return nil, nil, err
// }
umount := func() error {
@@ -222,60 +214,9 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
fs.Debugf(f, "Waiting for the mount to start...")
err = server.WaitMount()
if err != nil {
return nil, nil, nil, err
return nil, nil, err
}
fs.Debugf(f, "Mount started")
return fsys.VFS, errs, umount, nil
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error {
// Mount it
vfs, errChan, unmount, err := mount(f, mountpoint)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
sigInt := make(chan os.Signal, 1)
signal.Notify(sigInt, syscall.SIGINT, syscall.SIGTERM)
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// Program abort: umount
case <-sigInt:
err = unmount()
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := vfs.Root()
if err != nil {
fs.Errorf(f, "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
return errs, umount, nil
}

View File

@@ -111,7 +111,7 @@ var _ = (fusefs.NodeStatfser)((*Node)(nil))
// with the Options.NullPermissions setting. If blksize is unset, 4096
// is assumed, and the 'blocks' field is set accordingly.
func (n *Node) Getattr(ctx context.Context, f fusefs.FileHandle, out *fuse.AttrOut) syscall.Errno {
setAttrOut(n.node, out)
n.fsys.setAttrOut(n.node, out)
return 0
}
@@ -121,7 +121,7 @@ var _ = (fusefs.NodeGetattrer)((*Node)(nil))
func (n *Node) Setattr(ctx context.Context, f fusefs.FileHandle, in *fuse.SetAttrIn, out *fuse.AttrOut) (errno syscall.Errno) {
defer log.Trace(n, "in=%v", in)("out=%#v, errno=%v", &out, &errno)
var err error
setAttrOut(n.node, out)
n.fsys.setAttrOut(n.node, out)
size, ok := in.GetSize()
if ok {
err = n.node.Truncate(int64(size))
@@ -158,7 +158,7 @@ func (n *Node) Open(ctx context.Context, flags uint32) (fh fusefs.FileHandle, fu
if entry := n.node.DirEntry(); entry != nil && entry.Size() < 0 {
fuseFlags |= fuse.FOPEN_DIRECT_IO
}
return newFileHandle(handle), fuseFlags, 0
return newFileHandle(handle, n.fsys), fuseFlags, 0
}
var _ = (fusefs.NodeOpener)((*Node)(nil))
@@ -197,7 +197,7 @@ func (n *Node) Lookup(ctx context.Context, name string, out *fuse.EntryOut) (ino
// FIXME
// out.SetEntryTimeout(dt time.Duration)
// out.SetAttrTimeout(dt time.Duration)
setEntryOut(vfsNode, out)
n.fsys.setEntryOut(vfsNode, out)
return n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode}), 0
}
@@ -306,7 +306,7 @@ func (n *Node) Mkdir(ctx context.Context, name string, mode uint32, out *fuse.En
return nil, translateError(err)
}
newNode := newNode(n.fsys, newDir)
setEntryOut(newNode.node, out)
n.fsys.setEntryOut(newNode.node, out)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})
return newInode, 0
}
@@ -333,7 +333,7 @@ func (n *Node) Create(ctx context.Context, name string, flags uint32, mode uint3
if err != nil {
return nil, nil, 0, translateError(err)
}
fh = newFileHandle(handle)
fh = newFileHandle(handle, n.fsys)
// FIXME
// fh = &fusefs.WithFlags{
// File: fh,
@@ -346,7 +346,7 @@ func (n *Node) Create(ctx context.Context, name string, flags uint32, mode uint3
if errno != 0 {
return nil, nil, 0, errno
}
setEntryOut(vfsNode, out)
n.fsys.setEntryOut(vfsNode, out)
newNode := newNode(n.fsys, vfsNode)
fs.Debugf(nil, "attr=%#v", out.Attr)
newInode := n.NewInode(ctx, newNode, fusefs.StableAttr{Mode: out.Attr.Mode})

View File

@@ -4,46 +4,61 @@ import (
"io"
"log"
"os"
"os/signal"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
"github.com/okzk/sdnotify"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
// Options set by command line flags
var (
DebugFUSE = false
AllowNonEmpty = false
AllowRoot = false
AllowOther = false
DefaultPermissions = false
WritebackCache = false
Daemon = false
MaxReadAhead fs.SizeSuffix = 128 * 1024
// Options for creating the mount
type Options struct {
DebugFUSE bool
AllowNonEmpty bool
AllowRoot bool
AllowOther bool
DefaultPermissions bool
WritebackCache bool
Daemon bool
MaxReadAhead fs.SizeSuffix
ExtraOptions []string
ExtraFlags []string
AttrTimeout = 1 * time.Second // how long the kernel caches attribute for
AttrTimeout time.Duration // how long the kernel caches attribute for
VolumeName string
NoAppleDouble = true // use noappledouble by default
NoAppleXattr = false // do not use noapplexattr by default
NoAppleDouble bool
NoAppleXattr bool
DaemonTimeout time.Duration // OSXFUSE only
AsyncRead = true // do async reads by default
)
AsyncRead bool
}
// DefaultOpt is the default values for creating the mount
var DefaultOpt = Options{
MaxReadAhead: 128 * 1024,
AttrTimeout: 1 * time.Second, // how long the kernel caches attribute for
NoAppleDouble: true, // use noappledouble by default
NoAppleXattr: false, // do not use noapplexattr by default
AsyncRead: true, // do async reads by default
}
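// Illustrative sketch (not part of the diff): callers now take a copy
// of DefaultOpt and override fields, rather than mutating the old
// package-level variables, e.g.
//
//	opt := DefaultOpt
//	opt.AttrTimeout = 10 * time.Second
//	opt.VolumeName = "mydrive"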
type (
// UnmountFn is called to unmount the file system
UnmountFn func() error
// MountFn is called to mount the file system
MountFn func(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, error)
MountFn func(VFS *vfs.VFS, mountpoint string, opt *Options) (<-chan error, func() error, error)
)
// Global constants
@@ -54,7 +69,35 @@ const (
func init() {
// DaemonTimeout defaults to non zero for macOS
if runtime.GOOS == "darwin" {
DaemonTimeout = 15 * time.Minute
DefaultOpt.DaemonTimeout = 15 * time.Minute
}
}
// Options set by command line flags
var (
Opt = DefaultOpt
)
// AddFlags adds the non filing system specific flags to the command
func AddFlags(flagSet *pflag.FlagSet) {
rc.AddOption("mount", &Opt)
flags.BoolVarP(flagSet, &Opt.DebugFUSE, "debug-fuse", "", Opt.DebugFUSE, "Debug the FUSE internals - needs -v.")
flags.BoolVarP(flagSet, &Opt.AllowNonEmpty, "allow-non-empty", "", Opt.AllowNonEmpty, "Allow mounting over a non-empty directory (not Windows).")
flags.BoolVarP(flagSet, &Opt.AllowRoot, "allow-root", "", Opt.AllowRoot, "Allow access to root user.")
flags.BoolVarP(flagSet, &Opt.AllowOther, "allow-other", "", Opt.AllowOther, "Allow access to other users.")
flags.BoolVarP(flagSet, &Opt.DefaultPermissions, "default-permissions", "", Opt.DefaultPermissions, "Makes kernel enforce access control based on the file mode.")
flags.BoolVarP(flagSet, &Opt.WritebackCache, "write-back-cache", "", Opt.WritebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.")
flags.FVarP(flagSet, &Opt.MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.")
flags.DurationVarP(flagSet, &Opt.AttrTimeout, "attr-timeout", "", Opt.AttrTimeout, "Time for which file/directory attributes are cached.")
flags.StringArrayVarP(flagSet, &Opt.ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp. Repeat if required.")
flags.StringArrayVarP(flagSet, &Opt.ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
flags.BoolVarP(flagSet, &Opt.Daemon, "daemon", "", Opt.Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(flagSet, &Opt.VolumeName, "volname", "", Opt.VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(flagSet, &Opt.DaemonTimeout, "daemon-timeout", "", Opt.DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
flags.BoolVarP(flagSet, &Opt.AsyncRead, "async-read", "", Opt.AsyncRead, "Use asynchronous reads.")
if runtime.GOOS == "darwin" {
flags.BoolVarP(flagSet, &Opt.NoAppleDouble, "noappledouble", "", Opt.NoAppleDouble, "Sets the OSXFUSE option noappledouble.")
flags.BoolVarP(flagSet, &Opt.NoAppleXattr, "noapplexattr", "", Opt.NoAppleXattr, "Sets the OSXFUSE option noapplexattr.")
}
}
@@ -106,7 +149,7 @@ func checkMountpointOverlap(root, mountpoint string) error {
}
// NewMountCommand makes a mount command with the given name and Mount function
func NewMountCommand(commandName string, hidden bool, Mount func(f fs.Fs, mountpoint string) error) *cobra.Command {
func NewMountCommand(commandName string, hidden bool, mount MountFn) *cobra.Command {
var commandDefinition = &cobra.Command{
Use: commandName + " remote:path /path/to/mountpoint",
Hidden: hidden,
@@ -296,8 +339,9 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
` + vfs.Help,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)
opt := Opt // make a copy of the options
if Daemon {
if opt.Daemon {
config.PassConfigKeyForDaemonization = true
}
@@ -317,36 +361,37 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
// Skip checkMountEmpty if --allow-non-empty flag is used or if
// the Operating System is Windows
if !AllowNonEmpty && runtime.GOOS != "windows" {
if !opt.AllowNonEmpty && runtime.GOOS != "windows" {
err := checkMountEmpty(mountpoint)
if err != nil {
log.Fatalf("Fatal error: %v", err)
}
} else if AllowNonEmpty && runtime.GOOS == "windows" {
} else if opt.AllowNonEmpty && runtime.GOOS == "windows" {
fs.Logf(nil, "--allow-non-empty flag does nothing on Windows")
}
// Work out the volume name, removing special
// characters from it if necessary
if VolumeName == "" {
VolumeName = fdst.Name() + ":" + fdst.Root()
if opt.VolumeName == "" {
opt.VolumeName = fdst.Name() + ":" + fdst.Root()
}
VolumeName = strings.Replace(VolumeName, ":", " ", -1)
VolumeName = strings.Replace(VolumeName, "/", " ", -1)
VolumeName = strings.TrimSpace(VolumeName)
if runtime.GOOS == "windows" && len(VolumeName) > 32 {
VolumeName = VolumeName[:32]
opt.VolumeName = strings.Replace(opt.VolumeName, ":", " ", -1)
opt.VolumeName = strings.Replace(opt.VolumeName, "/", " ", -1)
opt.VolumeName = strings.TrimSpace(opt.VolumeName)
if runtime.GOOS == "windows" && len(opt.VolumeName) > 32 {
opt.VolumeName = opt.VolumeName[:32]
}
// Start background task if --background is specified
if Daemon {
if opt.Daemon {
daemonized := startBackgroundMode()
if daemonized {
return
}
}
err := Mount(fdst, mountpoint)
VFS := vfs.New(fdst, &vfsflags.Opt)
err := Mount(VFS, mountpoint, mount, &opt)
if err != nil {
log.Fatalf("Fatal error: %v", err)
}
@@ -358,28 +403,7 @@ be copied to the vfs cache before opening with --vfs-cache-mode full.
// Add flags
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &DebugFUSE, "debug-fuse", "", DebugFUSE, "Debug the FUSE internals - needs -v.")
// mount options
flags.BoolVarP(cmdFlags, &AllowNonEmpty, "allow-non-empty", "", AllowNonEmpty, "Allow mounting over a non-empty directory (not Windows).")
flags.BoolVarP(cmdFlags, &AllowRoot, "allow-root", "", AllowRoot, "Allow access to root user.")
flags.BoolVarP(cmdFlags, &AllowOther, "allow-other", "", AllowOther, "Allow access to other users.")
flags.BoolVarP(cmdFlags, &DefaultPermissions, "default-permissions", "", DefaultPermissions, "Makes kernel enforce access control based on the file mode.")
flags.BoolVarP(cmdFlags, &WritebackCache, "write-back-cache", "", WritebackCache, "Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.")
flags.FVarP(cmdFlags, &MaxReadAhead, "max-read-ahead", "", "The number of bytes that can be prefetched for sequential reads.")
flags.DurationVarP(cmdFlags, &AttrTimeout, "attr-timeout", "", AttrTimeout, "Time for which file/directory attributes are cached.")
flags.StringArrayVarP(cmdFlags, &ExtraOptions, "option", "o", []string{}, "Option for libfuse/WinFsp. Repeat if required.")
flags.StringArrayVarP(cmdFlags, &ExtraFlags, "fuse-flag", "", []string{}, "Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.")
flags.BoolVarP(cmdFlags, &Daemon, "daemon", "", Daemon, "Run mount as a daemon (background mode).")
flags.StringVarP(cmdFlags, &VolumeName, "volname", "", VolumeName, "Set the volume name (not supported by all OSes).")
flags.DurationVarP(cmdFlags, &DaemonTimeout, "daemon-timeout", "", DaemonTimeout, "Time limit for rclone to respond to kernel (not supported by all OSes).")
flags.BoolVarP(cmdFlags, &AsyncRead, "async-read", "", AsyncRead, "Use asynchronous reads.")
if runtime.GOOS == "darwin" {
flags.BoolVarP(cmdFlags, &NoAppleDouble, "noappledouble", "", NoAppleDouble, "Sets the OSXFUSE option noappledouble.")
flags.BoolVarP(cmdFlags, &NoAppleXattr, "noapplexattr", "", NoAppleXattr, "Sets the OSXFUSE option noapplexattr.")
}
// Add in the generic flags
AddFlags(cmdFlags)
vfsflags.AddFlags(cmdFlags)
return commandDefinition
@@ -407,3 +431,60 @@ func ClipBlocks(b *uint64) {
*b = max
}
}
// Mount mounts the remote at mountpoint.
//
// If noModTime is set then it
func Mount(VFS *vfs.VFS, mountpoint string, mount MountFn, opt *Options) error {
if opt == nil {
opt = &DefaultOpt
}
// Mount it
errChan, unmount, err := mount(VFS, mountpoint, opt)
if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs")
}
// Unmount on exit
fnHandle := atexit.Register(func() {
_ = unmount()
_ = sdnotify.Stopping()
})
defer atexit.Unregister(fnHandle)
// Notify systemd
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd")
}
// Reload VFS cache on SIGHUP
sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP)
waitloop:
for {
select {
// umount triggered outside the app
case err = <-errChan:
break waitloop
// user sent SIGHUP to clear the cache
case <-sigHup:
root, err := VFS.Root()
if err != nil {
fs.Errorf(VFS.Fs(), "Error reading root: %v", err)
} else {
root.ForgetAll()
}
}
}
_ = unmount()
_ = sdnotify.Stopping()
if err != nil {
return errors.Wrap(err, "failed to umount FUSE fs")
}
return nil
}
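A sketch of how a mount implementation now plugs into the shared loop above (myMount is a placeholder MountFn; f is a configured fs.Fs):

    VFS := vfs.New(f, &vfsflags.Opt)
    if err := mountlib.Mount(VFS, "/mnt/point", myMount, &mountlib.DefaultOpt); err != nil {
        log.Fatalf("Fatal error: %v", err)
    }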

View File

@@ -10,6 +10,9 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/rclone/rclone/vfs/vfsflags"
)
// MountInfo defines the configuration for a mount
@@ -18,6 +21,8 @@ type MountInfo struct {
MountPoint string `json:"MountPoint"`
MountedOn time.Time `json:"MountedOn"`
Fs string `json:"Fs"`
MountOpt *Options
VFSOpt *vfscommon.Options
}
var (
@@ -53,11 +58,19 @@ This takes the following parameters
- fs - a remote path to be mounted (required)
- mountPoint: valid path on the local machine (required)
- mountType: One of the values (mount, cmount, mount2) specifies the mount implementation to use
- mountOpt: a JSON object with Mount options in.
- vfsOpt: a JSON object with VFS options in.
Eg
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running the command below; the mountOpt can be
seen in the "mount" section.
rclone rc options/get
`,
})
}
@@ -69,6 +82,18 @@ func mountRc(_ context.Context, in rc.Params) (out rc.Params, err error) {
return nil, err
}
vfsOpt := vfsflags.Opt
err = in.GetStructMissingOK("vfsOpt", &vfsOpt)
if err != nil {
return nil, err
}
mountOpt := Opt
err = in.GetStructMissingOK("mountOpt", &mountOpt)
if err != nil {
return nil, err
}
mountType, err := in.GetString("mountType")
mountMu.Lock()
@@ -91,7 +116,8 @@ func mountRc(_ context.Context, in rc.Params) (out rc.Params, err error) {
}
if mountFns[mountType] != nil {
_, _, unmountFn, err := mountFns[mountType](fdst, mountPoint)
VFS := vfs.New(fdst, &vfsOpt)
_, unmountFn, err := mountFns[mountType](VFS, mountPoint, &mountOpt)
if err != nil {
log.Printf("mount FAILED: %v", err)
@@ -103,6 +129,8 @@ func mountRc(_ context.Context, in rc.Params) (out rc.Params, err error) {
MountedOn: time.Now(),
Fs: fdst.Name(),
MountPoint: mountPoint,
VFSOpt: &vfsOpt,
MountOpt: &mountOpt,
}
fs.Debugf(nil, "Mount for %s created at %s using %s", fdst.String(), mountPoint, mountType)
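For reference, a sketch of the parameters the handler above consumes, mirroring the test that follows (values are illustrative):

    in := rc.Params{
        "fs":         "mydrive:",
        "mountPoint": "/mnt/drive",
        "vfsOpt":     rc.Params{"CacheMode": 2},
        "mountOpt":   rc.Params{"AllowOther": true},
    }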

View File

@@ -68,6 +68,9 @@ func TestRc(t *testing.T) {
in := rc.Params{
"fs": localDir,
"mountPoint": mountPoint,
"vfsOpt": rc.Params{
"FilePerms": 0400,
},
}
// check file.txt is not there
@@ -86,6 +89,9 @@ func TestRc(t *testing.T) {
fi, err := os.Stat(filePath)
require.NoError(t, err)
assert.Equal(t, int64(5), fi.Size())
if runtime.GOOS == "linux" {
assert.Equal(t, os.FileMode(0400), fi.Mode())
}
// FIXME the OS sometimes appears to be using the mount
// immediately after it appears so wait a moment

View File

@@ -1,6 +1,6 @@
// Package ncdu implements a text based user interface for exploring a remote
//+build !plan9,!solaris
//+build !plan9,!solaris,!js
package ncdu

View File

@@ -1,6 +1,6 @@
// Build for ncdu for unsupported platforms to stop go complaining
// about "no buildable Go source files "
// +build plan9 solaris
// +build plan9 solaris js
package ncdu

View File

@@ -3,6 +3,9 @@ package obscure
import (
"fmt"
"io/ioutil"
"os"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/spf13/cobra"
@@ -26,13 +29,29 @@ Many equally important things (like access tokens) are not obscured in
the config file. However it is very hard to shoulder surf a 64
character hex token.
This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. Example:
echo "secretpassword" | rclone obscure -
If there is no data on STDIN to read, rclone obscure will default to
obfuscating the hyphen itself.
If you want to encrypt the config file then please use config file
encryption - see [rclone config](/commands/rclone_config/) for more
info.`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 1, command, args)
var password string
fi, _ := os.Stdin.Stat()
if args[0] == "-" && (fi.Mode()&os.ModeCharDevice) == 0 {
bytes, _ := ioutil.ReadAll(os.Stdin)
password = string(bytes)
} else {
password = args[0]
}
cmd.Run(false, false, command, func() error {
obscured := obscure.MustObscure(args[0])
obscured := obscure.MustObscure(password)
fmt.Println(obscured)
return nil
})
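The char-device test above is the usual way to detect piped input in Go; in isolation (a sketch):

    fi, _ := os.Stdin.Stat()
    piped := fi.Mode()&os.ModeCharDevice == 0 // true when stdin is a pipe or file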

View File

@@ -79,7 +79,7 @@ func Object(w http.ResponseWriter, r *http.Request, o fs.Object) {
defer func() {
tr.Done(err)
}()
in := tr.Account(file) // account the transfer (no buffering)
in := tr.Account(r.Context(), file) // account the transfer (no buffering)
w.WriteHeader(code)

View File

@@ -21,8 +21,8 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/terminal"
"github.com/spf13/cobra"
"golang.org/x/crypto/ssh/terminal"
"golang.org/x/net/http2"
)
@@ -126,7 +126,7 @@ with a path of ` + "`/<username>/`" + `.
cmd.CheckArgs(1, 1, command, args)
f := cmd.NewFsSrc(args)
cmd.Run(false, true, command, func() error {
s := newServer(f, &httpflags.Opt)
s := NewServer(f, &httpflags.Opt)
if stdio {
if terminal.IsTerminal(int(os.Stdout.Fd())) {
return errors.New("Refusing to run HTTP2 server directly on a terminal, please let restic start rclone")
@@ -139,7 +139,7 @@ with a path of ` + "`/<username>/`" + `.
httpSrv := &http2.Server{}
opts := &http2.ServeConnOpts{
Handler: http.HandlerFunc(s.handler),
Handler: s,
}
httpSrv.ServeConn(conn, opts)
return nil
@@ -158,26 +158,27 @@ const (
resticAPIV2 = "application/vnd.x.restic.rest.v2"
)
// server contains everything to run the server
type server struct {
// Server contains everything to run the Server
type Server struct {
*httplib.Server
f fs.Fs
}
func newServer(f fs.Fs, opt *httplib.Options) *server {
// NewServer returns an HTTP server that speaks the rest protocol
func NewServer(f fs.Fs, opt *httplib.Options) *Server {
mux := http.NewServeMux()
s := &server{
s := &Server{
Server: httplib.NewServer(mux, opt),
f: f,
}
mux.HandleFunc(s.Opt.BaseURL+"/", s.handler)
mux.HandleFunc(s.Opt.BaseURL+"/", s.ServeHTTP)
return s
}
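// Illustrative (not part of the diff): exporting ServeHTTP below means
// *Server satisfies http.Handler, so it can be wired straight into the
// standard library, e.g.
//
//	srv := NewServer(f, &httpflags.Opt)
//	_ = http.ListenAndServe(":8080", srv)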
// Serve runs the http server in the background.
//
// Use s.Close() and s.Wait() to shutdown server
func (s *server) Serve() error {
func (s *Server) Serve() error {
err := s.Server.Serve()
if err != nil {
return err
@@ -205,8 +206,8 @@ func makeRemote(path string) string {
return prefix + fileName[:2] + "/" + fileName
}
// handler reads incoming requests and dispatches them
func (s *server) handler(w http.ResponseWriter, r *http.Request) {
// ServeHTTP reads incoming requests and dispatches them
func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Accept-Ranges", "bytes")
w.Header().Set("Server", "rclone/"+fs.Version)
@@ -248,7 +249,7 @@ func (s *server) handler(w http.ResponseWriter, r *http.Request) {
}
// get the remote
func (s *server) serveObject(w http.ResponseWriter, r *http.Request, remote string) {
func (s *Server) serveObject(w http.ResponseWriter, r *http.Request, remote string) {
o, err := s.f.NewObject(r.Context(), remote)
if err != nil {
fs.Debugf(remote, "%s request error: %v", r.Method, err)
@@ -259,7 +260,7 @@ func (s *server) serveObject(w http.ResponseWriter, r *http.Request, remote stri
}
// postObject posts an object to the repository
func (s *server) postObject(w http.ResponseWriter, r *http.Request, remote string) {
func (s *Server) postObject(w http.ResponseWriter, r *http.Request, remote string) {
if appendOnly {
// make sure the file does not exist yet
_, err := s.f.NewObject(r.Context(), remote)
@@ -282,7 +283,7 @@ func (s *server) postObject(w http.ResponseWriter, r *http.Request, remote strin
}
// delete the remote
func (s *server) deleteObject(w http.ResponseWriter, r *http.Request, remote string) {
func (s *Server) deleteObject(w http.ResponseWriter, r *http.Request, remote string) {
if appendOnly {
parts := strings.Split(r.URL.Path, "/")
@@ -331,7 +332,7 @@ func (ls *listItems) add(entry fs.DirEntry) {
}
// listObjects lists all Objects of a given type in an arbitrary order.
func (s *server) listObjects(w http.ResponseWriter, r *http.Request, remote string) {
func (s *Server) listObjects(w http.ResponseWriter, r *http.Request, remote string) {
fs.Debugf(remote, "list request")
if r.Header.Get("Accept") != resticAPIV2 {
@@ -372,7 +373,7 @@ func (s *server) listObjects(w http.ResponseWriter, r *http.Request, remote stri
// createRepo creates repository directories.
//
// We don't bother creating the data dirs as rclone will create them on the fly
func (s *server) createRepo(w http.ResponseWriter, r *http.Request, remote string) {
func (s *Server) createRepo(w http.ResponseWriter, r *http.Request, remote string) {
fs.Infof(remote, "Creating repository")
if r.URL.Query().Get("create") != "true" {

View File

@@ -126,10 +126,10 @@ func TestResticHandler(t *testing.T) {
// make a new file system in the temp dir
f := cmd.NewFsSrc([]string{tempdir})
srv := newServer(f, &httpflags.Opt)
srv := NewServer(f, &httpflags.Opt)
// create the repo
checkRequest(t, srv.handler,
checkRequest(t, srv.ServeHTTP,
newRequest(t, "POST", "/?create=true", nil),
[]wantFunc{wantCode(http.StatusOK)})
@@ -137,7 +137,7 @@ func TestResticHandler(t *testing.T) {
t.Run("", func(t *testing.T) {
for i, seq := range test.seq {
t.Logf("request %v: %v %v", i, seq.req.Method, seq.req.URL.Path)
checkRequest(t, srv.handler, seq.req, seq.want)
checkRequest(t, srv.ServeHTTP, seq.req, seq.want)
}
})
}

View File

@@ -57,7 +57,7 @@ func TestResticPrivateRepositories(t *testing.T) {
// make a new file system in the temp dir
f := cmd.NewFsSrc([]string{tempdir})
srv := newServer(f, &httpflags.Opt)
srv := NewServer(f, &httpflags.Opt)
// Requesting /test/ should allow access
reqs := []*http.Request{
@@ -66,7 +66,7 @@ func TestResticPrivateRepositories(t *testing.T) {
newAuthenticatedRequest(t, "GET", "/test/config", nil),
}
for _, req := range reqs {
checkRequest(t, srv.handler, req, []wantFunc{wantCode(http.StatusOK)})
checkRequest(t, srv.ServeHTTP, req, []wantFunc{wantCode(http.StatusOK)})
}
// Requesting everything else should raise forbidden errors
@@ -76,7 +76,7 @@ func TestResticPrivateRepositories(t *testing.T) {
newAuthenticatedRequest(t, "GET", "/other_user/config", nil),
}
for _, req := range reqs {
checkRequest(t, srv.handler, req, []wantFunc{wantCode(http.StatusForbidden)})
checkRequest(t, srv.ServeHTTP, req, []wantFunc{wantCode(http.StatusForbidden)})
}
}

View File

@@ -41,7 +41,7 @@ func TestRestic(t *testing.T) {
assert.NoError(t, err)
// Start the server
w := newServer(fremote, &opt)
w := NewServer(fremote, &opt)
assert.NoError(t, w.Serve())
defer func() {
w.Close()

View File

@@ -7,11 +7,11 @@ import (
"strings"
"time"
"github.com/coreos/go-semver/semver"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/version"
"github.com/spf13/cobra"
)
@@ -66,8 +66,16 @@ Or
},
}
// strip a leading v off the string
func stripV(s string) string {
if len(s) > 0 && s[0] == 'v' {
return s[1:]
}
return s
}
// getVersion gets the version by checking the download repository passed in
func getVersion(url string) (v version.Version, vs string, date time.Time, err error) {
func getVersion(url string) (v *semver.Version, vs string, date time.Time, err error) {
resp, err := http.Get(url)
if err != nil {
return v, vs, date, err
@@ -89,16 +97,16 @@ func getVersion(url string) (v version.Version, vs string, date time.Time, err e
if err != nil {
return v, vs, date, err
}
v, err = version.New(vs)
v, err = semver.NewVersion(stripV(vs))
return v, vs, date, err
}
// check the current version against available versions
func checkVersion() {
// Get Current version
vCurrent, err := version.New(fs.Version)
vCurrent, err := semver.NewVersion(stripV(fs.Version))
if err != nil {
fs.Errorf(nil, "Failed to get parse version: %v", err)
fs.Errorf(nil, "Failed to parse version: %v", err)
}
const timeFormat = "2006-01-02"
@@ -108,12 +116,12 @@ func checkVersion() {
fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err)
return
}
fmt.Printf("%-8s%-13v %20s\n",
fmt.Printf("%-8s%-40v %20s\n",
what+":",
v,
"(released "+t.Format(timeFormat)+")",
)
if v.Cmp(vCurrent) > 0 {
if v.Compare(*vCurrent) > 0 {
fmt.Printf(" upgrade: %s\n", url+vs)
}
}
@@ -126,7 +134,7 @@ func checkVersion() {
"beta",
"https://beta.rclone.org/",
)
if vCurrent.IsGit() {
if strings.HasSuffix(fs.Version, "-DEV") {
fmt.Println("Your version is compiled from git so comparisons may be wrong.")
}
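A sketch of the new comparison path using go-semver (version strings are illustrative):

    v1, _ := semver.NewVersion(stripV("v1.52.3"))
    v2, _ := semver.NewVersion(stripV("v1.53.0-beta.4677.b657a2204"))
    fmt.Println(v2.Compare(*v1) > 0) // true: the beta sorts after v1.52.3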
}

View File

@@ -395,3 +395,12 @@ put them back in again.` >}}
* Kevin <keyam@microsoft.com>
* Morten Linderud <morten@linderud.pw>
* Dmitry Ustalov <dmitry.ustalov@gmail.com>
* Jack <196648+jdeng@users.noreply.github.com>
* kcris <cristian.tarsoaga@gmail.com>
* tyhuber1 <68970760+tyhuber1@users.noreply.github.com>
* David Ibarra <david.ibarra@realty.com>
* Tim Gallant <tim@lilt.com>
* Kaloyan Raev <kaloyan@storj.io>
* Jay McEntire <jay.mcentire@gmail.com>
* Leo Luan <leoluan@us.ibm.com>
* aus <549081+aus@users.noreply.github.com>

View File

@@ -41,9 +41,18 @@ client_secret>
Box App config.json location
Leave blank normally.
Enter a string value. Press Enter for the default ("").
config_json>
'enterprise' or 'user' depending on the type of token being requested.
box_config_file>
Box App Primary Access Token
Leave blank normally.
Enter a string value. Press Enter for the default ("").
access_token>
Enter a string value. Press Enter for the default ("user").
Choose a number from below, or type in your own value
1 / Rclone should act on behalf of a user
\ "user"
2 / Rclone should act on behalf of a service account
\ "enterprise"
box_sub_type>
Remote config
Use auto config?

View File

@@ -5,6 +5,36 @@ description: "Rclone Changelog"
# Changelog
## v1.52.3 - 2020-08-07
[See commits](https://github.com/rclone/rclone/compare/v1.52.2...v1.52.3)
* Bug Fixes
* docs
* Disable smart typography (eg en-dash) in MANUAL.* and man page (Nick Craig-Wood)
* Update install.md to reflect minimum Go version (Evan Harris)
* Update install from source instructions (Nick Craig-Wood)
* make_manual: Support SOURCE_DATE_EPOCH (Morten Linderud)
* log: Fix --use-json-log going to stderr not --log-file on Windows (Nick Craig-Wood)
* serve dlna: Fix file list on Samsung Series 6+ TVs (Matteo Pietro Dazzi)
* sync: Fix deadlock with --track-renames-strategy modtime (Nick Craig-Wood)
* Cache
* Fix moveto/copyto remote:file remote:file2 (Nick Craig-Wood)
* Drive
* Stop using root_folder_id as a cache (Nick Craig-Wood)
* Make dangling shortcuts appear in listings (Nick Craig-Wood)
* Drop "Disabling ListR" messages down to debug (Nick Craig-Wood)
* Workaround and policy for Google Drive API (Dmitry Ustalov)
* FTP
* Add note to docs about home vs root directory selection (Nick Craig-Wood)
* Onedrive
* Fix reverting to Copy when Move would have worked (Nick Craig-Wood)
* Avoid comma rendered in URL in onedrive.md (Kevin)
* Pcloud
* Fix oauth on European region "eapi.pcloud.com" (Nick Craig-Wood)
* S3
* Fix bucket Region auto detection when Region unset in config (Nick Craig-Wood)
## v1.52.2 - 2020-06-24
[See commits](https://github.com/rclone/rclone/compare/v1.52.1...v1.52.2)

View File

@@ -422,6 +422,20 @@ change the bwlimit dynamically:
rclone rc core/bwlimit rate=1M
### --bwlimit-file=BANDWIDTH_SPEC ###
This option controls per file bandwidth limit. For the options see the
`--bwlimit` flag.
For example, use this to ensure that no single file transfers faster than 1MByte/s
--bwlimit-file 1M
This can be used in conjunction with `--bwlimit`.
Note that if a schedule is provided the file will use the schedule in
effect at the start of the transfer.
### --buffer-size=SIZE ###
Use this sized buffer to speed up file transfers. Each `--transfer`
@@ -1324,13 +1338,25 @@ Note also that `--track-renames` is incompatible with
`--delete-before` and will select `--delete-after` instead of
`--delete-during`.
### --track-renames-strategy (hash,modtime) ###
### --track-renames-strategy (hash,modtime,leaf,size) ###
This option changes the matching criteria for `--track-renames` to match
by any combination of modtime, hash, size. Matching by size is always enabled
no matter what option is selected here. This also means
that it enables `--track-renames` support for encrypted destinations.
If nothing is specified, the default option is matching by hashes.
This option changes the matching criteria for `--track-renames`.
The matching is controlled by a comma separated selection of these tokens:
- `modtime` - the modification time of the file - not supported on all backends
- `hash` - the hash of the file contents - not supported on all backends
- `leaf` - the name of the file not including its directory name
- `size` - the size of the file (this is always enabled)
So using `--track-renames-strategy modtime,leaf` would match files
based on modification time, the leaf of the file name and the size
only.
Using `--track-renames-strategy modtime` or `leaf` can enable
`--track-renames` support for encrypted destinations.
If nothing is specified, the default option is matching by `hash`es.
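For example (flags as documented above):

    rclone sync --track-renames --track-renames-strategy modtime,leaf /path/to/src remote:dst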
### --delete-(before,during,after) ###

View File

@@ -10,7 +10,7 @@ Rclone Download {{< version >}}
| Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris |
|:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:|
| Intel/AMD - 64 Bit | {{< download windows amd64 >}} | {{< download osx amd64 >}} | {{< download linux amd64 >}} | {{< download linux amd64 deb >}} | {{< download linux amd64 rpm >}} | {{< download freebsd amd64 >}} | {{< download netbsd amd64 >}} | {{< download openbsd amd64 >}} | {{< download plan9 amd64 >}} | {{< download solaris amd64 >}} |
| Intel/AMD - 32 Bit | {{< download windows 386 >}} | {{< download osx 386 >}} | {{< download linux 386 >}} | {{< download linux 386 deb >}} | {{< download linux 386 rpm >}} | {{< download freebsd 386 >}} | {{< download netbsd 386 >}} | {{< download openbsd 386 >}} | {{< download plan9 386 >}} | - |
| Intel/AMD - 32 Bit | {{< download windows 386 >}} | - | {{< download linux 386 >}} | {{< download linux 386 deb >}} | {{< download linux 386 rpm >}} | {{< download freebsd 386 >}} | {{< download netbsd 386 >}} | {{< download openbsd 386 >}} | {{< download plan9 386 >}} | - |
| ARMv6 - 32 Bit | - | - | {{< download linux arm >}} | {{< download linux arm deb >}} | {{< download linux arm rpm >}} | {{< download freebsd arm >}} | {{< download netbsd arm >}} | - | - | - |
| ARMv7 - 32 Bit | - | - | {{< download linux arm-v7 >}} | {{< download linux arm-v7 deb >}} | {{< download linux arm-v7 rpm >}} | {{< download freebsd arm-v7 >}} | {{< download netbsd arm-v7 >}} | - | - | - |
| ARM - 64 Bit | - | - | {{< download linux arm64 >}} | {{< download linux arm64 deb >}} | {{< download linux arm64 rpm >}} | - | - | - | - | - |
@@ -38,16 +38,25 @@ Beta releases
[Beta releases](https://beta.rclone.org) are generated from each commit
to master. Note these are named like
{Version Tag}-{Commit Number}-g{Git Commit Hash}
{Version Tag}-beta.{Commit Number}.{Git Commit Hash}
You can match the `Git Commit Hash` up with the [git
log](https://github.com/rclone/rclone/commits/master). The most recent
release will have the largest `Version Tag` and `Commit Number` and
will normally be at the end of the list.
eg
v1.53.0-beta.4677.b657a2204
The `Version Tag` is the version that the beta release will become
when it is released. You can match the `Git Commit Hash` up with the
[git log](https://github.com/rclone/rclone/commits/master). The most
recent release will have the largest `Version Tag` and `Commit Number`
and will normally be at the end of the list.
Some beta releases may have a branch name also:
{Version Tag}-{Commit Number}-g{Git Commit Hash}-{Branch Name}
{Version Tag}-beta.{Commit Number}.{Git Commit Hash}.{Branch Name}
eg
v1.53.0-beta.4677.b657a2204.semver
The presence of `Branch Name` indicates that this is a feature under
development which will at some point be merged into the normal betas
@@ -70,7 +79,7 @@ script) from a URL which doesn't change then you can use these links.
 | Arch-OS | Windows | macOS | Linux | .deb | .rpm | FreeBSD | NetBSD | OpenBSD | Plan9 | Solaris |
 |:-------:|:-------:|:-----:|:-----:|:----:|:----:|:-------:|:------:|:-------:|:-----:|:-------:|
 | Intel/AMD - 64 Bit | {{< cdownload windows amd64 >}} | {{< cdownload osx amd64 >}} | {{< cdownload linux amd64 >}} | {{< cdownload linux amd64 deb >}} | {{< cdownload linux amd64 rpm >}} | {{< cdownload freebsd amd64 >}} | {{< cdownload netbsd amd64 >}} | {{< cdownload openbsd amd64 >}} | {{< cdownload plan9 amd64 >}} | {{< cdownload solaris amd64 >}} |
-| Intel/AMD - 32 Bit | {{< cdownload windows 386 >}} | {{< cdownload osx 386 >}} | {{< cdownload linux 386 >}} | {{< cdownload linux 386 deb >}} | {{< cdownload linux 386 rpm >}} | {{< cdownload freebsd 386 >}} | {{< cdownload netbsd 386 >}} | {{< cdownload openbsd 386 >}} | {{< cdownload plan9 386 >}} | - |
+| Intel/AMD - 32 Bit | {{< cdownload windows 386 >}} | - | {{< cdownload linux 386 >}} | {{< cdownload linux 386 deb >}} | {{< cdownload linux 386 rpm >}} | {{< cdownload freebsd 386 >}} | {{< cdownload netbsd 386 >}} | {{< cdownload openbsd 386 >}} | {{< cdownload plan9 386 >}} | - |
 | ARMv6 - 32 Bit | - | - | {{< cdownload linux arm >}} | {{< cdownload linux arm deb >}} | {{< cdownload linux arm rpm >}} | {{< cdownload freebsd arm >}} | {{< cdownload netbsd arm >}} | - | - | - |
 | ARMv7 - 32 Bit | - | - | {{< cdownload linux arm-v7 >}} | {{< cdownload linux arm-v7 deb >}} | {{< cdownload linux arm-v7 rpm >}} | {{< cdownload freebsd arm-v7 >}} | {{< cdownload netbsd arm-v7 >}} | - | - | - |
 | ARM - 64 Bit | - | - | {{< cdownload linux arm64 >}} | {{< cdownload linux arm64 deb >}} | {{< cdownload linux arm64 rpm >}} | - | - | - | - | - |


@@ -273,6 +273,12 @@ the magic, pretending to be user foo.
 - `gdrive:backup` - use the remote called gdrive, work in
 the folder named backup.
+Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead:
+- in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step #1
+- use rclone without specifying the `--drive-impersonate` option, like this:
+    `rclone -v lsf gdrive:backup`
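
For contrast, the normal impersonation call that this workaround replaces would look like the following (a sketch assuming a remote named `gdrive` and an example user `foo@example.com`):

    rclone -v --drive-impersonate foo@example.com lsf gdrive:backup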
 ### Team drives ###
 If you want to configure the remote to point to a Google Team Drive


@@ -136,6 +136,13 @@ from the rclone image.
 reside on the host with a non-root UID:GID, you need to pass these on the container
 start command line.
+- If you want to access the RC interface (either via the API or the Web UI), it is
+  required to set the `--rc-addr` to `:5572` in order to connect to it from outside
+  the container. An explanation about why this is necessary is present [here](https://web.archive.org/web/20200808071950/https://pythonspeed.com/articles/docker-connection-refused/).
+  * NOTE: Users running this container with the docker network set to `host` should
+    probably set it to listen to localhost only, with `127.0.0.1:5572` as the value for
+    `--rc-addr`
 - It is possible to use `rclone mount` inside a userspace Docker container, and expose
 the resulting fuse mount to the host. The exact `docker run` options to do that might
 vary slightly between hosts. See, e.g. the discussion in this
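
A minimal sketch of the RC bullet above, publishing the port and pointing rclone at a config directory on the host (the volume path is illustrative, and you may also need RC credentials such as `--rc-user`/`--rc-pass`):

    docker run --rm \
        -v ~/.config/rclone:/config/rclone \
        -p 5572:5572 \
        rclone/rclone \
        rcd --rc-web-gui --rc-addr :5572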
@@ -183,14 +190,23 @@ latest release is recommended. Then
     go build
     ./rclone version
-This will leave you a checked out version of rclone you can modify.
+This will leave you a checked out version of rclone you can modify and
+send pull requests with. If you use `make` instead of `go build` then
+the rclone build will have the correct version information in it.
-You can also build rclone with:
+You can also build the latest stable rclone with:
-    go get -u -v github.com/rclone/rclone
+    go get github.com/rclone/rclone
-and this will build the binary in `$GOPATH/bin` (`~/go/bin/rclone` by
-default) after downloading the source to the go module cache..
+or the latest version (equivalent to the beta) with
+
+    go get github.com/rclone/rclone@master
+
+These will build the binary in `$(go env GOPATH)/bin`
+(`~/go/bin/rclone` by default) after downloading the source to the go
+module cache. Note - do **not** use the `-u` flag here. This causes go
+to try to update the dependencies that rclone uses and sometimes these
+don't work with the current version of rclone.
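
Putting those commands together, a quick from-source session might look like this (the final path assumes a default Go setup):

    # latest stable release
    go get github.com/rclone/rclone
    # or the tip of master (equivalent to the beta)
    go get github.com/rclone/rclone@master
    # confirm what was built
    $(go env GOPATH)/bin/rclone version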
 ## Installation with Ansible ##


@@ -294,7 +294,12 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
 {{< rem autogenerated options stop >}}
-### Limitations ###
+### Limitations
+
+If you don't use rclone for 90 days the refresh token will
+expire. This will result in authorization problems. This is easy to
+fix by running the `rclone config reconnect remote:` command to get a
+new token and refresh token.
 #### Naming ####
@@ -324,24 +329,45 @@ list files: UnknownError:`. See
 An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).
-### Versioning issue ###
+### Versions
-Every change in OneDrive causes the service to create a new version.
-This counts against a users quota.
-For example changing the modification time of a file creates a second
-version, so the file is using twice the space.
+Every change in a file in OneDrive causes the service to create a new
+version of the file. This counts against a user's quota. For
+example changing the modification time of a file creates a second
+version, so the file apparently uses twice the space.
-The `copy` is the only rclone command affected by this as we copy
-the file and then afterwards set the modification time to match the
-source file.
+For example the `copy` command is affected by this as rclone copies
+the file and then afterwards sets the modification time to match the
+source file, which uses another version.
-**Note**: Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an [update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:
+You can use the `rclone cleanup` command (see below) to remove all old
+versions.
+
+Or you can set the `no_versions` parameter to `true` and rclone will
+remove versions after operations which create new versions. This takes
+extra transactions so only enable it if you need it.
+
+**Note** At the time of writing OneDrive Personal creates versions
+(but not for setting the modification time) but the API for removing
+them returns "API not found" so cleanup and `no_versions` should not
+be used on OneDrive Personal.
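
As a sketch of enabling this on an existing (Business) remote, given the note above (the remote name `onedrive` is illustrative), either edit the config file or use:

    rclone config update onedrive no_versions true

which writes `no_versions = true` into that remote's section of the config file.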
+### Disabling versioning
+
+Starting October 2018, users will no longer be able to
+disable versioning by default. This is because Microsoft has brought
+an
+[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390)
+to the mechanism. To change this new default setting, a PowerShell
+command is required to be run by a SharePoint admin. If you are an
+admin, you can run these commands in PowerShell to change that
+setting:
 1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already)
-1. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
-1. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
-1. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
-1. `Disconnect-SPOService` (to disconnect from the server)
+2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
+3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials)
+4. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
+5. `Disconnect-SPOService` (to disconnect from the server)
 *Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.*
@@ -359,6 +385,20 @@ Note: This will disable the creation of new file versions, but will not remove a
 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
 9. Restore the versioning settings after using rclone. (Optional)
+### Cleanup
+
+OneDrive supports `rclone cleanup` which causes rclone to look through
+every file under the path supplied and delete all versions but the
+current version. Because this involves traversing all the files, then
+querying each file for versions it can be quite slow. Rclone does
+`--checkers` tests in parallel. The command also supports `-i` which
+is a great way to see what it would do.
+
+    rclone cleanup -i remote:path/subdir # interactively remove all old versions for path/subdir
+    rclone cleanup remote:path/subdir    # unconditionally remove all old versions for path/subdir
+
+**NB** OneDrive Personal can't currently delete versions
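
Because the scan runs `--checkers` checks in parallel, raising that value can make cleanup faster on large trees, e.g. (the value `16` is an arbitrary illustration):

    rclone cleanup --checkers 16 remote:path/subdir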
 ### Troubleshooting ###
 #### Unexpected file size/hash differences on Sharepoint ####
