mirror of https://github.com/rclone/rclone.git synced 2025-12-21 02:33:49 +00:00

Compare commits


398 Commits

Author SHA1 Message Date
Nick Craig-Wood
e3a77f218b operations: don't remove destination files on errored transfers
See: https://forum.rclone.org/t/transfer-on-mega-in-ftp-mode-is-not-working/24642/7
2021-06-16 12:02:45 +01:00
Ivan Andreev
80bccacd83 fs: split overgrown fs.go (#5405)
Nothing is added or removed and no package is renamed by this change.
It just rearranges definitions between source files in the fs directory.

New source files:
- types.go      Filesystem types and interfaces
- features.go   Features and optional interfaces
- registry.go   Filesystem registry and backend options
- newfs.go      NewFs and its helpers
- configmap.go  Getters and Setters for ConfigMap
- pacer.go      Pacer with logging and calculator
The final fs.go contains what is left.

Also rename options.go to open_options.go
to dissociate from registry options.
2021-06-14 14:42:49 +03:00
Nick Craig-Wood
3349b055f5 fichier: fix move of files in the same directory
See: https://forum.rclone.org/t/1fichier-rclone-does-not-allow-to-rename-files-and-folders-when-you-mount-a-1fichier-disk-drive/24726/24
2021-06-11 14:21:23 +01:00
Nick Craig-Wood
bef0c23e00 fichier: make error messages report text from the API
See: https://forum.rclone.org/t/1fichier-rclone-does-not-allow-to-rename-files-and-folders-when-you-mount-a-1fichier-disk-drive/24726/24
2021-06-11 14:21:23 +01:00
Nick Craig-Wood
84201ed891 zoho: improve wording for region - fixes #5377 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
04608428bf Add Florian Penzkofer to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
6aaa06d7be Add darrenrhs to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
e53bad5353 Add Reid Buzby to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
f5397246eb Add Chris Lu to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
b8b73f2656 Add database64128 to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
96b67ce0ec Add Tyson Moore to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
e2beeffd76 Add Tom to contributors 2021-06-11 14:21:23 +01:00
Nick Craig-Wood
30b949642d Add acsfer to contributors 2021-06-11 14:21:23 +01:00
Florian Penzkofer
92b3518c78 fichier: support downloading password protected files and folders 2021-06-10 19:00:26 +02:00
Ivan Andreev
062919e08c deprecate cache backend (#5382) 2021-06-10 19:52:55 +03:00
darrenrhs
654f5309b0 docs: drive: include requirement to publish app in step-by-step - fixes #5393 2021-06-10 17:00:52 +01:00
albertony
318fa4472b docs: fix incorrect syntax in config update example 2021-06-10 08:59:18 +02:00
Reid Buzby
5104e24153 docs: fix incorrect token type for yandex
https://forum.rclone.org/t/yandex-documentation/24445/2
2021-06-09 13:04:55 +02:00
albertony
9d87a5192d docs: fix code section formatting in filtering docs
Fixes #5387
2021-06-08 18:53:18 +02:00
Ivan Andreev
29f967dba3 make commanddocs for v1.56 (#5383) 2021-06-08 18:57:04 +03:00
Chris Lu
1f846c18d4 s3: Add SeaweedFS 2021-06-08 09:59:57 +01:00
albertony
41f561bf26 jottacloud: fix invalid url in output from link command
Fixes #5370
2021-05-31 10:40:21 +02:00
database64128
df60e6323c 🧹 GCS: Clean up time format constants 2021-05-28 14:44:50 +01:00
database64128
58006a925a 📑 GCS: Update docs on mtime
- Mention the new modification time behavior and the modify window issue.
- Unify markdown format.
- ref rclone/rclone#5331
2021-05-28 14:44:50 +01:00
database64128
ee2fac1855 🕰️ GCS: Compatible with gsutil's mtime metadata
- Write `goog-reserved-file-mtime` in addition to `mtime`.
- Fallback to `goog-reserved-file-mtime` if `mtime` doesn't exist.
- ref rclone/rclone#5331
2021-05-28 14:44:50 +01:00
Tyson Moore
2188fe38e5 docs: add caveat about DSCP on Windows 2021-05-28 13:43:38 +01:00
Tyson Moore
b5f8f0973b fshttp: implement graceful DSCP error handling 2021-05-28 13:43:38 +01:00
Tyson Moore
85b8ba9469 fshttp: rework address parsing for DSCP (fixes #5293) 2021-05-28 13:43:38 +01:00
Tom
04a1f673f0 serve sftp: add --stdio flag to serve via stdio - fixes #5311 2021-05-28 13:40:32 +01:00
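A minimal usage sketch for the new flag, assuming a hypothetical remote name and path - serve sftp --stdio serves a single SFTP session over stdin/stdout, for example from an sshd forced command:

    rclone serve sftp --stdio remote:backup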
albertony
0574ebf44a vfs: do not print notice about missing poll-interval support when set to 0
Fixes #5359
2021-05-28 13:09:15 +02:00
albertony
22e86ce335 vfs: fix that umask option cannot be set as environment variable (#5351)
Fixes #5350
2021-05-22 20:48:02 +02:00
acsfer
c9fce20249 tardigrade: add warning about too many open files - Fixes #5310 2021-05-21 20:04:57 +01:00
Ivan Andreev
5b6f637461 fs/hash: align hashsum names and update documentation (#5339)
- Unify all hash names as lowercase alphanumerics without punctuation.
- Legacy names continue to work but disappear from docs; they can be deprecated or dropped later.
- Make rclone hashsum print supported hash list in case of wrong spelling.
- Update documentation.

Fixes #5071
Fixes #4841
2021-05-21 17:32:33 +03:00
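A minimal usage sketch, assuming a hypothetical remote - hash names are now lowercase alphanumerics, and a misspelt name makes rclone hashsum print the supported list:

    rclone hashsum md5 remote:dir
    rclone hashsum sha1 remote:dir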
albertony
07f2f3a62e docs: fix link to paths on windows section 2021-05-19 22:11:17 +02:00
albertony
6dc190ec93 docs: mention that network/unc paths are supported in local filesystem on windows 2021-05-19 22:11:17 +02:00
Nick Craig-Wood
71f75a1d95 operations: fix tests work on compress by supplying incompressible data 2021-05-18 17:38:32 +01:00
Nick Craig-Wood
1b44035e45 filefabric: fix listing after change of from field from "int" to int. 2021-05-18 17:11:16 +01:00
Nick Craig-Wood
054b467f32 check: log the hash in use like cryptcheck does
See: https://forum.rclone.org/t/does-a-rclone-check-on-similar-remotes-still-compute-hashes/24288/15
2021-05-18 16:21:19 +01:00
Ivan Andreev
23da913d03 dbhashsum: drop command deprecated a year ago - #4837 (#5336)
dbhashsum was deprecated in rclone 1.52 on 2020-05-27
this patch drops the command completely since rclone 1.56
2021-05-18 12:27:17 +03:00
Nick Craig-Wood
c0cda087a8 s3: don't check to see if remote is object if it ends with /
Before this change, rclone would always check the root to see if it
was an object.

This change doesn't check to see if the root is an object if the path
ends with a /

This avoids a transaction where rclone HEADs the path to see if it
exists.

See #4990
2021-05-17 16:43:34 +01:00
Nick Craig-Wood
1773717a47 fs/march: improve errors when root source/destination doesn't exist
See: https://forum.rclone.org/t/rclone-attempts-to-read-files-in-the-destination-directory-when-the-source-doesnt-exist/23412
2021-05-17 16:38:03 +01:00
Nick Craig-Wood
04308dcaa1 local: add --local-unicode-normalization (and remove --local-no-unicode-normalization)
macOS stores files in NFD form and transferring them like this to some
systems causes the Korean language to display incorrectly.

This adds the flag --local-unicode-normalization to optionally
normalize the file names to NFC.

This also removes the (long deprecated) --local-no-unicode-normalization flag

See: https://forum.rclone.org/t/support-for-korean-jaso-conversion/19435
2021-05-17 16:34:25 +01:00
Nick Craig-Wood
06f27384dd b2: fix versions and .files with no extension - fixes #5244 2021-05-17 16:20:29 +01:00
Nick Craig-Wood
82f1f7d2c4 config: expand docs on config protocol #3455 2021-05-17 12:10:58 +01:00
Nick Craig-Wood
6555d3eb33 onedrive: fix failed to configure: empty token found error #3455
This bug was caused as part of the config rework
2021-05-17 12:10:58 +01:00
Nick Craig-Wood
03229cf394 bin/config.py: add --rc flag for testing to an rclone rcd #3455 2021-05-17 12:10:58 +01:00
Nick Craig-Wood
f572bf7829 Add sp31415t1 to contributors 2021-05-17 12:10:58 +01:00
sp31415t1
f593558dc2 docs: improve --disable help 2021-05-14 15:44:58 +01:00
Ivan Andreev
08040a57b0 dropbox: improve "own App IP" instructions (#5325)
Instructions in https://rclone.org/dropbox/#get-your-own-dropbox-app-id
are a little incomplete. I had to guess a few extra details to make things work.
This patch adds missing parts.

Fixes #5242
2021-05-14 17:42:30 +03:00
Alexey Ivanov
2fa7a3c0fb dropbox: simplify chunked uploads
Signed-off-by: Alexey Ivanov <rbtz@dropbox.com>
2021-05-14 14:07:44 +01:00
Nick Craig-Wood
798d1293df Add Alexey Ivanov to contributors 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
75c417ad93 dropbox: fix async batch missing the last few entries 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
5ee646f264 dropbox: make batcher retry all errors so it doesn't exit early
See: https://forum.rclone.org/t/dropbox-too-many-requests-or-write-operations-trying-again-in-15-seconds/23316/18
2021-05-14 14:07:44 +01:00
Nick Craig-Wood
4a4aca4da7 dropbox: fix deadlock in batch Commit 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
2e4b65f888 dropbox: add --dropbox-batch-mode flag to speed up uploading #5156
This adds 3 upload modes for dropbox: off, sync and async, and makes
sync the default.

This should improve uploads (especially for small files) greatly.
2021-05-14 14:07:44 +01:00
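A minimal usage sketch of the new flag, with a hypothetical remote and path - the modes are off, sync and async as described above:

    rclone copy /path/with/many/small/files dropbox:backup --dropbox-batch-mode async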
Nick Craig-Wood
77cda6773c config: tidy code to use UpdateRemote/CreateRemote instead of editOptions #3455 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
dbc5167281 bin: add config.py as an example of how to use the state based config #3455 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
635d1e10ae config create: add --state and --result parameters #3455 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
296ceadda6 fs: add --all to rclone config create/update to ask all the config questions #3455
This also factors the config questions into a state based mechanism so
a backend can be configured using the same dialog as rclone config but
remotely.
2021-05-14 14:07:44 +01:00
Nick Craig-Wood
7ae2891252 fs: Add Exclusive parameter to Option to choose Examples only #3455 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
99caf79ffe config: allow config create and friends to take key=value parameters #3455 2021-05-14 14:07:44 +01:00
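A minimal sketch of the key=value form, with a hypothetical remote name and option values:

    rclone config create mydrive drive scope=drive use_trash=false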
Nick Craig-Wood
095cf9e4be config create: add --non-interactive and --continue parameters #3455
This adds a mechanism to add external interfaces to rclone using the
state based configuration.
2021-05-14 14:07:44 +01:00
buengese
e57553930f jottacloud: fix legacy auth with state based config system
...also some minor cleanup
2021-05-14 14:07:44 +01:00
Nick Craig-Wood
f122808d86 fs: add names to each config parameter so we can override them #3455 2021-05-14 14:07:44 +01:00
Nick Craig-Wood
94dbfa4ea6 fs: change Config callback into state based callback #3455
This is a very large change which turns the post Config function in
backends into a state based call and response system so that
alternative user interfaces can be added.

The existing config logic has been converted, but it is quite
complicated and follow-up commits will likely be needed to fix it!

Follow up commits will add a command line and API based way of using
this configuration system.
2021-05-14 14:07:44 +01:00
Nick Craig-Wood
6f2e525821 Add Antoon Prins to contributors 2021-05-14 14:07:44 +01:00
Ivan Andreev
119bddc10b selfupdate: fix archive name on macos 2021-05-13 22:35:39 +03:00
albertony
28e9fd45cc vfs: avoid unnecessary subdir in cache path
Fixes #5316
2021-05-13 11:16:42 +02:00
Antoon Prins
326f3b35ff webdav: add headers option 2021-05-12 09:52:07 +01:00
albertony
ce83228cb2 sftp: expand tilde and environment variables in configured known_hosts_file (#5322)
Fixes #5220
2021-05-11 19:58:26 +02:00
Chris Macklin
732bc08ced config: replace defaultConfig with a thread-safe in-memory implementation 2021-05-07 16:04:09 +01:00
Nick Craig-Wood
6ef7178ee4 local: always use readlink to read symlink size
It was discovered that on some Android systems the stat size of a symlink
is different from the size that readlink returns.

This was giving errors like this

    transport connection broken: http: ContentLength=30 with Body length 28

There are enough cases where the readlink size differs from the stat
size that this patch now always does readlink to work out
the size of a symlink.

Since symlinks are relatively uncommon this shouldn't affect
performance too much and will mean that the size is always correct.

This deprecates the --local-zero-size-links flag which is now
effectively always enabled.

See: https://forum.rclone.org/t/problem-with-symlinks-and-links/23840/
2021-05-04 08:53:09 +01:00
Nick Craig-Wood
9ff6f48d74 Remove accidentally committed *.orig and *.rej files and ignore 2021-05-03 07:58:29 +01:00
Nick Craig-Wood
532af77fd1 Add Chris Macklin to contributors 2021-05-03 07:58:29 +01:00
Nolan Woods
ab7dfe0c87 http: clean up Bind to better use middleware 2021-05-02 11:31:01 +01:00
Nolan Woods
e489a101f6 lib/http: add default 404 handler 2021-05-02 11:30:02 +01:00
Chris Macklin
35a86193b7 accounting: deglobalize startTime/elapsedTime - fixes #5282 2021-05-01 14:51:21 +01:00
x0b
2833941da8 build: add gomobile android build 2021-04-30 20:39:04 +01:00
Nick Craig-Wood
9e6c23d9af fs: add --disable-http2 for global http2 disable #5253 2021-04-30 20:31:04 +01:00
Nick Craig-Wood
8bef972262 Add Gautam Kumar to contributors 2021-04-30 20:31:04 +01:00
Nick Craig-Wood
0a968818f6 Add Nolan Woods to contributors 2021-04-30 20:31:04 +01:00
Nick Craig-Wood
c2ac353183 Add lewisxy to contributors 2021-04-30 20:31:04 +01:00
Nick Craig-Wood
773da395fb Add Tatsuya Noyori to contributors 2021-04-30 20:31:04 +01:00
Gautam Kumar
9e8cd6bff9 docs: fixed some typos 2021-04-28 22:55:27 +01:00
Nolan Woods
5d2e327b6f http: Replace httplib with lib/http 2021-04-28 22:54:15 +01:00
Nolan Woods
77221d7528 httplib: Deprecate package 2021-04-28 22:54:15 +01:00
Nolan Woods
1971c1ef87 httplib: Move httplib/serve/data to ../serve/http/data 2021-04-28 22:54:15 +01:00
Nolan Woods
7e7dbe16c2 httplib: Add --template config and flags to serve/data 2021-04-28 22:54:15 +01:00
Nolan Woods
002d323c94 lib/http: Move HTTP object serialization logic to lib/http 2021-04-28 22:54:15 +01:00
Nolan Woods
4ad62ec016 lib/http: Add authentication middleware with basic auth implementation 2021-04-28 22:54:15 +01:00
Nolan Woods
95ee14bb2c feat: Add lib/http
lib/http provides an abstraction for a central http server that services can bind routes to
2021-04-28 22:54:15 +01:00
Romeo Kienzler
88aabd1f71 docs: corrected spelling
from "Check the integrity of an encrypted remote." to "Check the integrity of a crypted remote."
2021-04-28 22:50:55 +01:00
Nick Craig-Wood
34627c5c7e librclone: update docs for merge #4891 2021-04-28 20:42:00 +01:00
Nick Craig-Wood
e33303df94 librclone: add basic Python bindings with tests #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
665eceaec3 librclone: catch panics at the language change boundary #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
ba09ee18bb librclone: factor into gomobile and internal implementation #4891
This was needed because gomobile can't use a main package whereas this
is required to make a normal shared C library.
2021-04-28 16:55:08 +01:00
Nick Craig-Wood
62bf63d36f librclone: add tests for build and execute them in the actions #4891 2021-04-28 16:55:08 +01:00
Nick Craig-Wood
f38c262471 librclone: change interface for C code and add Mobile interface #4891
This changes the interface for the C code to return a struct on the
stack that is defined in the code rather than one which is defined by
the cgo compiler. This is more future proof and in line with the
gomobile interface.

This also adds a gomobile interface RcloneMobileRPC which uses generic
go types conforming to the gobind restrictions.

It also fixes up initialisation errors.
2021-04-28 16:55:08 +01:00
Nick Craig-Wood
5db88fed2b librclone: exports, errors, docs and examples #4891
- rename C exports to be namespaced with Rclone prefix
- fix error handling in RcloneRPC
- add more examples
- add more docs
- add README
- simplify ctest Makefile
2021-04-28 16:55:08 +01:00
lewisxy
316e65589b librclone: export the rclone RC as a C library #4891 2021-04-28 16:55:08 +01:00
Tatsuya Noyori
4401d180aa s3: add --s3-no-head-object
This stops rclone doing any HEAD requests on objects.
2021-04-28 11:05:54 +01:00
Nick Craig-Wood
9ccd870267 Move the how to use GitHub info in the bug/issue templates to the end
This is so that we see the text of the bug/issue first rather than the
how-to-use-GitHub boilerplate, which is very useful when posting bug reports
to the forum or social media.
2021-04-28 09:40:19 +01:00
Nick Craig-Wood
16d1da2c1e vfs: remove item.metaDirty as it was confusing and not used
See discussion in #5277
2021-04-28 09:33:22 +01:00
Nick Craig-Wood
00a0ee1899 vfs: fix modtime changing when reading file into cache - fixes #5277
Before this change but after:

aea8776a43 vfs: fix modtimes not updating when writing via cache #4763

When a file was opened read-only the modtime was read from the cached
file. However this modtime wasn't correct leading to an incorrect
result.

This change fixes the definition of `item.IsDirty` to be true only
when the data is dirty. This fixes the problem as a read only file
isn't considered dirty.
2021-04-28 09:33:22 +01:00
Nick Craig-Wood
b78c9a65fa backends: remove log.Fatal and replace with error returns #5234
This changes the Config interface so that it returns an error.
2021-04-27 18:18:08 +01:00
Nick Craig-Wood
ef3c350686 box: return errors instead of calling log.Fatal with them #5234 2021-04-27 18:18:08 +01:00
Nick Craig-Wood
742af80972 Add jtagcat to contributors 2021-04-27 18:18:08 +01:00
albertony
08a2df51be Use decimal prefixes for counts
Fixes #5126
2021-04-27 02:25:52 +03:00
albertony
2925e1384c Use binary prefixes for size and rate units
Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
2021-04-27 02:25:52 +03:00
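A minimal sketch assuming the new suffixes are accepted wherever a size is read (hypothetical remotes and values):

    rclone copy src: dst: --max-size 1GiB --bwlimit 10Mi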
albertony
2ec0c8d45f stats: correct spelling of data rate units 2021-04-27 02:25:52 +03:00
albertony
98579608ec docs: cleanup spelling of size and rate units 2021-04-27 02:25:52 +03:00
Caleb Case
a1a41aa0c1 backend/tardigrade: use negative offset
v1.4.6 of uplink allows us to do a negative offset from the end of the
file. This removes a round trip when requesting the last N bytes of a
file.

Previous to v1.4.6 of uplink it wasn't possible to do a negative offset
on download. This meant that to fulfill the semantics of http range
headers it was necessary to first fetch the size of the object via a
stat call and compute absolute offset and length.
2021-04-27 02:20:08 +03:00
albertony
f8d56bebaf config: delay load config file (#5258)
Restructuring of config code in v1.55 resulted in the config
file being loaded early at process startup. If the configuration
file is encrypted this means the user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for a password.

Fixes #5236
Fixes #5228
2021-04-26 23:37:49 +02:00
jtagcat
5d799431a7 GitHub issue templates: Add GH Etiquette. 2021-04-26 18:12:37 +01:00
Leo Luan
8f23cae1c0 vfs: Add cache reset for --vfs-cache-max-size handling at cache poll interval
The vfs-cache-max-size parameter is probably confusing to many users.
The cache cleaner checks cache size periodically at the --vfs-cache-poll-interval
(default 60 seconds) interval and removes cache items in the following order.

(1) cache items that are not in use and with age > vfs-cache-max-age
(2) if the cache space used at this time still is larger than
vfs-cache-max-size, the cleaner continues to remove cache items that are
not in use.

The cache cleaning process does not remove cache items that are currently in use.
If the total space consumed by in-use cache items exceeds vfs-cache-max-size, the
periodic cache cleaner thread does not do anything further and leaves the in-use
cache items alone with a total space larger than vfs-cache-max-size.

A cache reset feature was introduced in 1.53 which resets in-use (but not dirty,
i.e., not being updated) cache items when additional cache data incurs an ENOSPC
error.  But this code was not activated in the periodic cache cleaning thread.

This patch adds the cache reset step in the cache cleaner thread during cache
poll to reset cache items until the total size of the remaining cache items is
below vfs-cache-max-size.
2021-04-26 17:55:52 +01:00
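A minimal mount sketch showing the flags discussed above, with a hypothetical remote and mount point:

    rclone mount remote:data /mnt/data --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-max-age 24h --vfs-cache-poll-interval 1m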
Mathieu Carbou
964088affa build: Only run event-based workflow scripts under rclone repo with manual override
This updates the actions to run event-based workflow scripts only
under the rclone repository and not forks. It also adds the
ability to manually trigger a build from a branch in rclone repository
and forks.

Fixes #5272
2021-04-26 17:52:03 +01:00
Nick Craig-Wood
f4068d406b Add Jeffrey Tolar to contributors 2021-04-26 16:57:21 +01:00
Jeffrey Tolar
7511b6f4f1 b2: don't include the bucket name in public link file prefixes
Including the bucket name as part of the `fileNamePrefix` passed to
`b2_get_download_authorization` results in a link valid for objects that
have the bucket name as part of the object path; e.g.,

    rclone link :b2:some-bucket/some-file

would result in a public link valid for the object
`some-bucket/some-file` in the `some-bucket` bucket (in rclone-remote
parlance, `:b2:some-bucket/some-bucket/some-file`). This will almost
certainly result in a broken link.

The B2 docs don't explicitly specify this behavior, but the example
given for `fileNamePrefix` provides some clarification.

See https://www.backblaze.com/b2/docs/b2_get_download_authorization.html.
2021-04-26 16:56:41 +01:00
Nick Craig-Wood
e618ea83dd s3: remove WebIdentityRoleProvider to fix crash on auth #5255
This code removes the code added in

15d19131bd s3: use aws web identity role provider

This code no longer works because it doesn't initialise the
tokenFetcher - leading to a nil pointer crash.

The proper way to initialise this is with the
NewWebIdentityCredentials but it isn't clear where to get the other
parameters: roleARN, roleSessionName, path.

In the linked issue a user reports rclone working with EKS anyway, so
perhaps this code is no longer needed.

If it is needed, hopefully someone who knows AWS better will come
along and fix it!

See: https://forum.rclone.org/t/add-support-for-aws-sso/23569
2021-04-26 16:55:50 +01:00
Nick Craig-Wood
34dc257c55 Add Kenny Parsons to contributors 2021-04-26 16:55:50 +01:00
Kenny Parsons
4cacf5d30c docs: clarify and add examples for sftp docs
- added clarification to default remote path if no path is specified 
- added examples for mounting a remote path (other than the default home directory) to a local folder.
2021-04-26 16:13:42 +01:00
Nick Craig-Wood
0537791d14 sftp: Fix performance regression by re-enabling concurrent writes #5197
Between rclone v1.54 and v1.55 there was an approx 3x performance
regression when transferring to distant SFTP servers (in particular
rsync.net).

This turned out to be due to the library github.com/pkg/sftp rclone
uses. Concurrent writes used to be enabled in this library by default
(for v1.12.0 as used in rclone v1.54) but they are no longer enabled
(for v1.13.0 as used in rclone v1.55) for safety reasons and it is
necessary to enable them specifically.

The safety concerns are due to the uncertainty as to whether writes
come in order and whether a half completed file might have holes in
it. This isn't a problem for rclone since a) it doesn't restart
uploads and b) it has a post-transfer checksum test.

This change introduces a new flag `--sftp-disable-concurrent-writes`
to control the feature which defaults to false, meaning that
concurrent writes are enabled as in v1.54.

However this isn't quite enough to fix the problem as the sftp library
needs to be able to sniff the size of the stream from the reader
passed in, so this also adds a `Size` interface to the reader to
enable this. This involved a patch to the library.

The library was reverted to v1.12.0 for v1.55.1 - this patch installs
v1.13.0+master to fix the Size interface problem.

See: https://github.com/pkg/sftp/issues/426
2021-04-26 09:24:28 +01:00
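A minimal sketch with a hypothetical remote - concurrent writes are on by default again, and the new flag turns them off for servers that can't cope with out-of-order writes:

    rclone copy bigfile sftp-remote:backup --sftp-disable-concurrent-writes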
Nick Craig-Wood
4b1d28550a Changelog updates from Version v1.55.1 2021-04-26 09:22:49 +01:00
Nick Craig-Wood
d27c35ee4a box: use upload preflight check to avoid listings in file uploads
Before this change, rclone checked to see if an object existed before
doing an upload by listing the destination directory. This was very
inefficient, especially with large directories.

After this change rclone uses the pre upload check API call which
checks to see if it is OK to upload an object, and also returns the ID
of an existing object which saves rclone having to do a directory
listing.
2021-04-25 11:45:44 +01:00
Nick Craig-Wood
ffec0d4f03 Add OleFrost to contributors 2021-04-25 11:45:39 +01:00
OleFrost
89daa9efd1 onedrive: Work around for random "Unable to initialize RPS" errors
OneDrive randomly returns the error message: "InvalidAuthenticationToken: Unable to initialize RPS". These unexpected errors typically caused the entire rclone command to fail.

This workaround recognizes these errors and marks them for a low level retry, which mostly succeeds. This will make rclone commands complete without being noticeably affected.

Fixes: #5270
2021-04-24 23:05:34 +01:00
Nick Craig-Wood
ee502a757f ncdu: update termbox-go library to fix crash - fixes #5259 2021-04-24 15:17:14 +01:00
Cnly
386acaa110 oauthutil: fix #5265 old authorize result not recognised 2021-04-23 01:20:52 +08:00
buengese
efdee3a5fe compress: fix compressed name regexp 2021-04-22 18:38:38 +02:00
Nick Craig-Wood
5d85e6bc9c dropbox: fix Unable to decrypt returned paths from changeNotify - fixes #5165
This was caused by incorrect use of strings.TrimLeft where
strings.TrimPrefix was required.
2021-04-21 10:52:05 +01:00
Nick Craig-Wood
4a9469a3dc test changenotify: add command to help debugging changenotify 2021-04-21 10:52:05 +01:00
Nick Craig-Wood
f8884a7200 build: fix version numbers in android branch builds 2021-04-20 17:40:06 +01:00
Nick Craig-Wood
2a40f00077 vfs: fix a code path which allows dirty data to be removed causing data loss
Before this change the VFS layer could remove a locally cached file
even if it had data which needed to be written back, thus causing data loss.

See: https://forum.rclone.org/t/rclone-1-55-doesnt-save-file-changes-if-the-file-has-been-reopened-during-upload-google-drive-mount/23646
2021-04-20 16:36:38 +01:00
Nick Craig-Wood
9799fdbae2 Add noabody to contributors 2021-04-20 16:36:38 +01:00
Nick Craig-Wood
492504a601 Add new email address for Caleb Case 2021-04-20 16:36:25 +01:00
Nick Craig-Wood
0c03a7fead Add Ansh Mittal to contributors 2021-04-20 16:31:40 +01:00
Nick Craig-Wood
7afb4487ef build: update all dependencies 2021-04-20 00:00:13 +01:00
noabody
b9d0ed4f5c make_manual.py: fix missing comma for doc build after uptobox merge
This fixes a problem introduced in

cd69f9e6e8 uptobox: add docs
2021-04-19 16:18:18 +01:00
Caleb Case
baa4c039a0 backend/tardigrade: Upgrade to uplink v1.4.6
Release notes: https://github.com/storj/uplink/releases/tag/v1.4.6

Follow up PRs will take advantage of the new bucket error and negative
offset support to remove roundtrips.
2021-04-19 16:14:56 +01:00
Alex Chen
31a8211afa oauthutil: raise fatal error if token expired without refresh token (#5252) 2021-04-18 12:04:13 +08:00
albertony
3544e09e95 config: treat any config file paths with filename notfound as memory-only config (#5235) 2021-04-18 00:09:03 +02:00
Ansh Mittal
b456be4303 drive: don't open browser when service account credentials specified

Fixes #5104
2021-04-17 19:49:53 +01:00
Nick Craig-Wood
3e96752079 dropbox: add missing team_data.member scope for use with --impersonate
See: https://forum.rclone.org/t/dropbox-business-not-accepting-oauth2/23390/32
2021-04-17 17:40:08 +01:00
buengese
4a5cbf2a19 cmd/ncdu: fix out of range panic in delete 2021-04-16 23:20:03 +02:00
Nick Craig-Wood
dcd4edc9f5 dropbox: fix About after scopes changes - rclone config reconnect needed
This adds the missing scope for the About call. To use it it will be
necessary to refresh the token with `rclone config reconnect`.

See: https://forum.rclone.org/t/dropbox-too-many-requests-or-write-operations-trying-again-in-15-seconds/23316/33
2021-04-16 15:07:03 +01:00
Nick Craig-Wood
7f5e347d94 Add Nazar Mishturak to contributors 2021-04-16 15:07:03 +01:00
Cnly
040677ab5b onedrive: also report root error if unable to cancel multipart upload 2021-04-16 12:41:38 +08:00
albertony
6366d3dfc5 docs: extend description of drive mount access on windows 2021-04-13 22:33:19 +02:00
albertony
60d376c323 docs: add guide to configuring autorun in install documentation 2021-04-13 22:33:19 +02:00
albertony
7b1ca716bf config: add touch command to ensure config exists at configured location (#5226)
A new command `rclone config touch` which calls config.SaveConfig().
Useful during testing of configuration location things.
It will ensure the config file exists and test that it is writable.
2021-04-13 19:25:09 +03:00
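A minimal usage sketch:

    rclone config touch    # ensure the config file exists at the configured location
    rclone config file     # show which config file is in use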
albertony
d8711cf7f9 config: create config file in windows appdata directory by default (#5226)
Use %AppData% as the primary default for the configuration file on Windows,
which is more in line with Windows standards, while the existing default
of using the home directory follows Unix standards - though that made rclone
more consistent across different OSes.

Fixes #4667
2021-04-13 19:25:09 +03:00
buengese
cd69f9e6e8 uptobox: add docs 2021-04-13 17:46:07 +02:00
buengese
a737ff21af uptobox: integration tests 2021-04-13 17:46:07 +02:00
buengese
ad9aa693a3 new backend: uptobox 2021-04-13 17:46:07 +02:00
Nazar Mishturak
964c3e0732 rcat: add --size flag for more efficient uploads of known size - fixes #4403
This allows preallocating space at remote end with RcatSize.
2021-04-13 12:25:47 +01:00
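A minimal sketch with a hypothetical path and size - the size given must match the number of bytes actually streamed:

    cat 1MiB.bin | rclone rcat --size 1048576 remote:path/1MiB.bin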
Nick Craig-Wood
a46a3c0811 test makefiles: add log levels and speed summary 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
60dcafe04d test makefiles: add --seed flag and make data generated repeatable #5214 2021-04-12 18:14:01 +01:00
Nick Craig-Wood
813bf029d4 Add Dominik Mydlil to contributors 2021-04-12 18:14:01 +01:00
albertony
f2d3264054 config: prevent use of windows reserved names in config file name 2021-04-12 18:17:19 +02:00
albertony
23a0d4a1e6 config: fix issues with memory-only config file paths
Fixes #5222
2021-04-12 18:17:19 +02:00
albertony
b96ebfc40b docs: less confusing example with config path option 2021-04-12 18:17:19 +02:00
Dominik Mydlil
3fe2aaf96c crypt: support timestamped filenames from --b2-versions
With the file version format standardized in lib/version, `crypt` can
now treat the version strings separately from the encrypted/decrypted
file names. This allows --b2-versions to work with `crypt`.

Fixes #1627

Co-authored-by: Luc Ritchie <luc.ritchie@gmail.com>
2021-04-12 15:59:18 +01:00
Dominik Mydlil
c163e6b250 b2: factor version handling into lib/version
Standardizes the filename version tagging so that it can be used by any
backend.
2021-04-12 15:59:18 +01:00
Nick Craig-Wood
c1492cfa28 test: add sftp to rsync.net to integration tests 2021-04-12 15:52:31 +01:00
Nick Craig-Wood
38a8071a58 Add Ashok Gelal to contributors 2021-04-12 15:52:31 +01:00
Ashok Gelal
8c68a76a4a install.sh: silence the progress output with curl requests
This commit silences the progress output from the curl requests made by the install.sh script.

Having progress output seems to break some automated scripts and there isn't a way to pass
flags to these curl requests to disable it.
2021-04-12 14:18:29 +01:00
Dan Dascalescu
e7b736f8ca docs: fix minor typo in symlinks / junction points 2021-04-10 15:34:34 +02:00
Nick Craig-Wood
cb30a8c80e webdav: fix sharepoint auth over http - fixes #4418
Before this change rclone would auth over https even when the server
was configured with http.

Authing over http obviously isn't ideal, however this type of server
is on-premise and doesn't work over https.
2021-04-10 11:59:56 +01:00
Ivan Andreev
629a3eeca2 backend/ftp: fix implicit TLS after PR #4266 (#5219)
PR #4266 modified ftpConnection to make the ftp library use
a custom dial function which is QoS aware and takes care of TLS.
However the ServerConn.Login function from the ftp library also needs
the TLS config passed explicitly as a trigger for sending the PBSZ and PROT
options to the FTP server. This was not taken care of, resulting in a
failure to connect via FTP with implicit TLS.
This PR fixes that.

Fixes #5210
2021-04-09 01:43:50 +03:00
Nick Craig-Wood
f52ae75a51 rclone authorize: Send and receive extra config options to fix oauth
Before this change any backends which required extra config in the
oauth phase (like the `region` for zoho) didn't work with `rclone
authorize`.

This change serializes the extra config and passes it to `rclone
authorize` and returns new config items to be set from rclone
authorize.

`rclone authorize` will still accept its previous configuration
parameters for use with old rclones.

Fixes #5178
2021-04-08 12:34:15 +01:00
Nick Craig-Wood
9d5c5bf7ab fs: add Options.NonDefault to read options which aren't at their default #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
53573b4a09 configmap: Add Encode and Decode methods to Simple for command line encoding #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
3622e064f5 configmap: Add priorities to configmap Setters #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
6d28ea7ab5 fs: factor config override detection into its own function #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
b9fd02039b authorize: refactor to use new config interfaces #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
1a41c930f3 configmap: add ClearSetters to get rid of all setters #5178 2021-04-08 12:34:15 +01:00
albertony
ddb7eb6e0a docs: fixed some typos 2021-04-08 10:19:03 +02:00
buengese
c114695a66 zoho: do not ask for mountpoint twice when using headless setup 2021-04-08 00:23:27 +02:00
Nick Craig-Wood
fcba51557f dropbox: set visibility in link sharing when --expire is set
Note that due to a bug in the dropbox SDK you'll need to set --expire
to access this.

See: https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
See: https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211
2021-04-07 13:58:37 +01:00
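A minimal usage sketch with a hypothetical remote path:

    rclone link --expire 24h dropbox:shared/report.pdf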
Nick Craig-Wood
9393225a1d link: use "off" value for unset expiry 2021-04-07 13:58:37 +01:00
albertony
3d3ff61f74 docs: minor cleanup of space around code section 2021-04-07 08:47:29 +02:00
albertony
d98f192425 docs: WinFsp 2021 is out of beta 2021-04-07 08:13:40 +02:00
Nick Craig-Wood
54771e4402 sync: fix incorrect error reported by graceful cutoff - fixes #5203
Before this change, a sync which was finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.

This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
2021-04-06 13:08:42 +01:00
Nick Craig-Wood
dc286529bc drive: fix backend copyid of google doc to directory - fixes #5196
Before this change the google doc was being copied to the directory
without an extension.
2021-04-06 11:46:52 +01:00
Nick Craig-Wood
7dc7c021db sftp: fix Update ReadFrom failed: failed to send packet: EOF errors
In

a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)

Idle SFTP connections were closed after 1 minute. However due to the
way SSH multiplexes connections over a single SSH connection this
meant that if uploads or downloads went on for more than one minute
they failed with "EOF errors" as their underlying connection was
closed.

This fixes the problem by not clearing idle connections if there are
any transfers in progress.

Fixes #5197
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
fe1aa13069 sftp: revert sftp library to v1.12.0 from v1.13.0 to fix performance regression #5197
This reverts the library update done in this commit.

713f8f357d sftp: fix "file not found" errors for read once servers

Reverting this commit triples the performance to a far away sftp server.

See: https://github.com/pkg/sftp/issues/426
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
5fa8e7d957 Add Nick Gaya to contributors 2021-04-06 10:01:49 +01:00
Nick Gaya
9db7c51eaa sync: don't warn about --no-traverse when --files-from is set 2021-04-05 20:36:39 +01:00
Ivan Andreev
3859fe2f52 cmd/version: print os/version, kernel and bitness (#5204)
Related to #5121

Note: OpenBSD is still a stub. This will be fixed after the upstream PR gets resolved
https://github.com/shirou/gopsutil/pull/993
2021-04-05 21:53:09 +03:00
buengese
0caf417779 zoho: fix error when region isn't set 2021-04-05 15:11:30 +02:00
Ivan Andreev
9eab258ffb build: add build tag noselfupdate
Allow downstream packaging to build rclone without selfupdate command:
$ go build -tags noselfupdate

Fixes #5187
2021-04-04 11:22:09 +03:00
Nick Gaya
7df57cd625 contributing.md: update setup instructions for go1.16 2021-04-04 09:10:43 +01:00
Nick Gaya
1fd9b483c8 onedrive: add list_chunk option
Add --onedrive-list-chunk option similar to existing options for azureblob, drive, and s3.

Suggested as a workaround for a OneDrive pagination bug

See: https://forum.rclone.org/t/unexpected-duplicates-on-onedrive-with-0s-in-filename/23164/8
2021-04-04 09:08:16 +01:00
Ivan Andreev
93353c431b selfupdate: dont detect FUSE if build is static
Before this patch selfupdate detected ANY build with the cmount tag as a build
having libFUSE capabilities. However, only dynamic builds really have it.
The official linux builds are static and have the cmount tag as of the time
of this writing. This resulted in an inability to update the official linux binaries.
This patch fixes that. The build can be fixed independently.
2021-04-03 21:54:15 +03:00
Nick Craig-Wood
886dfd23e2 fichier: check if more than one upload link is returned #5152 2021-04-03 15:00:50 +01:00
Nick Craig-Wood
116a8021bb drive: switch to the Drives API for looking up shared drives - fixes #3139
Before this change rclone used the deprecated teamdrives API. This
change uses the new drives API (which seems to be the teamdrives API
renamed).
2021-04-03 14:21:20 +01:00
Nick Craig-Wood
9e2fbe0f1a install.sh: fix macOS arm64 download - fixes #5183 2021-03-31 21:48:31 +01:00
Nick Craig-Wood
6d65d116df Start v1.56.0-DEV development 2021-03-31 19:51:43 +01:00
Ivan Andreev
edaeb51ea9 backlog: ticket templates should recommend to update rclone
Aligns the Bug and Feature github templates with the rclone forum
and instructs the submitter to proactively update rclone.
2021-03-31 19:13:50 +01:00
Nick Craig-Wood
6e2e2d9eb2 Version v1.55.0 2021-03-31 19:12:08 +01:00
Nick Craig-Wood
20e15e52a9 vfs: fix Create causing windows explorer to truncate files on CTRL-C CTRL-V
Before this fix, doing CTRL-C and CTRL-V on a file in Windows explorer
caused the **source** and the destination to be truncated to 0.

This is because Windows opens the source file with Create with flags
`O_RDWR|O_CREATE|O_EXCL` but doesn't write to it - it only reads from
it. Rclone was taking the call to Create as a signal to always make a
new file, but this is incorrect.

This fix reads an existing file from the directory if it exists when
Create is called rather than always creating a new one. This fixes the
problem.

Fixes #5181
2021-03-31 14:48:02 +01:00
Nick Craig-Wood
d0f8b4f479 fs/cache: fix recreation of backends after they have expired
Before this change, on the first attempt to create a backend we used a
non-canonicalized string. When the backend expired the second attempt
to create it would use the canonicalized string (because it was in the
remap cache) which would fail because it was now `name{XXXX}:`

This change makes sure that whenever we create a backend we always use
the non-canonicalized string.

See: https://forum.rclone.org/t/connection-string-inconsistencies-on-beta/23171
2021-03-30 18:46:30 +01:00
Nick Craig-Wood
58d82a5c73 rc: allow fs= params to be a JSON blob 2021-03-30 17:07:27 +01:00
Nick Craig-Wood
c0c74003f2 fs/cache: add --fs-cache-expire-duration to control the fs cache
This commit makes the previously statically configured fs cache configurable.

It introduces two parameters `--fs-cache-expire-duration` and
`--fs-cache-expire-interval` to control the caching of the items.

It also adds new interfaces to lib/cache to set these.
2021-03-30 12:46:47 +01:00
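A minimal sketch of the new flags on a long-running rc server (hypothetical values):

    rclone rcd --fs-cache-expire-duration 30m --fs-cache-expire-interval 60s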
Nick Craig-Wood
60bc7a079a rc: factor rc.Error out of rcserver for re-use in librclone #4891 2021-03-30 12:46:05 +01:00
Nick Craig-Wood
20c5ca08fb test_all: fix crash when using -clean 2021-03-29 23:12:53 +01:00
Nick Craig-Wood
fc57648b75 lib/rest: fix multipart uploads stopping on context cancel
Before this change when the context was cancelled (due to
--max-duration for example) this could deadlock when uploading
multipart uploads.

This change fixes the problem by introducing another goroutine to
monitor the context and close the pipe with an error when the context
errors.
2021-03-29 19:09:47 +01:00
Nick Craig-Wood
8c5c91e68f lib/readers: add NewContextReader to error on context errors 2021-03-29 19:09:47 +01:00
Nick Craig-Wood
9dd39e8524 Add x0b to contributors 2021-03-29 19:09:47 +01:00
albertony
9c9186183d docs: add short description of configuration file format (#5142)
Fixes #572
2021-03-27 17:26:01 +01:00
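A minimal sketch of the documented format, with a hypothetical remote name and values - the config is an INI-style file with one section per remote:

    [myremote]
    type = sftp
    host = example.com
    user = alice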
Nick Craig-Wood
2ccf416e83 build: add version to android builds and fix upload 2021-03-26 09:18:54 +00:00
x0b
5577c7b760 build: replace xgo with NDK for Android builds 2021-03-26 09:18:54 +00:00
Nick Craig-Wood
f6dbb98a1d cmount: update cgofuse to the latest version to bring in macfuse 4 fix 2021-03-25 09:02:17 +00:00
Nick Craig-Wood
d042f3194f b2: fix html files downloaded via cloudflare
When reading files from B2 via cloudflare using --b2-download-url
cloudflare strips the Content-Length headers (presumably so it can
inject stuff into the body).

This caused rclone to think the file was corrupted as the length
didn't match.

The patch uses the old length read from the listing if there is no
Content-Length.

See: https://forum.rclone.org/t/b2-cloudflare-error-directory-not-found/23026
2021-03-24 17:06:59 +00:00
Nick Craig-Wood
524cd327e6 build: update notes on how to build the release manually with docker 2021-03-24 14:22:27 +00:00
Nick Craig-Wood
b8c1cf7451 union: fix initialisation broken in refactor - fixes #5139
This commit broke the initialisation of the union backend

f17d7c0012 union: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996

This patch fixes it.
2021-03-24 09:47:38 +00:00
Nick Craig-Wood
0fa68bda02 fspath: fix path parsing on Windows - fixes #5143
In this commit

8a46dd1b57 fspath: Implement a connection string parser #4996

The parsing code was re-written. This didn't quite work as before,
failing to adjust local paths on Windows when it should.

This patch fixes the problem and implements tests for it.
2021-03-24 09:47:03 +00:00
Nick Craig-Wood
1378bfee63 box: fix transfers getting stuck on token expiry after API change
Box recently changed their API, changing the case of returned API items

> On May 10th, 2021, as part of our continued infrastructure upgrade,
> Box's API response headers will standardize to return in a case
> insensitive manner, in line with industry best practices and our API
> documentation. Applications that are using these headers, such as
> "location" and "retry-after", will need to verify that their
> applications are checking for these headers in a case-insensitive
> fashion.

Rclone was reading the raw headers from the `http.Header` and not
using the `Get` accessor method which meant that it was sensitive to
case changes.

This fixes the problem by using the `Get` accessor method.

See: https://forum.rclone.org/t/box-backend-incompatible-with-box-api-changes-being-deployed/22972
2021-03-24 09:45:17 +00:00
nguyenhuuluan434
d6870473a1 swift: implement copying large objects 2021-03-24 08:56:39 +00:00
albertony
12cd322643 crypt: log hash ok on upload 2021-03-23 18:36:51 +01:00
Ivan Andreev
1406b6c3c9 install.sh: fail on download errors
This patch makes install.sh always run curl with flag "-f"
so it fails on download errors.
2021-03-23 11:29:00 +03:00
Ivan Andreev
088a83872d install.sh: fix some shellcheck warnings 2021-03-23 11:29:00 +03:00
Nick Craig-Wood
cb46092883 lib/atexit: unregister interrupt handler once it has fired so users can interrupt again 2021-03-23 08:03:00 +00:00
Nick Craig-Wood
a2cd5d8fa3 lib/atexit: fix deadlock calling Finalise while Run is running 2021-03-23 08:03:00 +00:00
Ivan Andreev
1fe2460e38 selfupdate: abort if updating would discard fuse semantics 2021-03-22 22:55:24 +03:00
Ivan Andreev
ef5c212f9b version: show build tags and type of executable
This patch modifies the output of `rclone version`.
The `os/arch` line is split into `os/type` and `os/arch`.
The `go version` line is now tagged as `go/version` for consistency.

Additionally the `go/linking` line tells whether the rclone
was linked as a static or dynamic executable.
The new `go/tags` line shows a space separated list of build tags.

The info about linking and build tags is also added to the output
of the `core/version` RC endpoint.
2021-03-22 22:55:24 +03:00
Nick Craig-Wood
268a7ff7b8 rc: add a full set of stats to core/stats
This patch adds the missing stats to the output of core/stats

- totalChecks
- totalTransfers
- totalBytes
- eta

This now includes enough information to rebuild the normal stats
output from rclone including percentage completions and ETAs.

Fixes #5116
2021-03-22 10:10:36 +00:00
Nick Craig-Wood
b47d6001a9 vfs: fix directory renaming by renaming dirs cached in memory
Before this change, if a directory was renamed and it or any children
had virtual entries in it they weren't flushed.

The consequence of this was that the directory path got out of sync with
the actual position of the directory in the tree, leading to listings
of the old directory rather than the new one.

The fix renames any directories remaining after the ForgetAll to have
the correct path which fixes the problem.

See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
a4c4ddf052 vfs: rename files in cache and cancel uploads on directory rename
Before this change rclone did not cancel uploads or rename the
cached files in the directory cache when a directory was renamed.

This caused issues with uploads arriving in the wrong place on bucket
based file systems.

See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
4cc2a7f342 mount: fix caching of old directories after renaming them
It was discovered `rclone mount` (but not `rclone cmount`) cached
directories after rename which it shouldn't have done.

This caused IO errors when trying to access files in renamed
directories on bucket based file systems.

This turned out to be the kernel caching the directories as bazil/fuse
sets their expiry time to 60s for some reason.

This fix invalidates the relevant kernel cache entries for the
directories, which fixes the problem.

Fixes: #4977
See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
c72d2c67ed vfs: add debug dump function to dump the state of the VFS cache 2021-03-22 09:06:44 +00:00
Nick Craig-Wood
9deab5a563 dropbox: raise priority of rate limited message to INFO to make it more noticeable
If you exceed rate limits, dropbox tells you to wait for 300 seconds -
this is rather a long time for the user to be waiting for rclone to
finish, so emit a NOTICE level log instead of a DEBUG.
2021-03-22 09:04:25 +00:00
buengese
da5b0cb611 zoho: add forgotten setupRegion() to NewFs
- this finally fixes regions other than eu
2021-03-21 02:15:22 +01:00
buengese
0187bc494a zoho: replace client id 2021-03-21 02:15:22 +01:00
Ivan Andreev
2bdbf00fa3 selfupdate: add instructions on reverting the update (#5141) 2021-03-18 23:11:16 +03:00
Nick Craig-Wood
9ee3ad70e9 sftp: fix SetModTime stat failed: object not found with --sftp-set-modtime=false
Some sftp servers don't allow the user to access the file after upload.

In this case the error message indicates that using
--sftp-set-modtime=false would fix the problem. However it doesn't
because SetModTime does a stat call which can't be disabled.

    Update SetModTime failed: SetModTime stat failed: object not found

After upload this patch checks for an `object not found` error if
set_modtime == false and ignores it, returning the expected size of
the object instead.

It also makes SetModTime do nothing if set_modtime = false

https://forum.rclone.org/t/sftp-update-setmodtime-failed/22873
2021-03-18 16:31:51 +00:00
albertony
ce182adf46 copyurl: add option to print resulting auto-filename (#5095)
Fixes #5094
2021-03-18 10:04:59 +01:00
albertony
97fc3b9046 rc: avoid +Inf value for speed in core/stats (#5134)
Fixes #5132
2021-03-18 10:02:30 +01:00
Nick Craig-Wood
e59acd16c6 drive: remove duplicated Context(ctx) calls
These were added by accident in

d9959b0271 drive: pass context on to drive SDK - this will help with cancellation

Which added lots of new Context() calls but duplicated some existing
ones.
2021-03-17 16:46:58 +00:00
albertony
acfd7e2403 docs: add note about limitation with pattern-list to filtering docs (#5118)
Fixes #5112
2021-03-17 14:34:46 +01:00
Nick Craig-Wood
f47893873d fs: fix failed token refresh on mounts created via the rc
Users have noticed that backends created via the rc have been failing
to refresh their tokens with this error:

    Token refresh failed try 1/5: context canceled

This is because the rc server cancels the context used to make the
backend when the request has finished. This same context is used to
refresh the token and the oauth library checks to see if the context
has been cancelled.

This patch creates a new context for the cached backends and copies
the global and filter config into the new context.

See: https://forum.rclone.org/t/google-drive-token-refresh-failed/22283
2021-03-16 16:29:22 +00:00
Nick Craig-Wood
b9a015e5b9 s3: fix --s3-profile which wasn't working - fixes #4757 2021-03-16 16:25:07 +00:00
Nick Craig-Wood
d72d9e591a ftp: retry connections and logins on 421 errors #3984
Before this we just failed if the ftp connection or login failed.

This change adds a pacer just for the ftp connect and retries if the
connection failed to Dial or the login returns a 421 error.
2021-03-16 16:17:22 +00:00
Nick Craig-Wood
df451e1e70 ftp: add --ftp-close-timeout flag for use with awkward ftp servers #3984 2021-03-16 16:17:22 +00:00
Nick Craig-Wood
d9959b0271 drive: pass context on to drive SDK - this will help with cancellation 2021-03-16 16:17:22 +00:00
Nick Craig-Wood
f2c0f82fc6 backends: Add context checking to remaining backends #4504
This is a follow up to 4013bc4a4c which missed some backends.

It adds a ctx parameter to shouldRetry and checks it.
2021-03-16 16:17:22 +00:00
albertony
f76c6cc893 docs: describe how to bypass loading of config file
Fixes #5125
2021-03-16 14:25:00 +00:00
Nick Craig-Wood
5e95877840 vfs: fix modtime set if --vfs-cache-mode writes/full and no write
When using --vfs-cache-mode writes or full, if a file was opened with
write intent, the modtime was set, and the file was closed without being
modified, the modtime would never be written back to storage.

The sequence of events

- app opens file with write intent
- app does set modtime
- rclone sets the modtime on the cache file, but not the remote file
  because it is open for write and can't be set yet
- app closes the file without changing it
- rclone doesn't upload the file because the file wasn't changed so
  the modtime doesn't get updated

This fixes the problem by making sure any unapplied modtime changes
are applied even if the file is not modified when being closed.

Fixes #4795
2021-03-16 13:36:48 +00:00
Nick Craig-Wood
8b491f7f3d vfs: fix modtimes changing by fractional seconds after upload #4763
Before this change, rclone would return the modification times of the
cache file or the pending modtime which would be more accurate than
the modtime that the backend was capable of.

This meant that the modtime would change slightly when the item was
actually uploaded.

For example modification times on Google Drive would be rounded to the
nearest millisecond.

This fixes the VFS layer to always return modtimes directly from an
object stored on the remote, or rounded to the precision that the
remote is capable of.
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
aea8776a43 vfs: fix modtimes not updating when writing via cache - fixes #4763
This reads modtime from a dirty cache item if it exists. This mirrors
the way reading the size works.

This fixes the mod time not updating when the file is written, only
when the writeback completes.

See: https://forum.rclone.org/t/rclone-mount-and-changing-timestamps-after-writes/22629
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
c387eb8c09 vfs: don't set modification time if it was already correct
Before this change, rclone would set the modification time of an
object after it had been uploaded. However with --vfs-cache-mode
writes and above, the modification time of the object is already
correct as the cache backing file gets set with the correct
modification time before upload.

Setting the modification time causes another version to be created on
backends such as S3 so it should be avoided if possible.

This change checks to see if the modification time needs changing and
only sets it if necessary.

See: https://forum.rclone.org/t/produce-2-versions-when-overwrite-an-object-in-min-io/19634
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
a12b2746b4 fs: make sure backends with additional config have a different name #4996
Backends for which additional config is detected (in the config string
or on the command line or as environment variables) will gain a suffix
`{XXXXX}` where `XXXX` is a base64 encoded md5hash of the config
string.

This fixes backend caching with config string remotes.

This much requested feature now works properly:

    rclone copy -vv drive,shared_with_me:file.txt drive:
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
3dbef2b2fd fs: make sure we don't save on the fly remote config to the config file #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f111e0eaf8 fs: enable configmap to be able to tell values set vs config file values #4996
This adds AddOverrideGetter and GetOverride methods to config map and
uses them in fs.ConfigMap.

This enables us to tell which values have been set and which are just
read from the config file or at their defaults.

This also deletes the unused AddGetters method in configmap.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
96207f342c configmap: add consistent String() method to configmap.Simple #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
e25ac4dcf0 fs: Use connection string config as highest priority config #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
28f6efe955 cmd: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f17d7c0012 union: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
3761cf68b4 chunker: refactor to use fspath.SplitFs instead of fspath.Parse #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
71554c1371 fspath: factor Split into SplitFs and Split #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
8a46dd1b57 fspath: Implement a connection string parser #4996
This is implemented as a state machine parser so it can emit sensible
error messages.

It does not use the connection strings elsewhere in rclone yet - see
subsequent commits.

An optional fuzzer is implemented for the Parse function.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
3b21857097 config: clear fs cache of stale entries when altering config - fixes #4811
Before this change if a config was altered via the rc then when a new
backend was created from that config, if there was a backend already
running from the old config in the cache then it would be used instead
of creating a new backend with the new config, thus leading to
confusion.

This change flushes the fs cache of any backends based off a config
when that config is changed over the rc.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
a10fbf16ea fs/cache: add ClearConfig method to clear all remotes based on Config #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f4750928ee lib/cache: add Delete and DeletePrefix methods #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
657be2ace5 rc: add fscache/clear and fscache/entries to control the fs cache #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
feaaca4987 fs/cache: add Entries() for finding how many entries in the fscache #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
ebd9462ea6 union: fix union attempting to update files on a read only file system
Before this change, when using an all create method with one of the
upstreams being read only, if there was an existing file on the read
only remote, it was impossible to update it.

This change detects that situation and creates the file on a
read/write upstream. This file will shadow the file on the read/only
upstream. If it is deleted the read only upstream file will be visible
again.

Fixes #4929
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
6b9e4f939d union: fix crash when using epff policy - fixes #5000
Before this fix using the epff policy could double close a channel.

The fix refactors the code to make that impossible and cancels any
running queries when the first query is found.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
687a3b1832 vfs: fix data race discovered by the race detector
This fixes a place where we read from item.o without the item.mu held.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
25d5ed763c fs: make sync and operations tests use context instead of global variables 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
5e038a5e1e lib/file: retry preallocate on EINTR
Before this change, sometimes preallocate failed with EINTR which
rclone ignored.

Retrying the syscall is the correct thing to do and seems to make
preallocate 100% reliable.
2021-03-15 19:22:07 +00:00
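The retry-on-EINTR pattern looks roughly like the Linux-only sketch below, assuming golang.org/x/sys/unix is available; it is not the actual lib/file code.

```go
package main

import (
	"os"

	"golang.org/x/sys/unix"
)

// preallocate keeps retrying fallocate while the syscall is interrupted.
func preallocate(f *os.File, size int64) error {
	for {
		err := unix.Fallocate(int(f.Fd()), 0, 0, size)
		if err != unix.EINTR {
			// Success, or a real error such as ENOSPC (disk full)
			// which should be reported rather than ignored.
			return err
		}
		// EINTR: interrupted by a signal, simply try again.
	}
}

func main() {
	f, err := os.Create("/tmp/prealloc-test")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := preallocate(f, 1<<20); err != nil {
		panic(err)
	}
}
```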
Nick Craig-Wood
4b4e531846 build: add missing BUILD_FLAGS to compile_only to speed up other_os build
Before this we were building all architectures unnecessarily in the
compile_all step for the other_os build. They are built elsewhere so
we don't need to build them here too.

This fix adds the missing BUILD_FLAGS which excludes the other builds
and should speed up the workflow.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
89e8fb4818 local: don't ignore preallocate disk full errors
See: https://forum.rclone.org/t/input-output-error-copying-to-cifs-mount-disk-space-filled/22163
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
b9bf91c510 lib/file: don't run preallocate concurrently
This seems to cause file systems to get the amount of free space
wrong.
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
40b58d59ad lib/file: make pre-allocate detect disk full errors and return them 2021-03-15 19:22:06 +00:00
Nick Craig-Wood
4fbb50422c drive: don't stop server side copy if couldn't read description
Before this change we would error out of a server side copy if we
couldn't read the description for a google doc.

This patch just logs an ERROR message and carries on.

See: https://forum.rclone.org/t/rclone-google-drive-to-google-drive-migration-for-multiple-users/19024/3
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
8d847a4e94 lib/atexit: fix occasional failure to unmount with CTRL-C #4957
Before this change CTRL-C could come in to exit rclone, which would
start the atexit actions running. The FUSE unmount then signalled rclone
to exit, but this did not wait for the already running atexit actions to
complete.

This change makes sure that if the atexit actions are started they
should be completed.
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
e3e08a48cb Add Manish Kumar to contributors 2021-03-15 19:22:06 +00:00
Manish Kumar
ff6868900d azureblob: add container public access level support. Fixes #5045 2021-03-15 17:18:47 +00:00
albertony
aab076029f local: make nounc advanced option except on windows 2021-03-15 17:10:27 +00:00
Nick Craig-Wood
294f090361 fs: make sure --low-level-retries, --checkers, --transfers are > 0 2021-03-15 17:05:35 +00:00
Nick Craig-Wood
301e1ad982 fs: fix crash when --low-level-retries=0 - fixes #5024 2021-03-15 17:05:35 +00:00
Nick Craig-Wood
3cf6ea848b config: remove log.Fatal and replace with error passing where possible 2021-03-14 16:03:35 +00:00
Nick Craig-Wood
bb0b6432ae config: --config "" or "/notfound" for in memory config only #4996
If `--config` is set to empty string or the special value `/notfound`
then rclone will keep the config file in memory only.
2021-03-14 16:03:35 +00:00
Nick Craig-Wood
46078d391f config: make config file reads reload the config file if needed #4996
Before this change the config file needed to be explicitly reloaded.
This coupled the config file implementation with the backends
needlessly.

This change stats the config file to see if it needs to be reloaded on
every config file operation.

This allows us to remove calls to

- config.SaveConfig
- config.GetFresh

This now makes the only needed interface to the config file be
that provided by configmap.Map when rclone is not being configured.

This also adds tests for configfile
2021-03-14 16:03:35 +00:00
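The stat-before-read idea can be sketched as below, with hypothetical types rather than rclone's configfile implementation.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// configFile re-parses the file only when its modification time has moved on.
type configFile struct {
	path     string
	loadedAt time.Time
}

// check stats the file before every operation and reloads it if needed.
func (c *configFile) check() error {
	fi, err := os.Stat(c.path)
	if err != nil {
		return err
	}
	if fi.ModTime().After(c.loadedAt) {
		c.loadedAt = fi.ModTime()
		fmt.Println("reloading", c.path)
		// ... parse the file contents here ...
	}
	return nil
}

func main() {
	c := &configFile{path: os.ExpandEnv("$HOME/.config/rclone/rclone.conf")}
	_ = c.check()
}
```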
Nick Craig-Wood
849bf20598 build: disable IOS builds for the time being - see #5124 2021-03-13 22:07:46 +00:00
Ivan Andreev
e91f2e342a docs: mention rclone selfupdate in quickstart (#5122) 2021-03-13 23:02:40 +03:00
Nick Craig-Wood
713f8f357d sftp: fix "file not found" errors for read once servers - fixes #5077
It introduces a new flag --sftp-disable-concurrent-reads to stop the
problematic behaviour in the SFTP library for read-once servers.

This upgrades the sftp library to v1.13.0 which has the fix.
2021-03-13 15:38:38 +00:00
Evan Harris
83368998be docs: Updated sync and dedupe command docs #4429 2021-03-13 15:01:32 +00:00
Nick Craig-Wood
4013bc4a4c Fix excessive retries missing --max-duration timeout - fixes #4504
This change checks the context whenever rclone might retry, and
doesn't retry if the current context has an error.

This fixes the pathological behaviour of `--max-duration` refusing to
exit because all the context deadline exceeded errors were being
retried.

This unfortunately meant changing the shouldRetry logic in every
backend and doing a lot of context propagation.

See: https://forum.rclone.org/t/add-flag-to-exit-immediately-when-max-duration-reached/22723
2021-03-13 09:25:44 +00:00
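The check-the-context-before-retrying pattern can be sketched as follows; this is illustrative only, the real helper lives in rclone's fserrors package.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errTransient = errors.New("transient error")

// shouldRetry refuses to retry once the call's context has been cancelled
// or its deadline (e.g. from --max-duration) has passed.
func shouldRetry(ctx context.Context, err error) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	// Otherwise retry only transient errors (placeholder check).
	return errors.Is(err, errTransient), err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	time.Sleep(2 * time.Millisecond)
	retry, err := shouldRetry(ctx, errTransient)
	fmt.Println(retry, err) // false context deadline exceeded
}
```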
Nick Craig-Wood
32925dae1f Add Lucas Messenger to contributors 2021-03-13 09:25:44 +00:00
Nick Craig-Wood
6cc70997ba Add Naveen Honest Raj to contributors 2021-03-13 09:25:44 +00:00
buengese
d260e3824e docs: cleanup optional feature table 2021-03-12 09:20:01 +00:00
Lucas Messenger
a5bd26395e hdfs: fix permissions for when directory is created 2021-03-12 09:15:14 +00:00
Ivan Andreev
6fa74340a0 cmd: rclone selfupdate (#5080)
Implements self-update command
Fixes #548
Fixes #5076
2021-03-11 22:39:30 +03:00
Saksham Khanna
4d8ef7bca7 cmd/dedupe: make largest directory primary to minimize data moved (#3648)
This change makes dedupe recursively count elements in same-named directories
and make the largest one primary. This minimizes the amount of data
moved (or at least the number of API calls) when dedupe merges them.
It also adds a new fs.Object interface `ParentIDer` with function `ParentID` and
implements it for the drive and opendrive backends. This function returns
parent directory ID for objects on filesystems that allow same-named dirs.
We use it to correctly count sizes of same-named directories.

Fixes #2568

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-03-11 20:40:29 +03:00
Nick Craig-Wood
6a9ae32012 config: split up main file more and move tests into correct packages
This splits config.go into ui.go for the user interface functions and
authorize.go for the implementation of `rclone authorize`.

It also moves the tests into the correct places (including one from
obscure which was in the wrong place).
2021-03-11 17:29:26 +00:00
Nick Craig-Wood
a7fd65bf2d config: move account initialisation out into accounting
Before this change the initialisation for the accounting package was
done in the config package for some strange historical reason.
2021-03-11 17:29:26 +00:00
Nick Craig-Wood
1fed2d910c config: make config file system pluggable
If you are using rclone as a library you can decide to use the rclone
config file system or not by calling

    configfile.LoadConfig(ctx)

If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.

Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
2021-03-11 17:29:26 +00:00
Fionera
c95b580478 config: Wrap config library in an interface 2021-03-11 17:29:26 +00:00
Nick Craig-Wood
2be310cd6e build: fix dependencies for docs build 2021-03-11 17:29:26 +00:00
Naveen Honest Raj
02a5d350f9 rcd: Added systemd notification during the 'rclone rcd' command call. This also fixes #5073.
Signed-off-by: Naveen Honest Raj <naveendurai19@gmail.com>
2021-03-11 17:12:14 +00:00
albertony
18cd2064ec mount: docs: add note about volume path syntax on windows 2021-03-11 17:09:22 +00:00
buengese
59ed70ca91 fichier: implement public link 2021-03-11 00:44:26 +01:00
Nick Craig-Wood
6df56c55b0 Changelog updates from Version v1.54.1 2021-03-08 11:06:11 +00:00
Nick Craig-Wood
94e34cb783 build: fix nfpm install by using the released binary 2021-03-07 16:42:22 +00:00
Robert Thomas
c3e2392f2b dropbox: fix polling support for scoped apps - fixes #5089 (#5092)
This fixes the polling implementation for Dropbox, particularly
when using a scoped app. This also adds a lower end check for the
timeout, as I forgot to include that in the original implementation.
2021-03-05 17:44:47 +00:00
Nick Craig-Wood
f7e3115955 s3: fix Wasabi HEAD requests returning stale data by using only 1 transport
In this commit

fc5b14b620 s3: Added `--s3-disable-http2` to disable http/2

We created our own transport so we could disable http/2. However the
added function is called twice meaning that we create two HTTP
transports. This didn't happen with the original code because the
default transport is cached by fshttp.

Rclone normally does a PUT followed by a HEAD request to check an
upload has been successful.

With the two transports, the PUT and the HEAD were being done on
different HTTP transports. This means that it wasn't re-using the same
HTTP connection, so the HEAD request showed the previous object value.
This caused rclone to declare the upload was corrupted, delete the
object and try again.

This patch makes sure we only create one transport and use it for both
PUT and HEAD requests which fixes the problem with Wasabi.

See: https://forum.rclone.org/t/each-time-rclone-is-run-1-3-fails-2-3-succeeds/22545
2021-03-05 15:34:56 +00:00
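A sketch of the single shared transport idea follows; it is illustrative only, the real setup lives in fshttp and the s3 backend.

```go
package main

import (
	"net/http"
	"sync"
)

var (
	transportOnce sync.Once
	transport     *http.Transport
)

// sharedTransport returns the same transport every time, so the PUT that
// uploads an object and the HEAD that checks it share one connection pool.
func sharedTransport() *http.Transport {
	transportOnce.Do(func() {
		transport = http.DefaultTransport.(*http.Transport).Clone()
		// Tweak transport settings here (e.g. when disabling http/2).
	})
	return transport
}

func main() {
	client := &http.Client{Transport: sharedTransport()}
	_ = client // use the same client for PUT and HEAD requests
}
```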
Nick Craig-Wood
e01e8010a0 Add Maxwell Calman to contributors 2021-03-05 15:34:56 +00:00
Ivan Andreev
75056dc9b2 ftp: update dependency jlaffaye/ftp (#5097) 2021-03-05 15:58:04 +03:00
Ivan Andreev
7aa7acd926 address stringent ineffectual assignment check in golangci-lint (#5093) 2021-03-04 14:26:48 +03:00
Nick Craig-Wood
0ad38dd6fa dropbox,ftp,onedrive,yandex: make --timeout 0 work properly
See: https://forum.rclone.org/t/an-issue-about-ftp-backend-in-2-different-systems/22551
2021-03-01 12:08:58 +00:00
Maxwell Calman
9cc8ff4dd4 chunker: partially implement no-rename transactions (#4675)
Some storage providers e.g. S3 don't have an efficient rename operation.
Before this change, when chunker finished an upload, the server-side copy
and delete operations that renamed temporary chunks to their final names
could take a significant amount of time.
This PR records a transaction identifier (versioning) in the metadata of
chunker composite objects, striving to remove the need for rename
operations on such backends.
The new approach is triggered by the new "transactions" configuration
option, which can be "rename" (the default) or "norename".
We implement the new approach for uploads (Put operations).
The chunker Move operation still uses the rename operation of
underlying backend. Filling this gap is left for a later PR.

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-02-28 10:49:17 +00:00
Nick Craig-Wood
b029fb591f s3: fix failed to create file system with folder level permissions policy
Before this change, if folder level access permissions policy was in
use, with trailing `/` marking the folders then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "s3:bucket/path/": Forbidden: Forbidden
        status code: 403, request id: XXXX, host id:

Previous to this change

53aa03cc44 s3: complete sse-c implementation

rclone would assume any errors when HEAD-ing the object implied it
didn't exist and this test would not fail.

This change reverts the functionality of the test to work as it did
before, meaning any errors on HEAD will make rclone assume the object
does not exist and the path is referring to a directory.

Fixes #4990
2021-02-24 20:35:44 +00:00
Nick Craig-Wood
95e9c4e7f1 Add georne to contributors 2021-02-24 20:35:44 +00:00
Nick Craig-Wood
c40bafb72c Add tYYGH to contributors 2021-02-24 20:35:44 +00:00
Nick Craig-Wood
eac77b06ab Add Romeo Kienzler to contributors 2021-02-24 20:35:44 +00:00
Yaroslav Halchenko
0355d6daf2 CONTRIBUTING.md: recommend to push feature branch with -u + minor tuneups 2021-02-24 20:24:59 +00:00
buengese
c4b8df6903 fichier: implement copy & move 2021-02-24 21:05:41 +01:00
Ivan Andreev
0dd3ae5e0d Add Robert Thomas to contributors 2021-02-24 19:40:54 +03:00
Robert Thomas
e5aa92c922 dropbox: add polling support - fixes #2949
This implements polling support for the Dropbox backend. The Dropbox SDK dependency had to be updated due to an auth issue, which was fixed on Jan 12 2021. A secondary internal Dropbox service was created to handle unauthorized SDK requests, as is necessary when using the ListFolderLongpoll function/endpoint. The config variable was renamed to cfg to avoid potential conflicts with the imported config package.
2021-02-24 09:33:31 +00:00
Ivan Andreev
f6265fbeff Add pvalls to contributors 2021-02-24 03:35:24 +03:00
Ivan Andreev
1397b85214 Add Georg Neugschwandtner to contributors 2021-02-24 03:28:15 +03:00
Ivan Andreev
86a0dae632 Add Rauno Ots to contributors 2021-02-24 03:27:16 +03:00
Ivan Andreev
076ff96f6b webdav: check that purged directory really exists (#2921)
Sharepoint 2016 returns status 204 to the purge request
even if the directory to purge does not really exist.
This change adds an extra check to detect this condition
and returns a proper error code.
2021-02-23 23:27:30 +00:00
Ivan Andreev
985011e73b webdav: fix sharepoint-ntlm error 401 for parallel actions (#2921)
The go-ntlmssp NTLM negotiator has to try various authentication methods.
Intermediate responses from Sharepoint have status code 401, only the
final one is different. When rclone runs a large operation in parallel
goroutines according to --checkers or --transfers, one of the threads can
receive an intermediate 401 response targeted at another one and return
the 401 authentication error to the user.
This patch fixes that.
2021-02-23 23:27:30 +00:00
Ivan Andreev
9ca6bf59c6 webdav: enforce encoding to fix errors with sharepoint-ntlm (#2921)
On-premises Sharepoint returns HTTP errors 400 or 500 in
reply to attempts to use file names with special characters
like hash, percent, tilde, invalid UTF-7 and so on.
This patch activates transparent encoding of such characters.
2021-02-23 23:27:30 +00:00
georne
e5d5ae9ab7 webdav: disable HTTP/2 for NTLM authentication (#2921)
As per Microsoft documentation, Windows authentication
(NTLM/Kerberos/Negotiate) is not supported with HTTP/2.
This patch disables transparent HTTP/2 support when the
vendor setting is "sharepoint-ntlm". Otherwise connections
to IIS/10.0 can fail with HTTP_1_1_REQUIRED.

Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
2021-02-23 23:27:30 +00:00
Ivan Andreev
ac6bb222f9 webdav: improve terminology in sharepoint-ntlm docs (#2921)
The most popular keyword for the Sharepoint in-house or company
installations is "On-Premises".
"Microsoft OneDrive account" is in fact just a Microsoft account.

Co-authored-by: Georg Neugschwandtner <georg.neugschwandtner@gmx.net>
2021-02-23 23:27:30 +00:00
Alex Chen
62d5876eb4 webdav: make sharepoint-ntlm docs more consistent (#2921)
Clarify difference between Sharepoint Online and
hosted Sharepoint with NTLM authentication.
2021-02-23 23:27:30 +00:00
Rauno Ots
9808a53416 webdav: add support for sharepoint with NTLM authentication (#2921)
Add new option option "sharepoint-ntlm" for the vendor setting.
Use it when your hosted Sharepoint is not tied to the OneDrive
accounts and uses NTLM authentication.
Also add documentation and integration test.

Fixes: #2171
2021-02-23 23:27:30 +00:00
pvalls
cc08f66dc1 docs: singular/plural duplicity for MByte{s} 2021-02-23 11:34:32 +00:00
pvalls
6b8da24eb8 docs: uppercase for MBytes
MBytes is written as Mbytes and MBytes interchangeably.
Use uppercase consistently across all docs.md
2021-02-23 11:34:32 +00:00
buengese
333faa6c68 zoho: fix custom client id's 2021-02-23 11:27:05 +00:00
Nick Craig-Wood
1b92e4636e rc: implement passing filter config with _filter parameter 2021-02-23 10:54:40 +00:00
Nick Craig-Wood
c5a299d5b1 rc: fix options/local to return the filter options 2021-02-23 10:33:03 +00:00
Nick Craig-Wood
04a8859d29 cmount: fix mount dropping on macOS by setting --daemon-timeout 10m
Previously rclone set --daemon-timeout to 15m by default. However
osxfuse seems to be ignoring that value since it is above the maximum
value of 10m. This is conjecture since the source of osxfuse is no
longer available.

Setting the value to 10m seems to resolve the problem.

See: https://forum.rclone.org/t/rclone-mount-frequently-drops-when-using-plex/22352
2021-02-21 12:56:19 +00:00
Nick Craig-Wood
4b5fe3adad delete,rmdirs: make --rmdirs obey the filters
See: https://forum.rclone.org/t/a-problem-with-rclone-delete-from-list/22143
2021-02-19 10:32:28 +00:00
edwardxml
7db68b72f1 docs: directory filter rules 2021-02-18 12:11:56 +01:00
edwardxml
9c667be2a1 docs: remove dead link from rc.md (#5038) 2021-02-18 01:37:17 +03:00
tYYGH
c0cf54067a vfs: --vfs-used-is-size to report used space using recursive scan (#4043)
Some backends, most notably S3, do not report the number of bytes used.
This patch introduces a new flag that, instead of relying on the
backend, uses a recursive scan similar to `rclone size` to compute the total
used space. However, this is inefficient and should be used as a last resort.

Co-authored-by: Yves G <theYinYeti@yalis.fr>
2021-02-17 23:36:13 +03:00
Romeo Kienzler
297ca23abd docs: fix typo in crypt.md (#5037) 2021-02-17 19:11:57 +03:00
Nick Craig-Wood
d809930e1d union: fix mkdir at root with remote:/
Before this fix, if you specified remote:/ then the union backend
would fail to notice the root directory existed.

This was fixed by stripping the trailing / from the root.

See: https://forum.rclone.org/t/upgraded-from-1-45-to-1-54-now-cant-create-new-directory-within-union-mount/22284/
2021-02-17 12:11:34 +00:00
Nick Craig-Wood
fdc0528bd5 Add Dmitry Chepurovskiy to contributors 2021-02-17 12:11:34 +00:00
Nick Craig-Wood
a0320d6e94 Add Vesnyx to contributors 2021-02-17 12:11:34 +00:00
Nick Craig-Wood
89bf036e15 Add K265 to contributors 2021-02-17 12:11:34 +00:00
Dmitry Chepurovskiy
1605f9e14d s3: Fix shared_credentials_file auth
The S3 backend shared_credentials_file option wasn't working, whether set
as a config option or as a command line option. This was because
shared_credentials_file_provider works as part of the chain provider, but
when the user hasn't specified access_token and access_key we removed
(set to nil) the credentials field, which may contain the actual
credentials obtained from the ChainProvider.

The AWS_SHARED_CREDENTIALS_FILE environment variable, as far as I
understood, worked because the aws_sdk code handles it as one of the
default auth options when there are no configured credentials.
2021-02-17 12:04:26 +00:00
albertony
cd6fd4be4b mount: docs: document the new FileSecurity option in WinFsp 2021 (#5002) 2021-02-17 03:44:28 +03:00
Vesnyx
4ea7c7aa47 crypt: add option to not encrypt data #1077 (#2981)
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-02-17 03:40:37 +03:00
Ivan Andreev
5834020316 docker.bash: work correctly with multi-ip containers (#5028)
Currently if container under test has multiple IP addresses,
the `docker_ip` function from `docker.sh` will return a gibberish.
This patch makes it return the first address found.
Additionally, I apply shellcheck on `docker.sh`.
2021-02-17 03:38:02 +03:00
Ivan Andreev
f5066a09cd build: replace go 1.16-rc1 by 1.16.x (#5036) 2021-02-17 03:37:30 +03:00
edwardxml
863bd93c30 docs: fix broken link in sftp page
Just a spare line break had crept in, breaking the link formatting.
2021-02-16 23:24:11 +01:00
edwardxml
d96af3b005 docs: convert bogus example link to code
Convert the bogus example Plex URL from a URL that is auto-created as a link to code format that hopefully isn't.
2021-02-16 23:20:49 +01:00
edwardxml
3280ceee3b docs: badly formed link
Fix for a badly formed link created in an earlier rewrite.
2021-02-16 23:16:03 +01:00
K265
930bca2478 feat: add multiple paths support to --compare-dest and --copy-dest flag 2021-02-16 18:17:04 +00:00
edwardxml
23b12c39bd Docs: Zoho WorkDrive authorisation reword
Mainly the reference to firewalls didn't make sense. Tried to make more precise. Left z in authorize.
2021-02-16 18:07:55 +00:00
Nick Craig-Wood
9d37c208b7 vfs: document simultaneous usage with the same cache shouldn't be used
Fixes #2227
2021-02-16 17:15:05 +00:00
Nick Craig-Wood
c81311722e ftp: close idle connections after --ftp-idle-timeout (1m by default)
This fixes a problem where ftp backends live on forever when using
the rc and use more and more connections.
2021-02-16 12:39:05 +00:00
Nick Craig-Wood
843ddd9136 ftp: implement Shutdown method 2021-02-16 12:39:05 +00:00
Nick Craig-Wood
a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)
This fixes a problem where sftp backends live on forever when using
the rc and use more and more connections.

Fixes #4883
2021-02-16 12:39:05 +00:00
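Closing idle pooled connections can be sketched as below, with hypothetical types rather than the actual sftp backend code.

```go
package main

import (
	"sync"
	"time"
)

type conn struct{ lastUsed time.Time }

func (c *conn) Close() {}

// idlePool drops connections that have been unused for longer than a
// timeout, so backends do not keep connections open forever.
type idlePool struct {
	mu    sync.Mutex
	conns []*conn
}

func (p *idlePool) closeIdle(timeout time.Duration) {
	p.mu.Lock()
	defer p.mu.Unlock()
	keep := p.conns[:0]
	for _, c := range p.conns {
		if time.Since(c.lastUsed) > timeout {
			c.Close()
		} else {
			keep = append(keep, c)
		}
	}
	p.conns = keep
}

func main() {
	p := &idlePool{conns: []*conn{{lastUsed: time.Now().Add(-2 * time.Minute)}}}
	p.closeIdle(time.Minute) // e.g. --sftp-idle-timeout default of 1m
}
```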
Nick Craig-Wood
a63e1f1383 Add Miron Veryanskiy to contributors 2021-02-16 12:39:05 +00:00
Nick Craig-Wood
5b84adf3b9 test: add "rclone test histogram" for file name distribution stats 2021-02-13 14:24:43 +00:00
Nick Craig-Wood
f890965020 test: add makefiles test command (converted from script) 2021-02-13 14:24:43 +00:00
Nick Craig-Wood
f88a5542cf test: move test commands under "rclone test" and make them visible 2021-02-13 14:24:43 +00:00
Miron Veryanskiy
fd94b3a473 docs: replace #file-caching with #vfs-file-caching
The documentation had dead links pointing to #file-caching. They've been
moved to point to #vfs-file-caching.
2021-02-13 12:56:25 +00:00
Nick Craig-Wood
2aebeb6061 accounting: fix --bwlimit when up or down is off - fixes #5019
Before this change the core bandwidth limit was limited to upload or
download value if the other value was off.

This fix only applies a core bandwidth limit when both values are set.
2021-02-13 12:45:12 +00:00
Nick Craig-Wood
e779cacc82 fshttp: fix bandwidth limiting after bad merge
Reapply missing bwlimiting which was inserted in

0a932dc1f2 Add --bwlimit for upload and download #1873

But accidentally removed when merging

edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services
2021-02-13 12:45:12 +00:00
Nick Craig-Wood
37e630178e dropbox: add scopes to oauth request and optionally "members.read"
This change adds the scopes rclone wants during the oauth request.
Previously rclone left these blank to get a default set.

This allows rclone to add the "members.read" scope which is necessary
for "impersonate" to work, but only when it is in use as it require
authorisation from a Team Admin.

See: https://forum.rclone.org/t/dropbox-no-members-read/22223/3
2021-02-13 12:35:24 +00:00
Nick Craig-Wood
2cdc071b85 Add Ankur Gupta to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
496e32fd8a Add cynthia kwok to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
bf3ba50a0f Add David Sze to contributors 2021-02-13 12:35:24 +00:00
Nick Craig-Wood
22c226b152 Add Alexey Tabakman to contributors 2021-02-13 12:35:23 +00:00
Klaus Post
5ca7f1fe87 encoder/filename: Wrap scsu package 2021-02-12 11:39:39 +00:00
Klaus Post
f14220ef1e encoder/filename: Add 2 more tables and tests. 2021-02-12 11:39:39 +00:00
Klaus Post
424aaac2e1 encoder/filename: Add SCSU as tables
Instead of only adding SCSU, add it as an existing table.

Allow direct SCSU and add a, perhaps, reasonable table as well.

Add byte interfaces that don't base64 encode the URL as well, with `EncodeBytes` and `DecodeBytes`.

Fuzz tested and decode tests added.
2021-02-12 11:39:39 +00:00
Ankur Gupta
47b69d6300 operations: Made copy and sync operations obey a RetryAfterError 2021-02-11 17:47:34 +00:00
cynthia kwok
c0c2505977 build: add an rclone user to the Docker image but don't use it by default
partially addresses #4831

Co-authored-by: cynful <cynful@users.noreply.github.com>
2021-02-11 17:45:44 +00:00
David Sze
2d7afe8690 local: Add flag --no-preallocate - #3207
Some virtual filesystems (such as Google Drive File Stream) may
incorrectly set the actual file size equal to the preallocated space,
causing checksum and file size checks to fail.

This flag can be used to disable preallocation for local backends of
this type.
2021-02-11 17:25:28 +00:00
Nick Craig-Wood
92187a3b33 cmount: fix unicode issues with accented characters on macOS
This adds

    -o modules=iconv,from_code=UTF-8,to_code=UTF-8-MAC

To the mount options if it isn't already present which fixes mounting
issues on macOS with accented characters in the finder.
2021-02-11 15:13:19 +00:00
Nick Craig-Wood
53aa4b87fd b2: fix failed to create file system with application key limited to a prefix
Before this change, if an application key limited to a prefix was in
use, with trailing `/` marking the folders then rclone would HEAD the
path without a trailing `/` to work out if it was a file or a folder.
This returned a permission denied error, which rclone returned to the
user.

    Failed to create file system for "b2:bucket/path/":
        failed to HEAD for download: Unknown 401  (401 unknown)

With this change, any errors on HEAD will make rclone assume the
object does not exist and that the path refers to a directory.

See: https://forum.rclone.org/t/b2-error-on-application-key-limited-to-a-prefix/22159/
2021-02-11 15:13:19 +00:00
Max Sum
edfe183ba2 fshttp: add DSCP support with --dscp for QoS with differentiated services 2021-02-10 18:29:18 +00:00
edwardxml
dfc63eb8f1 docs: update filtering docs
Typos from prior major rewrite
2021-02-10 18:21:41 +00:00
edwardxml
f21f2529a3 docs: fix nesting of brackets and backticks in ftp docs 2021-02-10 18:18:01 +00:00
edwardxml
1efb543ad8 docs: add a Windows example to the filtering docs
Add an example pinched from rclone forum

https://forum.rclone.org/t/need-help-to-understand-filtering-commands/22196

Credit to @asdffdsa
2021-02-10 18:09:48 +00:00
edwardxml
92e36fcfc5 docs: update filtering time formats
Correction per @x0b from 
https://forum.rclone.org/t/max-age-min-age-rfc3339-format-rejected/22204
2021-02-10 18:08:25 +00:00
Alexey Tabakman
bf8542c670 docs: update information about Disk-O: desktop client #4988 (#4988) 2021-02-09 21:23:45 +03:00
albertony
cc5a1e90d8 mount: improved handling of relative paths on windows 2021-02-08 20:55:23 +00:00
albertony
b39fa54ab2 mount: allow mounting to root directory on windows 2021-02-08 20:55:23 +00:00
Nick Craig-Wood
f1147fe1dd rc: sync,copy,move: document createEmptySrcDirs parameter - fixes #4489 2021-02-08 12:25:40 +00:00
Nick Craig-Wood
8897377a54 filter: Make --exclude "dir/" equivalent to --exclude "dir/**"
Rclone uses directory exclusions to cut down the listing it has to do,
so before this fix `--exclude dir/` would make sure nothing in `dir/`
was scanned, **except** if --fast-list was used, in which case only
the directory was excluded and everything within it was included.

This is rather unexpected, so this patch makes `--exclude dir/` be
equivalent to `--exclude dir/**`, meaning that excluding a directory
excludes it and its contents.

We can't do the same for --include without changing the semantics of
filtering slightly.

Fixes #3375
2021-02-07 17:29:16 +00:00
Nick Craig-Wood
f50b4e51ed build: make a macOS ARM64 build to support Apple Silicon - Fixes #4786
- add `-macos-sdk` and `-macos-arch` to adjust CGO_CFLAGS and CGO_LDFLAGS
    - select macOS SDK 11.1 and arch arm64 when building
- add -cgo-cflags and -cgo-ldflags to set CGO_CFLAGS and CGO_LDFLAGS
    - add back /usr/local to pickup fuse headers and library
- add `-env` to cross-compile
- add macOS/arm64 to download matrix
2021-02-07 14:59:53 +00:00
Nick Craig-Wood
f135acbdfb build: install macfuse 4.x instead of osxfuse 3.x
The osxfuse package has been renamed to macfuse, but brew is still picking up
the old version under the old name.

This corrects the name to macfuse which brings in v4.x which should
support Apple Silicon.
2021-02-07 14:59:53 +00:00
Nick Craig-Wood
cdd99a6f39 fs/accounting: fix occasionally failing test on macOS 2021-02-07 14:59:53 +00:00
Nick Craig-Wood
6ecb5794bc rc: add _config parameter to set global config for just this rc call 2021-02-07 14:56:41 +00:00
Nick Craig-Wood
9a21aff4ed rc: add options/local to see the options configured in the context 2021-02-07 14:56:41 +00:00
Nick Craig-Wood
8574a7bd67 rc: factor async/sync job handing into rc/jobs from rc/rcserver
This fixes async jobs with `rclone rc --loopback` which isn't very
important but sets the stage for _config setting.
2021-02-07 14:56:41 +00:00
Nick Craig-Wood
a0fc10e41a rc: factor out duplicate code in job creation 2021-02-07 14:56:41 +00:00
Nick Craig-Wood
ae3963e4b4 fs: Add string alternatives for setting options over the rc
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.

This change enables these types to be set with a string with the
syntax as used in the command line instead, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
2021-02-07 14:56:41 +00:00
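Accepting either form for a duration option can be sketched like this; it is illustrative only, not rclone's option machinery.

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// setDuration accepts the native integer form (nanoseconds) as well as
// the human-friendly string form such as "10s".
func setDuration(v string) (time.Duration, error) {
	if n, err := strconv.ParseInt(v, 10, 64); err == nil {
		return time.Duration(n), nil
	}
	return time.ParseDuration(v)
}

func main() {
	d1, _ := setDuration("10000000000")
	d2, _ := setDuration("10s")
	fmt.Println(d1 == d2) // true
}
```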
393 changed files with 29340 additions and 8557 deletions

View File

@@ -5,19 +5,31 @@ about: Report a problem with rclone
<!-- <!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that! We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum: **STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put in to solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!
If you think you might have found a bug, try to replicate it with the latest beta (or stable).
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
If you can still replicate it or just got a question then please use the rclone forum:
https://forum.rclone.org/ https://forum.rclone.org/
instead of filing an issue for a quick response. for a quick response instead of filing an issue on this repo.
If you think you might have found a bug, please can you try to replicate it with the latest beta? If nothing else helps, then please fill in the info below which helps us help you.
https://beta.rclone.org/ **DO NOT REDACT** any information except passwords/keys/personal info.
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-) You should use 3 backticks to begin and end your paste to make it readable.
Make sure to include a log obtained with '-vv'.
You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
Thank you Thank you
@@ -25,6 +37,10 @@ The Rclone Developers
--> -->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is the problem you are having with rclone? #### What is the problem you are having with rclone?
@@ -37,7 +53,7 @@ The Rclone Developers
#### Which cloud storage system are you using? (e.g. Google Drive) #### Which cloud storage system are you using? (e.g. Google Drive)
@@ -48,3 +64,11 @@ The Rclone Developers
#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`) #### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
<!--- Please keep the note below for others who read your bug report. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.

View File

@@ -7,12 +7,16 @@ about: Suggest a new feature or enhancement for rclone
Welcome :-) Welcome :-)
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already. So you've got an idea to improve rclone? We love that!
You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do: Probably the latest beta (or stable) release has your feature, so try to update your rclone.
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible. If it still isn't there, here is a checklist of things to do:
2. Discuss on the forum first: https://forum.rclone.org/
1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!). 3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-) 4. Be prepared to get involved making the feature :-)
@@ -22,6 +26,9 @@ The Rclone Developers
--> -->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is your current rclone version (output from `rclone version`)? #### What is your current rclone version (output from `rclone version`)?
@@ -34,3 +41,11 @@ The Rclone Developers
#### How do you think rclone should be changed to solve that? #### How do you think rclone should be changed to solve that?
<!--- Please keep the note below for others who read your feature request. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.

View File

@@ -12,38 +12,52 @@ on:
tags: tags:
- '*' - '*'
pull_request: pull_request:
workflow_dispatch:
inputs:
manual:
required: true
default: true
jobs: jobs:
build: build:
if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
timeout-minutes: 60 timeout-minutes: 60
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'go1.13', 'go1.14', 'go1.15'] job_name: ['linux', 'mac_amd64', 'mac_arm64', 'windows_amd64', 'windows_386', 'other_os', 'go1.13', 'go1.14', 'go1.15']
include: include:
- job_name: linux - job_name: linux
os: ubuntu-latest os: ubuntu-latest
go: '1.16.0-rc1' go: '1.16.x'
gotags: cmount gotags: cmount
build_flags: '-include "^linux/"' build_flags: '-include "^linux/"'
check: true check: true
quicktest: true quicktest: true
racequicktest: true racequicktest: true
librclonetest: true
deploy: true deploy: true
- job_name: mac - job_name: mac_amd64
os: macOS-latest os: macOS-latest
go: '1.16.0-rc1' go: '1.16.x'
gotags: 'cmount' gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo' build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true quicktest: true
racequicktest: true racequicktest: true
deploy: true deploy: true
- job_name: mac_arm64
os: macOS-latest
go: '1.16.x'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -macos-sdk macosx11.1 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows_amd64 - job_name: windows_amd64
os: windows-latest os: windows-latest
go: '1.16.0-rc1' go: '1.16.x'
gotags: cmount gotags: cmount
build_flags: '-include "^windows/amd64" -cgo' build_flags: '-include "^windows/amd64" -cgo'
build_args: '-buildmode exe' build_args: '-buildmode exe'
@@ -53,7 +67,7 @@ jobs:
- job_name: windows_386 - job_name: windows_386
os: windows-latest os: windows-latest
go: '1.16.0-rc1' go: '1.16.x'
gotags: cmount gotags: cmount
goarch: '386' goarch: '386'
cgo: '1' cgo: '1'
@@ -64,8 +78,8 @@ jobs:
- job_name: other_os - job_name: other_os
os: ubuntu-latest os: ubuntu-latest
go: '1.16.0-rc1' go: '1.16.x'
build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"' build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true compile_all: true
deploy: true deploy: true
@@ -124,7 +138,7 @@ jobs:
shell: bash shell: bash
run: | run: |
brew update brew update
brew install --cask osxfuse brew install --cask macfuse
if: matrix.os == 'macOS-latest' if: matrix.os == 'macOS-latest'
- name: Install Libraries on Windows - name: Install Libraries on Windows
@@ -180,6 +194,14 @@ jobs:
make racequicktest make racequicktest
if: matrix.racequicktest if: matrix.racequicktest
- name: Run librclone tests
shell: bash
run: |
make -C librclone/ctest test
make -C librclone/ctest clean
librclone/python/test_rclone.py
if: matrix.librclonetest
- name: Code quality test - name: Code quality test
shell: bash shell: bash
run: | run: |
@@ -206,50 +228,110 @@ jobs:
# Deploy binaries if enabled in config && not a PR && not a fork # Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone' if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
xgo: android:
timeout-minutes: 60 if: ${{ github.repository == 'rclone/rclone' || github.event.inputs.manual }}
name: "xgo cross compile" timeout-minutes: 30
runs-on: ubuntu-latest name: "android-all"
runs-on: ubuntu-latest
steps: steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Checkout # Upgrade together with NDK version
uses: actions/checkout@v1 - name: Set up Go 1.14
with: uses: actions/setup-go@v1
# Checkout into a fixed path to avoid import path problems on go < 1.11 with:
path: ./src/github.com/rclone/rclone go-version: 1.14
- name: Set environment variables # Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
shell: bash - name: Force NDK version
run: | run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;21.4.7075529" | grep -v = || true
echo 'GOPATH=${{ runner.workspace }}' >> $GITHUB_ENV
echo '${{ runner.workspace }}/bin' >> $GITHUB_PATH
- name: Cross-compile rclone - name: Go module cache
run: | uses: actions/cache@v2
docker pull billziss/xgo-cgofuse with:
GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod path: ~/go/pkg/mod
# xgo \ key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
# -image=billziss/xgo-cgofuse \ restore-keys: |
# -targets=darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \ ${{ runner.os }}-go-
# -tags cmount \
# -dest build \
# .
xgo \
-image=billziss/xgo-cgofuse \
-targets=android/*,ios/* \
-dest build \
.
- name: Build rclone - name: Set global environment variables
shell: bash shell: bash
run: | run: |
make echo "VERSION=$(make version)" >> $GITHUB_ENV
- name: Upload artifacts - name: build native rclone
run: | run: |
make ci_upload make
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }} - name: install gomobile
# Upload artifacts if not a PR && not a fork run: |
if: github.head_ref == '' && github.repository == 'rclone/rclone' go get golang.org/x/mobile/cmd/gobind
go get golang.org/x/mobile/cmd/gomobile
env PATH=$PATH:~/go/bin gomobile init
- name: arm-v7a gomobile build
run: env PATH=$PATH:~/go/bin gomobile bind -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
echo 'GOARM=7' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm-v7a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm64-v8a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x86 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x64 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-x64 .
- name: Upload artifacts
run: |
make ci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'

View File

@@ -7,6 +7,7 @@ on:
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:

View File

@@ -6,6 +6,7 @@ on:
jobs:
build:
if: github.repository == 'rclone/rclone'
runs-on: ubuntu-latest
name: Build image job
steps:

3
.gitignore vendored
View File

@@ -1,6 +1,7 @@
*~
_junk/
rclone
rclone.exe
build
docs/public
rclone.iml
@@ -10,3 +11,5 @@ rclone.iml
*.log
*.iml
fuzz-build.zip
*.orig
*.rej

View File

@@ -33,10 +33,11 @@ page](https://github.com/rclone/rclone).
Now in your terminal Now in your terminal
go get -u github.com/rclone/rclone git clone https://github.com/rclone/rclone.git
cd $GOPATH/src/github.com/rclone/rclone cd rclone
git remote rename origin upstream git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git git remote add origin git@github.com:YOURUSER/rclone.git
go build
Make a branch to add your new feature Make a branch to add your new feature
@@ -72,7 +73,7 @@ Make sure you
When you are done with that When you are done with that
git push origin my-new-feature git push -u origin my-new-feature
Go to the GitHub website and click [Create pull Go to the GitHub website and click [Create pull
request](https://help.github.com/articles/creating-a-pull-request/). request](https://help.github.com/articles/creating-a-pull-request/).
@@ -115,8 +116,8 @@ are skipped if `TestDrive:` isn't defined.
cd backend/drive cd backend/drive
go test -v go test -v
You can then run the integration tests which tests all of rclone's You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local filing system, operations. Normally these get run against the local file system,
but they can be run against any of the remotes. but they can be run against any of the remotes.
cd fs/sync cd fs/sync
@@ -127,7 +128,7 @@ but they can be run against any of the remotes.
go test -v -remote TestDrive: go test -v -remote TestDrive:
If you want to use the integration test framework to run these tests If you want to use the integration test framework to run these tests
all together with an HTML report and test retries then from the altogether with an HTML report and test retries then from the
project root: project root:
go install github.com/rclone/rclone/fstest/test_all go install github.com/rclone/rclone/fstest/test_all
@@ -202,7 +203,7 @@ for the flag help, the remainder is shown to the user in `rclone
config` and is added to the docs with `make backenddocs`. config` and is added to the docs with `make backenddocs`.
The only documentation you need to edit are the `docs/content/*.md` The only documentation you need to edit are the `docs/content/*.md`
files. The MANUAL.*, rclone.1, web site, etc. are all auto generated files. The `MANUAL.*`, `rclone.1`, web site, etc. are all auto generated
from those during the release process. See the `make doc` and `make from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature. don't need to run these when adding a feature.
@@ -265,7 +266,7 @@ rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more) modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
support in go1.11 and later to manage its dependencies. support in go1.11 and later to manage its dependencies.
rclone can be built with modules outside of the GOPATH rclone can be built with modules outside of the `GOPATH`.
To add a dependency `github.com/ncw/new_dependency` see the To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to instructions below. These will fetch the dependency and add it to
@@ -333,8 +334,8 @@ Getting going
* Try to implement as many optional methods as possible as it makes the remote more usable. * Try to implement as many optional methods as possible as it makes the remote more usable.
* Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed * Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info` * `rclone purge -v TestRemote:rclone-info`
* `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info` * `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json` * `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine * open `remote.csv` in a spreadsheet and examine
Unit tests Unit tests

View File

@@ -16,6 +16,8 @@ RUN apk --no-cache add ca-certificates fuse tzdata && \
COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/ COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
RUN addgroup -g 1009 rclone && adduser -u 1009 -Ds /bin/sh -G rclone rclone
ENTRYPOINT [ "rclone" ] ENTRYPOINT [ "rclone" ]
WORKDIR /data WORKDIR /data

1291
MANUAL.html generated

File diff suppressed because it is too large

1787
MANUAL.md generated

File diff suppressed because it is too large

1867
MANUAL.txt generated

File diff suppressed because it is too large

View File

@@ -93,7 +93,7 @@ build_dep:
# Get the release dependencies we only install on linux # Get the release dependencies we only install on linux
release_dep_linux: release_dep_linux:
cd /tmp && go get github.com/goreleaser/nfpm/v2/... go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'
# Get the release dependencies we only install on Windows # Get the release dependencies we only install on Windows
release_dep_windows: release_dep_windows:
@@ -119,7 +119,7 @@ doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md rclone.1: MANUAL.md
pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1 pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs rcdocs
./bin/make_manual.py ./bin/make_manual.py
MANUAL.html: MANUAL.md MANUAL.html: MANUAL.md
@@ -187,10 +187,10 @@ upload_github:
./bin/upload-github $(TAG) ./bin/upload-github $(TAG)
cross: doc cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go -release current $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
beta: beta:
go run bin/cross-compile.go $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG) rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/ @echo Beta release ready at https://pub.rclone.org/$(TAG)/
@@ -198,7 +198,7 @@ log_since_last_release:
git log $(LAST_TAG).. git log $(LAST_TAG)..
compile_all: compile_all:
go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go -compile-only $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
ci_upload: ci_upload:
sudo chown -R $$USER build sudo chown -R $$USER build

View File

@@ -62,6 +62,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* SFTP [:page_facing_up:](https://rclone.org/sftp/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
@@ -87,7 +88,6 @@ Please see [the full list of all storage providers and their features](https://r
* Optional large file chunking ([Chunker](https://rclone.org/chunker/)) * Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/)) * Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/)) * Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/)) * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk * Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna * Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna

View File

@@ -76,6 +76,24 @@ Now
The rclone docker image should autobuild on via GitHub actions. If it doesn't The rclone docker image should autobuild on via GitHub actions. If it doesn't
or needs to be updated then rebuild like this. or needs to be updated then rebuild like this.
See: https://github.com/ilteoood/docker_buildx/issues/19
See: https://github.com/ilteoood/docker_buildx/blob/master/scripts/install_buildx.sh
```
git co v1.54.1
docker pull golang
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name actions_builder --use
docker run --rm --privileged docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
SUPPORTED_PLATFORMS=$(docker buildx inspect --bootstrap | grep 'Platforms:*.*' | cut -d : -f2,3)
echo "Supported platforms: $SUPPORTED_PLATFORMS"
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx stop actions_builder
```
### Old build for linux/amd64 only
``` ```
docker pull golang docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest . docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .

View File

@@ -1 +1 @@
v1.55.0 v1.56.0

View File

@@ -11,6 +11,7 @@ import (
_ "github.com/rclone/rclone/backend/local" // pull in test backend _ "github.com/rclone/rclone/backend/local" // pull in test backend
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -19,7 +20,7 @@ var (
) )
func prepare(t *testing.T, root string) { func prepare(t *testing.T, root string) {
config.LoadConfig(context.Background()) configfile.Install()
// Configure the remote // Configure the remote
config.FileSet(remoteName, "type", "alias") config.FileSet(remoteName, "type", "alias")

View File

@@ -41,6 +41,7 @@ import (
_ "github.com/rclone/rclone/backend/swift" _ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade" _ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union" _ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav" _ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex" _ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zoho" _ "github.com/rclone/rclone/backend/zoho"

View File

@@ -16,7 +16,6 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
"log"
"net/http" "net/http"
"path" "path"
"strings" "strings"
@@ -70,11 +69,10 @@ func init() {
Prefix: "acd", Prefix: "acd",
Description: "Amazon Drive", Description: "Amazon Drive",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
err := oauthutil.Config(ctx, "amazon cloud drive", name, m, acdConfig, nil) return oauthutil.ConfigOut("", &oauthutil.Options{
if err != nil { OAuth2Config: acdConfig,
log.Fatalf("Failed to configure token: %v", err) })
}
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "checkpoint", Name: "checkpoint",
@@ -83,16 +81,16 @@ func init() {
 			Advanced: true,
 		}, {
 			Name: "upload_wait_per_gb",
-			Help: `Additional time per GB to wait after a failed complete upload to see if it appears.
+			Help: `Additional time per GiB to wait after a failed complete upload to see if it appears.
 Sometimes Amazon Drive gives an error when a file has been fully
 uploaded but the file appears anyway after a little while. This
-happens sometimes for files over 1GB in size and nearly every time for
-files bigger than 10GB. This parameter controls the time rclone waits
+happens sometimes for files over 1 GiB in size and nearly every time for
+files bigger than 10 GiB. This parameter controls the time rclone waits
 for the file to appear.
-The default value for this parameter is 3 minutes per GB, so by
-default it will wait 3 minutes for every GB uploaded to see if the
+The default value for this parameter is 3 minutes per GiB, so by
+default it will wait 3 minutes for every GiB uploaded to see if the
 file appears.
 You can disable this feature by setting it to 0. This may cause
@@ -112,7 +110,7 @@ in this situation.`,
 Files this size or more will be downloaded via their "tempLink". This
 is to work around a problem with Amazon Drive which blocks downloads
-of files bigger than about 10GB. The default for this is 9GB which
+of files bigger than about 10 GiB. The default for this is 9 GiB which
 shouldn't need to be changed.
 To download files above this threshold, rclone requests a "tempLink"
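
The repeated GB → GiB wording changes in these help strings are more than cosmetic: rclone's SizeSuffix options are parsed as binary multiples, so the documented numbers only line up if the binary unit is named. A quick stand-alone illustration of the gap between the decimal and binary units (editorial example, not part of the diff):

```go
package main

import "fmt"

func main() {
	const (
		GB  = 1000 * 1000 * 1000 // decimal gigabyte, 10^9 bytes
		GiB = 1 << 30            // binary gibibyte, 2^30 bytes
	)
	fmt.Printf("1 GB  = %d bytes\n", GB)  // 1000000000
	fmt.Printf("1 GiB = %d bytes\n", GiB) // 1073741824
	fmt.Printf("difference = %.1f%%\n", 100*float64(GiB-GB)/float64(GB)) // 7.4%
}
```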
@@ -205,7 +203,10 @@ var retryErrorCodes = []int{
 // shouldRetry returns a boolean as to whether this resp and err
 // deserve to be retried. It returns the err as a convenience
-func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
+func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+	if fserrors.ContextError(ctx, &err) {
+		return false, err
+	}
 	if resp != nil {
 		if resp.StatusCode == 401 {
 			f.tokenRenewer.Invalidate()
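
Most of the remaining hunks in this file, and in the other backends below, are the mechanical fallout of this signature change: every pacer callback now passes its context into shouldRetry, so a cancelled or timed-out context stops the retry loop instead of being treated as a transient server failure. A stand-alone sketch of the idea, deliberately not using rclone's internals (the real code goes through fserrors.ContextError and the pacer shown above):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// Status codes treated as transient, mirroring the backends' retryErrorCodes.
var retryErrorCodes = []int{401, 408, 429, 500, 502, 503, 504}

// shouldRetry checks the context first, then the response: once the context
// is done there is no point retrying, whatever the server said.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	if resp != nil {
		for _, code := range retryErrorCodes {
			if resp.StatusCode == code {
				return true, err
			}
		}
	}
	return false, err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	time.Sleep(20 * time.Millisecond) // let the deadline pass

	retry, err := shouldRetry(ctx, &http.Response{StatusCode: 503}, errors.New("service unavailable"))
	fmt.Println(retry, err) // false context deadline exceeded: the 503 is not retried
}
```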
@@ -280,7 +281,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Renew the token in the background // Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.getRootInfo() _, err := f.getRootInfo(ctx)
return err return err
}) })
@@ -288,14 +289,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, resp, err = f.c.Account.GetEndpoints() _, resp, err = f.c.Account.GetEndpoints()
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get endpoints") return nil, errors.Wrap(err, "failed to get endpoints")
} }
// Get rootID // Get rootID
rootInfo, err := f.getRootInfo() rootInfo, err := f.getRootInfo(ctx)
if err != nil || rootInfo.Id == nil { if err != nil || rootInfo.Id == nil {
return nil, errors.Wrap(err, "failed to get root") return nil, errors.Wrap(err, "failed to get root")
} }
@@ -337,11 +338,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
// getRootInfo gets the root folder info // getRootInfo gets the root folder info
func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) { func (f *Fs) getRootInfo(ctx context.Context) (rootInfo *acd.Folder, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot() rootInfo, resp, err = f.c.Nodes.GetRoot()
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
return rootInfo, err return rootInfo, err
} }
@@ -380,7 +381,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
var subFolder *acd.Folder var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf)) subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if err == acd.ErrorNodeNotFound { if err == acd.ErrorNodeNotFound {
@@ -407,7 +408,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var info *acd.Folder var info *acd.Folder
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf)) info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -428,7 +429,7 @@ type listAllFn func(*acd.Node) bool
// Lists the directory required calling the user function on each item found // Lists the directory required calling the user function on each item found
// //
// If the user fn ever returns true then it early exits with found = true // If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { func (f *Fs) listAll(ctx context.Context, dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
query := "parents:" + dirID query := "parents:" + dirID
if directoriesOnly { if directoriesOnly {
query += " AND kind:" + folderKind query += " AND kind:" + folderKind
@@ -449,7 +450,7 @@ func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly
var resp *http.Response var resp *http.Response
err = f.pacer.CallNoRetry(func() (bool, error) { err = f.pacer.CallNoRetry(func() (bool, error) {
nodes, resp, err = f.c.Nodes.GetNodes(&opts) nodes, resp, err = f.c.Nodes.GetNodes(&opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return false, err return false, err
@@ -508,7 +509,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var iErr error var iErr error
for tries := 1; tries <= maxTries; tries++ { for tries := 1; tries <= maxTries; tries++ {
entries = nil entries = nil
_, err = f.listAll(directoryID, "", false, false, func(node *acd.Node) bool { _, err = f.listAll(ctx, directoryID, "", false, false, func(node *acd.Node) bool {
remote := path.Join(dir, *node.Name) remote := path.Join(dir, *node.Name)
switch *node.Kind { switch *node.Kind {
case folderKind: case folderKind:
@@ -667,7 +668,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
if ok { if ok {
return false, nil return false, nil
} }
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -708,7 +709,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil { if err != nil {
return nil, err return nil, err
} }
err = f.moveNode(srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false) err = f.moveNode(ctx, srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -803,7 +804,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var jsonStr string var jsonStr string
err = srcFs.pacer.Call(func() (bool, error) { err = srcFs.pacer.Call(func() (bool, error) {
jsonStr, err = srcInfo.GetMetadata() jsonStr, err = srcInfo.GetMetadata()
return srcFs.shouldRetry(nil, err) return srcFs.shouldRetry(ctx, nil, err)
}) })
if err != nil { if err != nil {
fs.Debugf(src, "DirMove error: error reading src metadata: %v", err) fs.Debugf(src, "DirMove error: error reading src metadata: %v", err)
@@ -815,7 +816,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return err return err
} }
err = f.moveNode(srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true) err = f.moveNode(ctx, srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true)
if err != nil { if err != nil {
return err return err
} }
@@ -840,7 +841,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if check { if check {
// check directory is empty // check directory is empty
empty := true empty := true
_, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool { _, err = f.listAll(ctx, rootID, "", false, false, func(node *acd.Node) bool {
switch *node.Kind { switch *node.Kind {
case folderKind: case folderKind:
empty = false empty = false
@@ -865,7 +866,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = node.Trash() resp, err = node.Trash()
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -987,7 +988,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var info *acd.File var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf)) info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf))
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if err == acd.ErrorNodeNotFound { if err == acd.ErrorNodeNotFound {
@@ -1044,7 +1045,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} else { } else {
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers) in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
} }
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
return in, err return in, err
} }
@@ -1067,7 +1068,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if ok { if ok {
return false, nil return false, nil
} }
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1077,70 +1078,70 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
// Remove a node // Remove a node
func (f *Fs) removeNode(info *acd.Node) error { func (f *Fs) removeNode(ctx context.Context, info *acd.Node) error {
var resp *http.Response var resp *http.Response
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = info.Trash() resp, err = info.Trash()
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
return err return err
} }
// Remove an object // Remove an object
func (o *Object) Remove(ctx context.Context) error { func (o *Object) Remove(ctx context.Context) error {
return o.fs.removeNode(o.info) return o.fs.removeNode(ctx, o.info)
} }
// Restore a node // Restore a node
func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) { func (f *Fs) restoreNode(ctx context.Context, info *acd.Node) (newInfo *acd.Node, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Restore() newInfo, resp, err = info.Restore()
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
return newInfo, err return newInfo, err
} }
// Changes name of given node // Changes name of given node
func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) { func (f *Fs) renameNode(ctx context.Context, info *acd.Node, newName string) (newInfo *acd.Node, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName)) newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName))
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
return newInfo, err return newInfo, err
} }
// Replaces one parent with another, effectively moving the file. Leaves other // Replaces one parent with another, effectively moving the file. Leaves other
// parents untouched. ReplaceParent cannot be used when the file is trashed. // parents untouched. ReplaceParent cannot be used when the file is trashed.
func (f *Fs) replaceParent(info *acd.Node, oldParentID string, newParentID string) error { func (f *Fs) replaceParent(ctx context.Context, info *acd.Node, oldParentID string, newParentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.ReplaceParent(oldParentID, newParentID) resp, err := info.ReplaceParent(oldParentID, newParentID)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
} }
// Adds one additional parent to object. // Adds one additional parent to object.
func (f *Fs) addParent(info *acd.Node, newParentID string) error { func (f *Fs) addParent(ctx context.Context, info *acd.Node, newParentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.AddParent(newParentID) resp, err := info.AddParent(newParentID)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
} }
// Remove given parent from object, leaving the other possible // Remove given parent from object, leaving the other possible
// parents untouched. Object can end up having no parents. // parents untouched. Object can end up having no parents.
func (f *Fs) removeParent(info *acd.Node, parentID string) error { func (f *Fs) removeParent(ctx context.Context, info *acd.Node, parentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.RemoveParent(parentID) resp, err := info.RemoveParent(parentID)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
} }
// moveNode moves the node given from the srcLeaf,srcDirectoryID to // moveNode moves the node given from the srcLeaf,srcDirectoryID to
// the dstLeaf,dstDirectoryID // the dstLeaf,dstDirectoryID
func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) { func (f *Fs) moveNode(ctx context.Context, name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) {
// fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID) // fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID)
cantMove := fs.ErrorCantMove cantMove := fs.ErrorCantMove
if useDirErrorMsgs { if useDirErrorMsgs {
@@ -1154,7 +1155,7 @@ func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, s
if srcLeaf != dstLeaf { if srcLeaf != dstLeaf {
// fs.Debugf(name, "renaming") // fs.Debugf(name, "renaming")
_, err = f.renameNode(srcInfo, dstLeaf) _, err = f.renameNode(ctx, srcInfo, dstLeaf)
if err != nil { if err != nil {
fs.Debugf(name, "Move: quick path rename failed: %v", err) fs.Debugf(name, "Move: quick path rename failed: %v", err)
goto OnConflict goto OnConflict
@@ -1162,7 +1163,7 @@ func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, s
} }
if srcDirectoryID != dstDirectoryID { if srcDirectoryID != dstDirectoryID {
// fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID) // fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID)
err = f.replaceParent(srcInfo, srcDirectoryID, dstDirectoryID) err = f.replaceParent(ctx, srcInfo, srcDirectoryID, dstDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: quick path parent replace failed: %v", err) fs.Debugf(name, "Move: quick path parent replace failed: %v", err)
return err return err
@@ -1175,13 +1176,13 @@ OnConflict:
fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. If any of the subsequent calls fails, the rename/move will be in an invalid state.") fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. If any of the subsequent calls fails, the rename/move will be in an invalid state.")
// fs.Debugf(name, "Trashing file") // fs.Debugf(name, "Trashing file")
err = f.removeNode(srcInfo) err = f.removeNode(ctx, srcInfo)
if err != nil { if err != nil {
fs.Debugf(name, "Move: remove node failed: %v", err) fs.Debugf(name, "Move: remove node failed: %v", err)
return err return err
} }
// fs.Debugf(name, "Renaming file") // fs.Debugf(name, "Renaming file")
_, err = f.renameNode(srcInfo, dstLeaf) _, err = f.renameNode(ctx, srcInfo, dstLeaf)
if err != nil { if err != nil {
fs.Debugf(name, "Move: rename node failed: %v", err) fs.Debugf(name, "Move: rename node failed: %v", err)
return err return err
@@ -1189,19 +1190,19 @@ OnConflict:
// note: replacing parent is forbidden by API, modifying them individually is // note: replacing parent is forbidden by API, modifying them individually is
// okay though // okay though
// fs.Debugf(name, "Adding target parent") // fs.Debugf(name, "Adding target parent")
err = f.addParent(srcInfo, dstDirectoryID) err = f.addParent(ctx, srcInfo, dstDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: addParent failed: %v", err) fs.Debugf(name, "Move: addParent failed: %v", err)
return err return err
} }
// fs.Debugf(name, "removing original parent") // fs.Debugf(name, "removing original parent")
err = f.removeParent(srcInfo, srcDirectoryID) err = f.removeParent(ctx, srcInfo, srcDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: removeParent failed: %v", err) fs.Debugf(name, "Move: removeParent failed: %v", err)
return err return err
} }
// fs.Debugf(name, "Restoring") // fs.Debugf(name, "Restoring")
_, err = f.restoreNode(srcInfo) _, err = f.restoreNode(ctx, srcInfo)
if err != nil { if err != nil {
fs.Debugf(name, "Move: restoreNode node failed: %v", err) fs.Debugf(name, "Move: restoreNode node failed: %v", err)
return err return err

View File

@@ -47,8 +47,8 @@ const (
 	timeFormatIn          = time.RFC3339
 	timeFormatOut         = "2006-01-02T15:04:05.000000000Z07:00"
 	storageDefaultBaseURL = "blob.core.windows.net"
-	defaultChunkSize      = 4 * fs.MebiByte
-	maxChunkSize          = 100 * fs.MebiByte
+	defaultChunkSize      = 4 * fs.Mebi
+	maxChunkSize          = 100 * fs.Mebi
 	uploadConcurrency     = 4
 	defaultAccessTier     = azblob.AccessTierNone
 	maxTryTimeout         = time.Hour * 24 * 365 //max time of an azure web request response window (whether or not data is flowing)
@@ -129,11 +129,11 @@ msi_client_id, or msi_mi_res_id parameters.`,
Advanced: true, Advanced: true,
}, { }, {
Name: "upload_cutoff", Name: "upload_cutoff",
Help: "Cutoff for switching to chunked upload (<= 256MB). (Deprecated)", Help: "Cutoff for switching to chunked upload (<= 256 MiB). (Deprecated)",
Advanced: true, Advanced: true,
}, { }, {
Name: "chunk_size", Name: "chunk_size",
Help: `Upload chunk size (<= 100MB). Help: `Upload chunk size (<= 100 MiB).
Note that this is stored in memory and there may be up to Note that this is stored in memory and there may be up to
"--transfers" chunks stored at once in memory.`, "--transfers" chunks stored at once in memory.`,
@@ -217,6 +217,23 @@ This option controls how often unused buffers will be removed from the pool.`,
encoder.EncodeDel | encoder.EncodeDel |
encoder.EncodeBackSlash | encoder.EncodeBackSlash |
encoder.EncodeRightPeriod), encoder.EncodeRightPeriod),
}, {
Name: "public_access",
Help: "Public access level of a container: blob, container.",
Default: string(azblob.PublicAccessNone),
Examples: []fs.OptionExample{
{
Value: string(azblob.PublicAccessNone),
Help: "The container and its blobs can be accessed only with an authorized request. It's a default value",
}, {
Value: string(azblob.PublicAccessBlob),
Help: "Blob data within this container can be read via anonymous request.",
}, {
Value: string(azblob.PublicAccessContainer),
Help: "Allow full public read access for container and blob data.",
},
},
Advanced: true,
}}, }},
}) })
} }
@@ -241,6 +258,7 @@ type Options struct {
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"` MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"` MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
PublicAccess string `config:"public_access"`
} }
// Fs represents a remote azure server // Fs represents a remote azure server
@@ -262,6 +280,7 @@ type Fs struct {
imdsPacer *fs.Pacer // Same but for IMDS imdsPacer *fs.Pacer // Same but for IMDS
uploadToken *pacer.TokenDispenser // control concurrency uploadToken *pacer.TokenDispenser // control concurrency
pool *pool.Pool // memory pool pool *pool.Pool // memory pool
publicAccess azblob.PublicAccessType // Container Public Access Level
} }
// Object describes an azure object // Object describes an azure object
@@ -335,6 +354,19 @@ func validateAccessTier(tier string) bool {
} }
} }
// validatePublicAccess checks if azureblob supports use supplied public access level
func validatePublicAccess(publicAccess string) bool {
switch publicAccess {
case string(azblob.PublicAccessNone),
string(azblob.PublicAccessBlob),
string(azblob.PublicAccessContainer):
// valid cases
return true
default:
return false
}
}
// retryErrorCodes is a slice of error codes that we will retry // retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{ var retryErrorCodes = []int{
401, // Unauthorized (e.g. "Token has expired") 401, // Unauthorized (e.g. "Token has expired")
@@ -347,7 +379,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(err error) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// FIXME interpret special errors - more to do here // FIXME interpret special errors - more to do here
if storageErr, ok := err.(azblob.StorageError); ok { if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() { switch storageErr.ServiceCode() {
@@ -369,7 +404,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
} }
func checkUploadChunkSize(cs fs.SizeSuffix) error { func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte const minChunkSize = fs.SizeSuffixBase
if cs < minChunkSize { if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize) return errors.Errorf("%s is less than %s", cs, minChunkSize)
} }
@@ -499,6 +534,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive)) string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
} }
if !validatePublicAccess((opt.PublicAccess)) {
return nil, errors.Errorf("Azure Blob: Supported public access level are %s and %s",
string(azblob.PublicAccessBlob), string(azblob.PublicAccessContainer))
}
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
f := &Fs{ f := &Fs{
name: name, name: name,
@@ -517,6 +557,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt.MemoryPoolUseMmap, opt.MemoryPoolUseMmap,
), ),
} }
f.publicAccess = azblob.PublicAccessType(opt.PublicAccess)
f.imdsPacer.SetRetries(5) // per IMDS documentation f.imdsPacer.SetRetries(5) // per IMDS documentation
f.setRoot(root) f.setRoot(root)
f.features = (&fs.Features{ f.features = (&fs.Features{
@@ -578,7 +619,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Retry as specified by the documentation: // Retry as specified by the documentation:
// https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#retry-guidance // https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#retry-guidance
token, err = GetMSIToken(ctx, userMSI) token, err = GetMSIToken(ctx, userMSI)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
@@ -594,7 +635,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var refreshedToken adal.Token var refreshedToken adal.Token
err := f.imdsPacer.Call(func() (bool, error) { err := f.imdsPacer.Call(func() (bool, error) {
refreshedToken, err = GetMSIToken(ctx, userMSI) refreshedToken, err = GetMSIToken(ctx, userMSI)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
// Failed to refresh. // Failed to refresh.
@@ -803,7 +844,7 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
var err error var err error
response, err = f.cntURL(container).ListBlobsHierarchySegment(ctx, marker, delimiter, options) response, err = f.cntURL(container).ListBlobsHierarchySegment(ctx, marker, delimiter, options)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
@@ -1029,7 +1070,7 @@ func (f *Fs) listContainersToFn(fn listContainerFn) error {
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
var err error var err error
response, err = f.svcURL.ListContainersSegment(ctx, marker, params) response, err = f.svcURL.ListContainersSegment(ctx, marker, params)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1081,7 +1122,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
} }
// now try to create the container // now try to create the container
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
_, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone) _, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, f.publicAccess)
if err != nil { if err != nil {
if storageErr, ok := err.(azblob.StorageError); ok { if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() { switch storageErr.ServiceCode() {
@@ -1098,7 +1139,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
} }
} }
} }
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
}, nil) }, nil)
} }
@@ -1136,10 +1177,10 @@ func (f *Fs) deleteContainer(ctx context.Context, container string) error {
return false, fs.ErrorDirNotFound return false, fs.ErrorDirNotFound
} }
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
} }
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
}) })
} }
@@ -1212,7 +1253,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, azblob.ModifiedAccessConditions{}, options, azblob.AccessTierType(f.opt.AccessTier), nil) startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, azblob.ModifiedAccessConditions{}, options, azblob.AccessTierType(f.opt.AccessTier), nil)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1373,7 +1414,7 @@ func (o *Object) readMetaData() (err error) {
var blobProperties *azblob.BlobGetPropertiesResponse var blobProperties *azblob.BlobGetPropertiesResponse
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
blobProperties, err = blob.GetProperties(ctx, options, azblob.ClientProvidedKeyOptions{}) blobProperties, err = blob.GetProperties(ctx, options, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well // On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
@@ -1408,7 +1449,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
blob := o.getBlobReference() blob := o.getBlobReference()
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{}, azblob.ClientProvidedKeyOptions{}) _, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{}, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1451,7 +1492,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var downloadResponse *azblob.DownloadResponse var downloadResponse *azblob.DownloadResponse
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
downloadResponse, err = blob.Download(ctx, offset, count, ac, false, azblob.ClientProvidedKeyOptions{}) downloadResponse, err = blob.Download(ctx, offset, count, ac, false, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to open for download") return nil, errors.Wrap(err, "failed to open for download")
@@ -1592,7 +1633,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Stream contents of the reader object to the given blob URL // Stream contents of the reader object to the given blob URL
blockBlobURL := blob.ToBlockBlobURL() blockBlobURL := blob.ToBlockBlobURL()
_, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions) _, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions)
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1620,7 +1661,7 @@ func (o *Object) Remove(ctx context.Context) error {
ac := azblob.BlobAccessConditions{} ac := azblob.BlobAccessConditions{}
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
_, err := blob.Delete(ctx, snapShotOptions, ac) _, err := blob.Delete(ctx, snapShotOptions, ac)
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
} }
@@ -1649,7 +1690,7 @@ func (o *Object) SetTier(tier string) error {
ctx := context.Background() ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{}) _, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {

View File

@@ -2,12 +2,11 @@ package api
 import (
 	"fmt"
-	"path"
 	"strconv"
-	"strings"
 	"time"
 	"github.com/rclone/rclone/fs/fserrors"
+	"github.com/rclone/rclone/lib/version"
 )
// Error describes a B2 error response // Error describes a B2 error response
@@ -63,16 +62,17 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
 	return nil
 }
-const versionFormat = "-v2006-01-02-150405.000"
+// HasVersion returns true if it looks like the passed filename has a timestamp on it.
+//
+// Note that the passed filename's timestamp may still be invalid even if this
+// function returns true.
+func HasVersion(remote string) bool {
+	return version.Match(remote)
+}
 // AddVersion adds the timestamp as a version string into the filename passed in.
 func (t Timestamp) AddVersion(remote string) string {
-	ext := path.Ext(remote)
-	base := remote[:len(remote)-len(ext)]
-	s := time.Time(t).Format(versionFormat)
-	// Replace the '.' with a '-'
-	s = strings.Replace(s, ".", "-", -1)
-	return base + s + ext
+	return version.Add(remote, time.Time(t))
 }
// RemoveVersion removes the timestamp from a filename as a version string. // RemoveVersion removes the timestamp from a filename as a version string.
@@ -80,24 +80,9 @@ func (t Timestamp) AddVersion(remote string) string {
 // It returns the new file name and a timestamp, or the old filename
 // and a zero timestamp.
 func RemoveVersion(remote string) (t Timestamp, newRemote string) {
-	newRemote = remote
-	ext := path.Ext(remote)
-	base := remote[:len(remote)-len(ext)]
-	if len(base) < len(versionFormat) {
-		return
-	}
-	versionStart := len(base) - len(versionFormat)
-	// Check it ends in -xxx
-	if base[len(base)-4] != '-' {
-		return
-	}
-	// Replace with .xxx for parsing
-	base = base[:len(base)-4] + "." + base[len(base)-3:]
-	newT, err := time.Parse(versionFormat, base[versionStart:])
-	if err != nil {
-		return
-	}
-	return Timestamp(newT), base[:versionStart] + ext
+	time, newRemote := version.Remove(remote)
+	t = Timestamp(time)
+	return
 }
 // IsZero returns true if the timestamp is uninitialized
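
These helpers now delegate to lib/version, but the on-disk suffix format is unchanged; the removed implementation and the deleted tests further down still document it. A stand-alone sketch of that format, reconstructed from the removed code (lower-case helper name is hypothetical, not the real API):

```go
package main

import (
	"fmt"
	"path"
	"strings"
	"time"
)

// Reference layout used by the removed code; the '.' before the milliseconds
// is then swapped for a '-' as in the original implementation.
const versionFormat = "-v2006-01-02-150405.000"

// addVersion inserts the timestamp before the file extension.
func addVersion(remote string, t time.Time) string {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
	s := strings.Replace(t.Format(versionFormat), ".", "-", -1)
	return base + s + ext
}

func main() {
	t := time.Date(2001, 2, 3, 4, 5, 6, 123000000, time.UTC)
	fmt.Println(addVersion("potato.txt", t)) // potato-v2001-02-03-040506-123.txt
}
```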

View File

@@ -13,7 +13,6 @@ import (
var ( var (
emptyT api.Timestamp emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z")) t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z")) t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
) )
@@ -36,40 +35,6 @@ func TestTimestampUnmarshalJSON(t *testing.T) {
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual)) assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
} }
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) { func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero()) assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero()) assert.False(t, t0.IsZero())

View File

@@ -54,10 +54,10 @@ const (
 	decayConstant = 1 // bigger for slower decay, exponential
 	maxParts      = 10000
 	maxVersions   = 100 // maximum number of versions we search in --b2-versions mode
-	minChunkSize        = 5 * fs.MebiByte
-	defaultChunkSize    = 96 * fs.MebiByte
-	defaultUploadCutoff = 200 * fs.MebiByte
-	largeFileCopyCutoff = 4 * fs.GibiByte // 5E9 is the max
+	minChunkSize        = 5 * fs.Mebi
+	defaultChunkSize    = 96 * fs.Mebi
+	defaultUploadCutoff = 200 * fs.Mebi
+	largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
 	memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long
 	memoryPoolUseMmap   = false
 )
@@ -116,7 +116,7 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
Files above this size will be uploaded in chunks of "--b2-chunk-size". Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).`, This value should be set no larger than 4.657 GiB (== 5 GB).`,
Default: defaultUploadCutoff, Default: defaultUploadCutoff,
Advanced: true, Advanced: true,
}, { }, {
@@ -126,7 +126,7 @@ This value should be set no larger than 4.657GiB (== 5GB).`,
Any files larger than this that need to be server-side copied will be Any files larger than this that need to be server-side copied will be
copied in chunks of this size. copied in chunks of this size.
The minimum is 0 and the maximum is 4.6GB.`, The minimum is 0 and the maximum is 4.6 GiB.`,
Default: largeFileCopyCutoff, Default: largeFileCopyCutoff,
Advanced: true, Advanced: true,
}, { }, {
@@ -305,7 +305,10 @@ var retryErrorCodes = []int{
// shouldRetryNoAuth returns a boolean as to whether this resp and err // shouldRetryNoAuth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// For 429 or 503 errors look at the Retry-After: header and // For 429 or 503 errors look at the Retry-After: header and
// set the retry appropriately, starting with a minimum of 1 // set the retry appropriately, starting with a minimum of 1
// second if it isn't set. // second if it isn't set.
@@ -336,7 +339,7 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
} }
return true, err return true, err
} }
return f.shouldRetryNoReauth(resp, err) return f.shouldRetryNoReauth(ctx, resp, err)
} }
// errorHandler parses a non 2xx error response into an error // errorHandler parses a non 2xx error response into an error
@@ -479,12 +482,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.setRoot(newRoot) f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf) _, err := f.NewObject(ctx, leaf)
if err != nil { if err != nil {
if err == fs.ErrorObjectNotFound { // File doesn't exist so return old f
// File doesn't exist so return old f f.setRoot(oldRoot)
f.setRoot(oldRoot) return f, nil
return f, nil
}
return nil, err
} }
// return an error with an fs which points to the parent // return an error with an fs which points to the parent
return f, fs.ErrorIsFile return f, fs.ErrorIsFile
@@ -507,7 +507,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info) resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info)
return f.shouldRetryNoReauth(resp, err) return f.shouldRetryNoReauth(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to authenticate") return errors.Wrap(err, "failed to authenticate")
@@ -1353,7 +1353,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
} }
var request = api.GetDownloadAuthorizationRequest{ var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID, BucketID: bucketID,
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)), FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.rootDirectory, remote)),
ValidDurationInSeconds: validDurationInSeconds, ValidDurationInSeconds: validDurationInSeconds,
} }
var response api.GetDownloadAuthorizationResponse var response api.GetDownloadAuthorizationResponse
@@ -1744,6 +1744,13 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
 		ContentType: resp.Header.Get("Content-Type"),
 		Info:        Info,
 	}
+	// When reading files from B2 via cloudflare using
+	// --b2-download-url cloudflare strips the Content-Length
+	// headers (presumably so it can inject stuff) so use the old
+	// length read from the listing.
+	if info.Size < 0 {
+		info.Size = o.size
+	}
 	return resp, info, nil
 }
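
For context on the `info.Size < 0` test added here: when a response arrives without a Content-Length header, as it does through Cloudflare with --b2-download-url, Go's HTTP client reports the length as -1, so a negative parsed size is the signal to fall back to the size recorded in the listing. A small stand-alone demonstration of that client behaviour (editorial example, not rclone code):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Flushing before the handler returns forces chunked transfer encoding,
	// so the response carries no Content-Length header.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello")
		w.(http.Flusher).Flush()
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.ContentLength) // -1: length unknown to the client
}
```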

View File

@@ -230,14 +230,14 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
 	//
 	// The number of bytes in the file being uploaded. Note that
 	// this header is required; you cannot leave it out and just
 	// use chunked encoding. The minimum size of every part but
-	// the last one is 100MB.
+	// the last one is 100 MB (100,000,000 bytes)
 	//
 	// X-Bz-Content-Sha1
 	//
 	// The SHA1 checksum of the this part of the file. B2 will
 	// check this when the part is uploaded, to make sure that the
 	// data arrived correctly. The same SHA1 checksum must be
 	// passed to b2_finish_large_file.
 	opts := rest.Opts{
 		Method: "POST",

View File

@@ -36,13 +36,13 @@ func (t *Time) UnmarshalJSON(data []byte) error {
 // Error is returned from box when things go wrong
 type Error struct {
 	Type        string          `json:"type"`
 	Status      int             `json:"status"`
 	Code        string          `json:"code"`
-	ContextInfo json.RawMessage
+	ContextInfo json.RawMessage `json:"context_info"`
 	HelpURL     string          `json:"help_url"`
 	Message     string          `json:"message"`
 	RequestID   string          `json:"request_id"`
 }
 // Error returns a string for the error and satisfies the error interface
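
The only change to this struct is the explicit `json:"context_info"` tag, and it matters for the preflight-check code added in box.go below: encoding/json matches object keys to field names only case-insensitively, so the snake_case key Box sends would never populate an untagged ContextInfo field. A stand-alone illustration (editorial example):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type untagged struct {
	ContextInfo json.RawMessage // looked up as "ContextInfo"/"contextinfo"
}

type tagged struct {
	ContextInfo json.RawMessage `json:"context_info"`
}

func main() {
	data := []byte(`{"context_info":{"conflicts":{"id":"12345"}}}`)

	var a untagged
	var b tagged
	_ = json.Unmarshal(data, &a)
	_ = json.Unmarshal(data, &b)

	fmt.Printf("untagged: %q\n", a.ContextInfo) // "" - the key never matches
	fmt.Printf("tagged:   %s\n", b.ContextInfo) // {"conflicts":{"id":"12345"}}
}
```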
@@ -132,6 +132,38 @@ type UploadFile struct {
ContentModifiedAt Time `json:"content_modified_at"` ContentModifiedAt Time `json:"content_modified_at"`
} }
// PreUploadCheck is the request for upload preflight check
type PreUploadCheck struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
Size *int64 `json:"size,omitempty"`
}
// PreUploadCheckResponse is the response from upload preflight check
// if successful
type PreUploadCheckResponse struct {
UploadToken string `json:"upload_token"`
UploadURL string `json:"upload_url"`
}
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info // UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct { type UpdateFileModTime struct {
ContentModifiedAt Time `json:"content_modified_at"` ContentModifiedAt Time `json:"content_modified_at"`

View File

@@ -17,7 +17,6 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -84,7 +83,7 @@ func init() {
Name: "box", Name: "box",
Description: "Box", Description: "Box",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
jsonFile, ok := m.Get("box_config_file") jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type") boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token") boxAccessToken, boxAccessTokenOk := m.Get("access_token")
@@ -93,15 +92,15 @@ func init() {
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m) err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token with jwt authentication: %v", err) return nil, errors.Wrap(err, "failed to configure token with jwt authentication")
} }
// Else, if not using an access token, use oauth2 // Else, if not using an access token, use oauth2
} else if boxAccessToken == "" || !boxAccessTokenOk { } else if boxAccessToken == "" || !boxAccessTokenOk {
err = oauthutil.Config(ctx, "box", name, m, oauthConfig, nil) return oauthutil.ConfigOut("", &oauthutil.Options{
if err != nil { OAuth2Config: oauthConfig,
log.Fatalf("Failed to configure token with oauth authentication: %v", err) })
}
} }
return nil, nil
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "root_folder_id", Name: "root_folder_id",
@@ -126,7 +125,7 @@ func init() {
}}, }},
}, { }, {
Name: "upload_cutoff", Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload (>= 50MB).", Help: "Cutoff for switching to multipart upload (>= 50 MiB).",
Default: fs.SizeSuffix(defaultUploadCutoff), Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true, Advanced: true,
}, { }, {
@@ -157,15 +156,15 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
jsonFile = env.ShellExpand(jsonFile) jsonFile = env.ShellExpand(jsonFile)
boxConfig, err := getBoxConfig(jsonFile) boxConfig, err := getBoxConfig(jsonFile)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) return errors.Wrap(err, "get box config")
} }
privateKey, err := getDecryptedPrivateKey(boxConfig) privateKey, err := getDecryptedPrivateKey(boxConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) return errors.Wrap(err, "get decrypted private key")
} }
claims, err := getClaims(boxConfig, boxSubType) claims, err := getClaims(boxConfig, boxSubType)
if err != nil { if err != nil {
log.Fatalf("Failed to configure token: %v", err) return errors.Wrap(err, "get claims")
} }
signingHeaders := getSigningHeaders(boxConfig) signingHeaders := getSigningHeaders(boxConfig)
queryParams := getQueryParams(boxConfig) queryParams := getQueryParams(boxConfig)
@@ -317,10 +316,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
authRetry := false authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 { if resp != nil && resp.StatusCode == 401 && strings.Contains(resp.Header.Get("Www-Authenticate"), "expired_token") {
authRetry = true authRetry = true
fs.Debugf(nil, "Should retry: %v", err) fs.Debugf(nil, "Should retry: %v", err)
} }
@@ -548,7 +550,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -585,7 +587,7 @@ OUTER:
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return found, errors.Wrap(err, "couldn't list files") return found, errors.Wrap(err, "couldn't list files")
@@ -683,22 +685,80 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
return o, leaf, directoryID, nil return o, leaf, directoryID, nil
} }
// preUploadCheck checks to see if a file can be uploaded
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
}
if size >= 0 {
check.Size = &size
}
opts := rest.Opts{
Method: "OPTIONS",
Path: "/files/content/",
}
var result api.PreUploadCheckResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &check, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "item_name_in_use" {
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", errors.Wrap(err, "pre-upload check: JSON decode failed")
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", errors.Wrap(err, "pre-upload check: can't overwrite non file with file")
}
return conflict.Conflicts.ID, nil
}
return "", errors.Wrap(err, "pre-upload check")
}
return "", nil
}
// Put the object // Put the object
// //
// Copy the reader in to the new object which is returned // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil) // If directory doesn't exist, file doesn't exist so can upload
switch err { remote := src.Remote()
case nil: leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false)
return existingObj, existingObj.Update(ctx, in, src, options...) if err != nil {
case fs.ErrorObjectNotFound: if err == fs.ErrorDirNotFound {
// Not found so create it return f.PutUnchecked(ctx, in, src, options...)
return f.PutUnchecked(ctx, in, src) }
default:
return nil, err return nil, err
} }
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
return f.PutUnchecked(ctx, in, src, options...)
}
// If object exists then create a skeleton one with just id
o := &Object{
fs: f,
remote: remote,
id: ID,
}
return o, o.Update(ctx, in, src, options...)
} }
// PutStream uploads to the remote path with the modTime given of indeterminate size // PutStream uploads to the remote path with the modTime given of indeterminate size
@@ -740,7 +800,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
@@ -767,7 +827,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "rmdir failed") return errors.Wrap(err, "rmdir failed")
@@ -839,7 +899,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *api.Item var info *api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info) resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -877,7 +937,7 @@ func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -895,7 +955,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &user) resp, err = f.srv.CallJSON(ctx, &opts, nil, &user)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to read user info") return nil, errors.Wrap(err, "failed to read user info")
@@ -1008,7 +1068,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info) resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return info.SharedLink.URL, err return info.SharedLink.URL, err
} }
@@ -1026,7 +1086,7 @@ func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error {
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
@@ -1048,7 +1108,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't list trash") return errors.Wrap(err, "couldn't list trash")
@@ -1182,7 +1242,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
var info *api.Item var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info) resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return info, err return info, err
} }
@@ -1215,7 +1275,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1225,7 +1285,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// upload does a single non-multipart upload // upload does a single non-multipart upload
// //
// This is recommended for less than 50 MB of content // This is recommended for less than 50 MiB of content
func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time, options ...fs.OpenOption) (err error) { func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time, options ...fs.OpenOption) (err error) {
upload := api.UploadFile{ upload := api.UploadFile{
Name: o.fs.opt.Enc.FromStandardName(leaf), Name: o.fs.opt.Enc.FromStandardName(leaf),
@@ -1255,7 +1315,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID str
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result) resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err

View File

@@ -44,7 +44,7 @@ func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID stri
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return return
} }
@@ -74,7 +74,7 @@ func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, total
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
opts.Body = wrap(bytes.NewReader(chunk)) opts.Body = wrap(bytes.NewReader(chunk))
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -109,10 +109,10 @@ outer:
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil { if err != nil {
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
} }
body, err = rest.ReadBody(resp) body, err = rest.ReadBody(resp)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
delay := defaultDelay delay := defaultDelay
var why string var why string
@@ -167,7 +167,7 @@ func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error)
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return err return err
} }

View File

@@ -98,14 +98,14 @@ changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.`, will need to be cleared or unexpected EOF errors will occur.`,
Default: DefCacheChunkSize, Default: DefCacheChunkSize,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "1m", Value: "1M",
Help: "1MB", Help: "1 MiB",
}, { }, {
Value: "5M", Value: "5M",
Help: "5 MB", Help: "5 MiB",
}, { }, {
Value: "10M", Value: "10M",
Help: "10 MB", Help: "10 MiB",
}}, }},
}, { }, {
Name: "info_age", Name: "info_age",
@@ -132,13 +132,13 @@ oldest chunks until it goes under this value.`,
Default: DefCacheTotalChunkSize, Default: DefCacheTotalChunkSize,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "500M", Value: "500M",
Help: "500 MB", Help: "500 MiB",
}, { }, {
Value: "1G", Value: "1G",
Help: "1 GB", Help: "1 GiB",
}, { }, {
Value: "10G", Value: "10G",
Help: "10 GB", Help: "10 GiB",
}}, }},
}, { }, {
Name: "db_path", Name: "db_path",
@@ -339,8 +339,14 @@ func parseRootPath(path string) (string, error) {
return strings.Trim(path, "/"), nil return strings.Trim(path, "/"), nil
} }
var warnDeprecated sync.Once
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.Fs, error) { func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
warnDeprecated.Do(func() {
fs.Logf(nil, "WARNING: Cache backend is deprecated and may be removed in future. Please use VFS instead.")
})
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)

View File

@@ -836,7 +836,7 @@ func newRun() *run {
if uploadDir == "" { if uploadDir == "" {
r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp") r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
if err != nil { if err != nil {
log.Fatalf("Failed to create temp dir: %v", err) panic(fmt.Sprintf("Failed to create temp dir: %v", err))
} }
} else { } else {
r.tmpUploadDir = uploadDir r.tmpUploadDir = uploadDir
@@ -892,7 +892,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("type", "cache") m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote)) m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else { } else {
remoteType := config.FileGet(remote, "type", "") remoteType := config.FileGet(remote, "type")
if remoteType == "" { if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote) t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil return nil, nil
@@ -903,14 +903,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1) m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2) m.Set("password2", cryptPassword2)
} }
remoteRemote := config.FileGet(remote, "remote", "") remoteRemote := config.FileGet(remote, "remote")
if remoteRemote == "" { if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote) t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil return nil, nil
} }
remoteRemoteParts := strings.Split(remoteRemote, ":") remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0] remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type", "") remoteType := config.FileGet(remoteWrapping, "type")
if remoteType != "cache" { if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType) t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil return nil, nil
@@ -1034,7 +1034,7 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f) objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f) objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
obj, err = f.Put(context.Background(), in1, objInfo1) _, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err) require.NoError(t, err)
obj, err = f.NewObject(context.Background(), remote) obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err) require.NoError(t, err)

View File

@@ -47,7 +47,8 @@ import (
// The following types of chunks are supported: // The following types of chunks are supported:
// data and control, active and temporary. // data and control, active and temporary.
// Chunk type is identified by matching chunk file name // Chunk type is identified by matching chunk file name
// based on the chunk name format configured by user. // based on the chunk name format configured by user and transaction
// style being used.
// //
// Both data and control chunks can be either temporary (aka hidden) // Both data and control chunks can be either temporary (aka hidden)
// or active (non-temporary aka normal aka permanent). // or active (non-temporary aka normal aka permanent).
@@ -63,6 +64,12 @@ import (
// which is transparently converted to the new format. In its maximum // which is transparently converted to the new format. In its maximum
// length of 13 decimals it makes a 7-digit base-36 number. // length of 13 decimals it makes a 7-digit base-36 number.
// //
// When transactions is set to the norename style, data chunks will
// keep their temporary chunk names (with the transaction identifier
// suffix). To distinguish them from temporary chunks, the txn field
// of the metadata file is set to match the transaction identifier of
// the data chunks.
//
// Chunker can tell data chunks from control chunks by the characters // Chunker can tell data chunks from control chunks by the characters
// located in the "hash placeholder" position of configured format. // located in the "hash placeholder" position of configured format.
// Data chunks have decimal digits there. // Data chunks have decimal digits there.
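
To make the naming relationship described above concrete, here is a minimal standalone sketch. The base name, transaction ID, chunk count and the "*.rclone_chunk.###" layout are assumptions for illustration, not output captured from rclone:

package main

import "fmt"

func main() {
	// Hypothetical norename composite file: the data chunks keep their
	// temporary names (transaction ID suffix) and the metadata object,
	// stored under the base name, records the same ID in its "txn" field.
	const base, txn = "video.avi", "bp562k"
	fmt.Printf("%s -> metadata object with txn=%q\n", base, txn)
	for chunkNo := 1; chunkNo <= 2; chunkNo++ {
		fmt.Printf("%s.rclone_chunk.%03d_%s\n", base, chunkNo, txn)
	}
}
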
@@ -101,7 +108,7 @@ const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255 const maxMetadataSizeWritten = 255
// Current/highest supported metadata format. // Current/highest supported metadata format.
const metadataVersion = 1 const metadataVersion = 2
// optimizeFirstChunk enables the following optimization in the Put: // optimizeFirstChunk enables the following optimization in the Put:
// If a single chunk is expected, put the first chunk using the // If a single chunk is expected, put the first chunk using the
@@ -148,7 +155,7 @@ Normally should contain a ':' and a path, e.g. "myremote:path/to/dir",
}, { }, {
Name: "chunk_size", Name: "chunk_size",
Advanced: false, Advanced: false,
Default: fs.SizeSuffix(2147483648), // 2GB Default: fs.SizeSuffix(2147483648), // 2 GiB
Help: `Files larger than chunk size will be split in chunks.`, Help: `Files larger than chunk size will be split in chunks.`,
}, { }, {
Name: "name_format", Name: "name_format",
@@ -224,6 +231,31 @@ It has the following fields: ver, size, nchunks, md5, sha1.`,
Help: "Warn user, skip incomplete file and proceed.", Help: "Warn user, skip incomplete file and proceed.",
}, },
}, },
}, {
Name: "transactions",
Advanced: true,
Default: "rename",
Help: `Choose how chunker should handle temporary files during transactions.`,
Hide: fs.OptionHideCommandLine,
Examples: []fs.OptionExample{
{
Value: "rename",
Help: "Rename temporary files after a successful transaction.",
}, {
Value: "norename",
Help: `Leave temporary file names and write transaction ID to metadata file.
Metadata is required for norename transactions (meta format cannot be "none").
If you use norename transactions, be careful not to downgrade Rclone:
older versions of Rclone don't support this transaction style and will
misinterpret files manipulated by norename transactions.
This method is EXPERIMENTAL, don't use on production systems.`,
}, {
Value: "auto",
Help: `Rename or norename will be used depending on capabilities of the backend.
If meta format is set to "none", rename transactions will always be used.
This method is EXPERIMENTAL, don't use on production systems.`,
},
},
}}, }},
}) })
} }
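
For reference, a chunker remote opting into the experimental norename style could be configured roughly as below; the remote name, wrapped remote and chunk size are placeholders, and only the option keys come from the code above:

[overlay]
type = chunker
remote = mydrive:bucket
chunk_size = 2G
transactions = norename
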
@@ -245,13 +277,10 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
return nil, errors.New("can't point remote at itself - check the value of the remote setting") return nil, errors.New("can't point remote at itself - check the value of the remote setting")
} }
baseName, basePath, err := fspath.Parse(remote) baseName, basePath, err := fspath.SplitFs(remote)
if err != nil { if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote) return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
} }
if baseName != "" {
baseName += ":"
}
// Look for a file first // Look for a file first
remotePath := fspath.JoinRootPath(basePath, rpath) remotePath := fspath.JoinRootPath(basePath, rpath)
baseFs, err := cache.Get(ctx, baseName+remotePath) baseFs, err := cache.Get(ctx, baseName+remotePath)
@@ -271,7 +300,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
cache.PinUntilFinalized(f.base, f) cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm. f.dirSort = true // processEntries requires that meta Objects prerun data chunks atm.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType); err != nil { if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
return nil, err return nil, err
} }
@@ -309,13 +338,14 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
Remote string `config:"remote"` Remote string `config:"remote"`
ChunkSize fs.SizeSuffix `config:"chunk_size"` ChunkSize fs.SizeSuffix `config:"chunk_size"`
NameFormat string `config:"name_format"` NameFormat string `config:"name_format"`
StartFrom int `config:"start_from"` StartFrom int `config:"start_from"`
MetaFormat string `config:"meta_format"` MetaFormat string `config:"meta_format"`
HashType string `config:"hash_type"` HashType string `config:"hash_type"`
FailHard bool `config:"fail_hard"` FailHard bool `config:"fail_hard"`
Transactions string `config:"transactions"`
} }
// Fs represents a wrapped fs.Fs // Fs represents a wrapped fs.Fs
@@ -337,12 +367,13 @@ type Fs struct {
opt Options // copy of Options opt Options // copy of Options
features *fs.Features // optional features features *fs.Features // optional features
dirSort bool // reserved for future, ignored dirSort bool // reserved for future, ignored
useNoRename bool // can be set with the transactions option
} }
// configure sets up chunker for given name format, meta format and hash type. // configure sets up chunker for given name format, meta format and hash type.
// It also seeds the source of random transaction identifiers. // It also seeds the source of random transaction identifiers.
// configure must be called only from NewFs or by unit tests. // configure must be called only from NewFs or by unit tests.
func (f *Fs) configure(nameFormat, metaFormat, hashType string) error { func (f *Fs) configure(nameFormat, metaFormat, hashType, transactionMode string) error {
if err := f.setChunkNameFormat(nameFormat); err != nil { if err := f.setChunkNameFormat(nameFormat); err != nil {
return errors.Wrapf(err, "invalid name format '%s'", nameFormat) return errors.Wrapf(err, "invalid name format '%s'", nameFormat)
} }
@@ -352,6 +383,9 @@ func (f *Fs) configure(nameFormat, metaFormat, hashType string) error {
if err := f.setHashType(hashType); err != nil { if err := f.setHashType(hashType); err != nil {
return err return err
} }
if err := f.setTransactionMode(transactionMode); err != nil {
return err
}
randomSeed := time.Now().UnixNano() randomSeed := time.Now().UnixNano()
f.xactIDRand = rand.New(rand.NewSource(randomSeed)) f.xactIDRand = rand.New(rand.NewSource(randomSeed))
@@ -411,6 +445,27 @@ func (f *Fs) setHashType(hashType string) error {
return nil return nil
} }
func (f *Fs) setTransactionMode(transactionMode string) error {
switch transactionMode {
case "rename":
f.useNoRename = false
case "norename":
if !f.useMeta {
return errors.New("incompatible transaction options")
}
f.useNoRename = true
case "auto":
f.useNoRename = !f.CanQuickRename()
if f.useNoRename && !f.useMeta {
f.useNoRename = false
return errors.New("using norename transactions requires metadata")
}
default:
return fmt.Errorf("unsupported transaction mode '%s'", transactionMode)
}
return nil
}
// setChunkNameFormat converts pattern based chunk name format // setChunkNameFormat converts pattern based chunk name format
// into Printf format and Regular expressions for data and // into Printf format and Regular expressions for data and
// control chunks. // control chunks.
@@ -693,6 +748,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
byRemote := make(map[string]*Object) byRemote := make(map[string]*Object)
badEntry := make(map[string]bool) badEntry := make(map[string]bool)
isSubdir := make(map[string]bool) isSubdir := make(map[string]bool)
txnByRemote := map[string]string{}
var tempEntries fs.DirEntries var tempEntries fs.DirEntries
for _, dirOrObject := range sortedEntries { for _, dirOrObject := range sortedEntries {
@@ -705,12 +761,18 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
object := f.newObject("", entry, nil) object := f.newObject("", entry, nil)
byRemote[remote] = object byRemote[remote] = object
tempEntries = append(tempEntries, object) tempEntries = append(tempEntries, object)
if f.useNoRename {
txnByRemote[remote], err = object.readXactID(ctx)
if err != nil {
return nil, err
}
}
break break
} }
// this is some kind of chunk // this is some kind of chunk
// metobject should have been created above if present // metobject should have been created above if present
isSpecial := xactID != "" || ctrlType != ""
mainObject := byRemote[mainRemote] mainObject := byRemote[mainRemote]
isSpecial := xactID != txnByRemote[mainRemote] || ctrlType != ""
if mainObject == nil && f.useMeta && !isSpecial { if mainObject == nil && f.useMeta && !isSpecial {
fs.Debugf(f, "skip orphan data chunk %q", remote) fs.Debugf(f, "skip orphan data chunk %q", remote)
break break
@@ -809,10 +871,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
} }
var ( var (
o *Object o *Object
baseObj fs.Object baseObj fs.Object
err error currentXactID string
sameMain bool err error
sameMain bool
) )
if f.useMeta { if f.useMeta {
@@ -856,7 +919,14 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
return nil, errors.Wrap(err, "can't detect composite file") return nil, errors.Wrap(err, "can't detect composite file")
} }
if f.useNoRename {
currentXactID, err = o.readXactID(ctx)
if err != nil {
return nil, err
}
}
caseInsensitive := f.features.CaseInsensitive caseInsensitive := f.features.CaseInsensitive
for _, dirOrObject := range entries { for _, dirOrObject := range entries {
entry, ok := dirOrObject.(fs.Object) entry, ok := dirOrObject.(fs.Object)
if !ok { if !ok {
@@ -878,7 +948,7 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
if !sameMain { if !sameMain {
continue // skip alien chunks continue // skip alien chunks
} }
if ctrlType != "" || xactID != "" { if ctrlType != "" || xactID != currentXactID {
if f.useMeta { if f.useMeta {
// temporary/control chunk calls for lazy metadata read // temporary/control chunk calls for lazy metadata read
o.unsure = true o.unsure = true
@@ -993,12 +1063,57 @@ func (o *Object) readMetadata(ctx context.Context) error {
} }
o.md5 = metaInfo.md5 o.md5 = metaInfo.md5
o.sha1 = metaInfo.sha1 o.sha1 = metaInfo.sha1
o.xactID = metaInfo.xactID
} }
o.isFull = true // cache results o.isFull = true // cache results
o.xIDCached = true
return nil return nil
} }
// readXactID returns the transaction ID stored in the passed metadata object
func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// if xactID has already been read and cached, return it now
if o.xIDCached {
return o.xactID, nil
}
// Avoid reading metadata for backends that don't use xactID to identify permanent chunks
if !o.f.useNoRename {
return "", errors.New("readXactID requires norename transactions")
}
if o.main == nil {
return "", errors.New("readXactID requires valid metaobject")
}
if o.main.Size() > maxMetadataSize {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
reader, err := o.main.Open(ctx)
if err != nil {
return "", err
}
data, err := ioutil.ReadAll(reader)
_ = reader.Close() // ensure file handle is freed on windows
if err != nil {
return "", err
}
switch o.f.opt.MetaFormat {
case "simplejson":
if data != nil && len(data) > maxMetadataSizeWritten {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
var metadata metaSimpleJSON
err = json.Unmarshal(data, &metadata)
if err != nil {
return "", nil // this was likely not a metadata object, return empty xactID but don't throw error
}
xactID = metadata.XactID
}
o.xactID = xactID
o.xIDCached = true
return xactID, nil
}
// put implements Put, PutStream, PutUnchecked, Update // put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put( func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption, ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
@@ -1151,14 +1266,17 @@ func (f *Fs) put(
// If previous object was chunked, remove its chunks // If previous object was chunked, remove its chunks
f.removeOldChunks(ctx, baseRemote) f.removeOldChunks(ctx, baseRemote)
// Rename data chunks from temporary to final names if !f.useNoRename {
for chunkNo, chunk := range c.chunks { // The transaction suffix will be removed for backends with quick rename operations
chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "") for chunkNo, chunk := range c.chunks {
chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed) chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "")
if errMove != nil { chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed)
return nil, errMove if errMove != nil {
return nil, errMove
}
c.chunks[chunkNo] = chunkMoved
} }
c.chunks[chunkNo] = chunkMoved xactID = ""
} }
if !f.useMeta { if !f.useMeta {
@@ -1178,7 +1296,7 @@ func (f *Fs) put(
switch f.opt.MetaFormat { switch f.opt.MetaFormat {
case "simplejson": case "simplejson":
c.updateHashes() c.updateHashes()
metadata, err = marshalSimpleJSON(ctx, sizeTotal, len(c.chunks), c.md5, c.sha1) metadata, err = marshalSimpleJSON(ctx, sizeTotal, len(c.chunks), c.md5, c.sha1, xactID)
} }
if err == nil { if err == nil {
metaInfo := f.wrapInfo(src, baseRemote, int64(len(metadata))) metaInfo := f.wrapInfo(src, baseRemote, int64(len(metadata)))
@@ -1190,6 +1308,7 @@ func (f *Fs) put(
o := f.newObject("", metaObject, c.chunks) o := f.newObject("", metaObject, c.chunks)
o.size = sizeTotal o.size = sizeTotal
o.xactID = xactID
return o, nil return o, nil
} }
@@ -1329,7 +1448,7 @@ func (c *chunkingReader) dummyRead(in io.Reader, size int64) error {
c.accountBytes(size) c.accountBytes(size)
return nil return nil
} }
const bufLen = 1048576 // 1MB const bufLen = 1048576 // 1 MiB
buf := make([]byte, bufLen) buf := make([]byte, bufLen)
for size > 0 { for size > 0 {
n := size n := size
@@ -1593,7 +1712,7 @@ func (f *Fs) copyOrMove(ctx context.Context, o *Object, remote string, do copyMo
var metadata []byte var metadata []byte
switch f.opt.MetaFormat { switch f.opt.MetaFormat {
case "simplejson": case "simplejson":
metadata, err = marshalSimpleJSON(ctx, newObj.size, len(newChunks), md5, sha1) metadata, err = marshalSimpleJSON(ctx, newObj.size, len(newChunks), md5, sha1, o.xactID)
if err == nil { if err == nil {
metaInfo := f.wrapInfo(metaObject, "", int64(len(metadata))) metaInfo := f.wrapInfo(metaObject, "", int64(len(metadata)))
err = newObj.main.Update(ctx, bytes.NewReader(metadata), metaInfo) err = newObj.main.Update(ctx, bytes.NewReader(metadata), metaInfo)
@@ -1809,7 +1928,13 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType) //fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject { if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path) mainPath, _, _, xactID := f.parseChunkName(path)
if mainPath != "" && xactID == "" { metaXactID := ""
if f.useNoRename {
metaObject, _ := f.base.NewObject(ctx, mainPath)
dummyObject := f.newObject("", metaObject, nil)
metaXactID, _ = dummyObject.readXactID(ctx)
}
if mainPath != "" && xactID == metaXactID {
path = mainPath path = mainPath
} }
} }
@@ -1830,15 +1955,17 @@ func (f *Fs) Shutdown(ctx context.Context) error {
// Object represents a composite file wrapping one or more data chunks // Object represents a composite file wrapping one or more data chunks
type Object struct { type Object struct {
remote string remote string
main fs.Object // meta object if file is composite, or wrapped non-chunked file, nil if meta format is 'none' main fs.Object // meta object if file is composite, or wrapped non-chunked file, nil if meta format is 'none'
chunks []fs.Object // active data chunks if file is composite, or wrapped file as a single chunk if meta format is 'none' chunks []fs.Object // active data chunks if file is composite, or wrapped file as a single chunk if meta format is 'none'
size int64 // cached total size of chunks in a composite file or -1 for non-chunked files size int64 // cached total size of chunks in a composite file or -1 for non-chunked files
isFull bool // true if metadata has been read isFull bool // true if metadata has been read
unsure bool // true if need to read metadata to detect object type xIDCached bool // true if xactID has been read
md5 string unsure bool // true if need to read metadata to detect object type
sha1 string xactID string // transaction ID for "norename" or empty string for "renamed" chunks
f *Fs md5 string
sha1 string
f *Fs
} }
func (o *Object) addChunk(chunk fs.Object, chunkNo int) error { func (o *Object) addChunk(chunk fs.Object, chunkNo int) error {
@@ -2166,6 +2293,7 @@ type ObjectInfo struct {
src fs.ObjectInfo src fs.ObjectInfo
fs *Fs fs *Fs
nChunks int // number of data chunks nChunks int // number of data chunks
xactID string // transaction ID for "norename" or empty string for "renamed" chunks
size int64 // overrides source size by the total size of data chunks size int64 // overrides source size by the total size of data chunks
remote string // overrides remote name remote string // overrides remote name
md5 string // overrides MD5 checksum md5 string // overrides MD5 checksum
@@ -2264,8 +2392,9 @@ type metaSimpleJSON struct {
Size *int64 `json:"size"` // total size of data chunks Size *int64 `json:"size"` // total size of data chunks
ChunkNum *int `json:"nchunks"` // number of data chunks ChunkNum *int `json:"nchunks"` // number of data chunks
// optional extra fields // optional extra fields
MD5 string `json:"md5,omitempty"` MD5 string `json:"md5,omitempty"`
SHA1 string `json:"sha1,omitempty"` SHA1 string `json:"sha1,omitempty"`
XactID string `json:"txn,omitempty"` // transaction ID for norename transactions
} }
// marshalSimpleJSON // marshalSimpleJSON
@@ -2275,16 +2404,20 @@ type metaSimpleJSON struct {
// - if file contents can be mistaken as meta object // - if file contents can be mistaken as meta object
// - if consistent hashing is On but wrapped remote can't provide given hash // - if consistent hashing is On but wrapped remote can't provide given hash
// //
func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1 string) ([]byte, error) { func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1, xactID string) ([]byte, error) {
version := metadataVersion version := metadataVersion
if xactID == "" && version == 2 {
version = 1
}
metadata := metaSimpleJSON{ metadata := metaSimpleJSON{
// required core fields // required core fields
Version: &version, Version: &version,
Size: &size, Size: &size,
ChunkNum: &nChunks, ChunkNum: &nChunks,
// optional extra fields // optional extra fields
MD5: md5, MD5: md5,
SHA1: sha1, SHA1: sha1,
XactID: xactID,
} }
data, err := json.Marshal(&metadata) data, err := json.Marshal(&metadata)
if err == nil && data != nil && len(data) >= maxMetadataSizeWritten { if err == nil && data != nil && len(data) >= maxMetadataSizeWritten {
@@ -2362,6 +2495,7 @@ func unmarshalSimpleJSON(ctx context.Context, metaObject fs.Object, data []byte)
info.nChunks = *metadata.ChunkNum info.nChunks = *metadata.ChunkNum
info.md5 = metadata.MD5 info.md5 = metadata.MD5
info.sha1 = metadata.SHA1 info.sha1 = metadata.SHA1
info.xactID = metadata.XactID
return info, true, nil return info, true, nil
} }
@@ -2394,6 +2528,11 @@ func (f *Fs) Precision() time.Duration {
return f.base.Precision() return f.base.Precision()
} }
// CanQuickRename returns true if the Fs supports a quick rename operation
func (f *Fs) CanQuickRename() bool {
return f.base.Features().Move != nil
}
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)

View File

@@ -33,7 +33,7 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{ fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: fmt.Sprintf("chunker-upload-%dk", kilobytes), Path: fmt.Sprintf("chunker-upload-%dk", kilobytes),
Size: int64(kilobytes) * int64(fs.KibiByte), Size: int64(kilobytes) * int64(fs.Kibi),
}) })
}) })
} }
@@ -468,9 +468,15 @@ func testPreventCorruption(t *testing.T, f *Fs) {
return obj return obj
} }
billyObj := newFile("billy") billyObj := newFile("billy")
billyTxn := billyObj.(*Object).xactID
if f.useNoRename {
require.True(t, billyTxn != "")
} else {
require.True(t, billyTxn == "")
}
billyChunkName := func(chunkNo int) string { billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", "") return f.makeChunkName(billyObj.Remote(), chunkNo, "", billyTxn)
} }
err := f.Mkdir(ctx, billyChunkName(1)) err := f.Mkdir(ctx, billyChunkName(1))
@@ -487,11 +493,13 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// accessing chunks in strict mode is prohibited // accessing chunks in strict mode is prohibited
f.opt.FailHard = true f.opt.FailHard = true
billyChunk4Name := billyChunkName(4) billyChunk4Name := billyChunkName(4)
billyChunk4, err := f.NewObject(ctx, billyChunk4Name) _, err = f.base.NewObject(ctx, billyChunk4Name)
require.NoError(t, err)
_, err = f.NewObject(ctx, billyChunk4Name)
assertOverlapError(err) assertOverlapError(err)
f.opt.FailHard = false f.opt.FailHard = false
billyChunk4, err = f.NewObject(ctx, billyChunk4Name) billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, billyChunk4) require.NotNil(t, billyChunk4)
@@ -520,7 +528,8 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// recreate billy in case it was anyhow corrupted // recreate billy in case it was anyhow corrupted
willyObj := newFile("willy") willyObj := newFile("willy")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "") willyTxn := willyObj.(*Object).xactID
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", willyTxn)
f.opt.FailHard = false f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName) willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true f.opt.FailHard = true
@@ -561,17 +570,20 @@ func testChunkNumberOverflow(t *testing.T, f *Fs) {
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(100) contents := random.String(100)
newFile := func(f fs.Fs, name string) (fs.Object, string) { newFile := func(f fs.Fs, name string) (obj fs.Object, filename string, txnID string) {
filename := path.Join(dir, name) filename = path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime} item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true) _, obj = fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj) require.NotNil(t, obj)
return obj, filename if chunkObj, isChunkObj := obj.(*Object); isChunkObj {
txnID = chunkObj.xactID
}
return
} }
f.opt.FailHard = false f.opt.FailHard = false
file, fileName := newFile(f, "wreaker") file, fileName, fileTxn := newFile(f, "wreaker")
wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", "")) wreak, _, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", fileTxn))
f.opt.FailHard = false f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision()) fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
@@ -650,7 +662,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
} }
} }
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "") metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "", "")
require.NoError(t, err) require.NoError(t, err)
todaysMeta := string(metaData) todaysMeta := string(metaData)
runSubtest(todaysMeta, "today") runSubtest(todaysMeta, "today")
@@ -664,7 +676,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
runSubtest(futureMeta, "future") runSubtest(futureMeta, "future")
} }
// test that chunker refuses to change on objects with future/unknowm metadata // Test that chunker refuses to change on objects with future/unknown metadata
func testFutureProof(t *testing.T, f *Fs) { func testFutureProof(t *testing.T, f *Fs) {
if f.opt.MetaFormat == "none" { if f.opt.MetaFormat == "none" {
t.Skip("this test requires metadata support") t.Skip("this test requires metadata support")
@@ -738,6 +750,100 @@ func testFutureProof(t *testing.T, f *Fs) {
} }
} }
// The newer method of doing transactions without renaming should still be able to correctly process chunks that were created with renaming
// If you attempt to do the inverse, however, the data chunks will be ignored, causing commands to behave incorrectly
func testBackwardsCompatibility(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't do norename transactions without metadata")
}
const dir = "backcomp"
ctx := context.Background()
saveOpt := f.opt
saveUseNoRename := f.useNoRename
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
f.useNoRename = saveUseNoRename
}()
f.opt.ChunkSize = fs.SizeSuffix(10)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(250)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
}
f.opt.FailHard = false
f.useNoRename = false
file, fileName := newFile(f, "renamefile")
f.opt.FailHard = false
item := fstest.NewItem(fileName, contents, modTime)
var items []fstest.Item
items = append(items, item)
f.useNoRename = true
fstest.CheckListingWithRoot(t, f, dir, items, nil, f.Precision())
_, err := f.NewObject(ctx, fileName)
assert.NoError(t, err)
f.opt.FailHard = true
_, err = f.List(ctx, dir)
assert.NoError(t, err)
f.opt.FailHard = false
_ = file.Remove(ctx)
}
func testChunkerServerSideMove(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't test norename transactions without metadata")
}
ctx := context.Background()
const dir = "servermovetest"
subRemote := fmt.Sprintf("%s:%s/%s", f.Name(), f.Root(), dir)
subFs1, err := fs.NewFs(ctx, subRemote+"/subdir1")
assert.NoError(t, err)
fs1, isChunkerFs := subFs1.(*Fs)
assert.True(t, isChunkerFs)
fs1.useNoRename = false
fs1.opt.ChunkSize = fs.SizeSuffix(3)
subFs2, err := fs.NewFs(ctx, subRemote+"/subdir2")
assert.NoError(t, err)
fs2, isChunkerFs := subFs2.(*Fs)
assert.True(t, isChunkerFs)
fs2.useNoRename = true
fs2.opt.ChunkSize = fs.SizeSuffix(3)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
item := fstest.Item{Path: "movefile", ModTime: modTime}
contents := "abcdef"
_, file := fstests.PutTestContents(ctx, t, fs1, &item, contents, true)
dstOverwritten, _ := fs2.NewObject(ctx, "movefile")
dstFile, err := operations.Move(ctx, fs2, dstOverwritten, "movefile", file)
assert.NoError(t, err)
assert.Equal(t, int64(len(contents)), dstFile.Size())
r, err := dstFile.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
data, err := ioutil.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
_ = operations.Purge(ctx, f.base, dir)
}
// InternalTest dispatches all internal tests // InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) { func (f *Fs) InternalTest(t *testing.T) {
t.Run("PutLarge", func(t *testing.T) { t.Run("PutLarge", func(t *testing.T) {
@@ -764,6 +870,12 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("FutureProof", func(t *testing.T) { t.Run("FutureProof", func(t *testing.T) {
testFutureProof(t, f) testFutureProof(t, f)
}) })
t.Run("BackwardsCompatibility", func(t *testing.T) {
testBackwardsCompatibility(t, f)
})
t.Run("ChunkerServerSideMove", func(t *testing.T) {
testChunkerServerSideMove(t, f)
})
} }
var _ fstests.InternalTester = (*Fs)(nil) var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -36,7 +36,7 @@ import (
// Globals // Globals
const ( const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256KB and 8 MB. maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
bufferSize = 8388608 bufferSize = 8388608
heuristicBytes = 1048576 heuristicBytes = 1048576
@@ -53,7 +53,7 @@ const (
Gzip = 2 Gzip = 2
) )
var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9+_]{11})$") var nameRegexp = regexp.MustCompile("^(.+?)\\.([A-Za-z0-9-_]{11})$")
// Register with Fs // Register with Fs
func init() { func init() {

View File

@@ -12,12 +12,14 @@ import (
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"time"
"unicode/utf8" "unicode/utf8"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7" "github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/version"
"github.com/rfjakob/eme" "github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox" "golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt" "golang.org/x/crypto/scrypt"
@@ -442,11 +444,32 @@ func (c *Cipher) encryptFileName(in string) string {
if !c.dirNameEncrypt && i != (len(segments)-1) { if !c.dirNameEncrypt && i != (len(segments)-1) {
continue continue
} }
// Strip version string so that only the non-versioned part
// of the file name gets encrypted/obfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard { if c.mode == NameEncryptionStandard {
segments[i] = c.encryptSegment(segments[i]) segments[i] = c.encryptSegment(segments[i])
} else { } else {
segments[i] = c.obfuscateSegment(segments[i]) segments[i] = c.obfuscateSegment(segments[i])
} }
// Add back a version to the encrypted/obfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
} }
return strings.Join(segments, "/") return strings.Join(segments, "/")
} }
@@ -477,6 +500,21 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if !c.dirNameEncrypt && i != (len(segments)-1) { if !c.dirNameEncrypt && i != (len(segments)-1) {
continue continue
} }
// Strip version string so that only the non-versioned part
// of the file name gets decrypted/deobfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard { if c.mode == NameEncryptionStandard {
segments[i], err = c.decryptSegment(segments[i]) segments[i], err = c.decryptSegment(segments[i])
} else { } else {
@@ -486,6 +524,12 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if err != nil { if err != nil {
return "", err return "", err
} }
// Add back a version to the decrypted/deobfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
} }
return strings.Join(segments, "/"), nil return strings.Join(segments, "/"), nil
} }
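
The version handling above relies on the lib/version helpers imported earlier in this file. Below is a short sketch of the assumed round trip on a hypothetical versioned name; the expected values in the comments are inferred from the test cases further down, not captured output:

package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/version"
)

func main() {
	// A file name carrying a -vYYYY-MM-DD-HHMMSS-SSS version suffix
	name := "hello-v2001-02-03-040506-123.txt"
	if version.Match(name) {
		t, base := version.Remove(name)   // strip the version before encrypting/obfuscating
		fmt.Println(base)                 // expected: hello.txt
		fmt.Println(version.Add(base, t)) // expected: hello-v2001-02-03-040506-123.txt
	}
}
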
@@ -494,10 +538,18 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
func (c *Cipher) DecryptFileName(in string) (string, error) { func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff { if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix) remainingLength := len(in) - len(encryptedSuffix)
if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) { if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
return in[:remainingLength], nil return "", ErrorNotAnEncryptedFile
} }
return "", ErrorNotAnEncryptedFile decrypted := in[:remainingLength]
if version.Match(decrypted) {
_, unversioned := version.Remove(decrypted)
if unversioned == "" {
return "", ErrorNotAnEncryptedFile
}
}
// Leave the version string on, if it was there
return decrypted, nil
} }
return c.decryptFileName(in) return c.decryptFileName(in)
} }

View File

@@ -160,22 +160,29 @@ func TestEncryptFileName(t *testing.T) {
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Standard mode with directory name encryption off // Standard mode with directory name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false) c, _ = newCipher(NameEncryptionStandard, "", "", false)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1")) assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12")) assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123")) assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", c.EncryptFileName("1-v2001-02-03-040506-123"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123", c.EncryptFileName("1/12-v2001-02-03-040506-123"))
// Now off mode // Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true) c, _ = newCipher(NameEncryptionOff, "", "", true)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123")) assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Obfuscation mode // Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true) c, _ = newCipher(NameEncryptionObfuscated, "", "", true)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "49.6/99.23/150.890/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "49.6/99.23/150.890/162.uryyB-v2001-02-03-040506-123.GKG", c.EncryptFileName("1/12/123/hello-v2001-02-03-040506-123.txt"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
// Obfuscation mode with directory name encryption off // Obfuscation mode with directory name encryption off
c, _ = newCipher(NameEncryptionObfuscated, "", "", false) c, _ = newCipher(NameEncryptionObfuscated, "", "", false)
assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "1/12/123/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
} }
@@ -194,14 +201,19 @@ func TestDecryptFileName(t *testing.T) {
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize}, {NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil}, {NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123", "1-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil}, {NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile}, {NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile}, {NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil}, {NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile}, {NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil}, {NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil}, {NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil}, {NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
} { } {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt) c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
actual, actualErr := c.DecryptFileName(test.in) actual, actualErr := c.DecryptFileName(test.in)

View File

@@ -101,6 +101,21 @@ names, or for debugging purposes.`,
Default: false, Default: false,
Hide: fs.OptionHideConfigurator, Hide: fs.OptionHideConfigurator,
Advanced: true, Advanced: true,
}, {
Name: "no_data_encryption",
Help: "Option to either encrypt file data or leave it unencrypted.",
Default: false,
Advanced: true,
Examples: []fs.OptionExample{
{
Value: "true",
Help: "Don't encrypt file data, leave it unencrypted.",
},
{
Value: "false",
Help: "Encrypt file data.",
},
},
}}, }},
}) })
} }
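
A hypothetical crypt remote using the new option might look roughly like this; the remote name, wrapped remote and password are placeholders, and only the option keys come from the code above:

[secret]
type = crypt
remote = mydrive:encrypted
password = <obscured password>
filename_encryption = standard
no_data_encryption = true
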
@@ -209,6 +224,7 @@ type Options struct {
Remote string `config:"remote"` Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"` FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"` DirectoryNameEncryption bool `config:"directory_name_encryption"`
NoDataEncryption bool `config:"no_data_encryption"`
Password string `config:"password"` Password string `config:"password"`
Password2 string `config:"password2"` Password2 string `config:"password2"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"` ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
@@ -346,6 +362,10 @@ type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ..
// put implements Put or PutStream // put implements Put or PutStream
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) { func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
if f.opt.NoDataEncryption {
return put(ctx, in, f.newObjectInfo(src, nonce{}), options...)
}
// Encrypt the data into wrappedIn // Encrypt the data into wrappedIn
wrappedIn, encrypter, err := f.cipher.encryptData(in) wrappedIn, encrypter, err := f.cipher.encryptData(in)
if err != nil { if err != nil {
@@ -384,13 +404,16 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to read destination hash") return nil, errors.Wrap(err, "failed to read destination hash")
} }
if srcHash != "" && dstHash != "" && srcHash != dstHash { if srcHash != "" && dstHash != "" {
// remove object if srcHash != dstHash {
err = o.Remove(ctx) // remove object
if err != nil { err = o.Remove(ctx)
fs.Errorf(o, "Failed to remove corrupted object: %v", err) if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash)
} }
return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash) fs.Debugf(src, "%v = %s OK", ht, srcHash)
} }
} }
@@ -617,6 +640,10 @@ func (f *Fs) computeHashWithNonce(ctx context.Context, nonce nonce, src fs.Objec
// //
// Note that we break lots of encapsulation in this function. // Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) { func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
if f.opt.NoDataEncryption {
return src.Hash(ctx, hashType)
}
// Read the nonce - opening the file is sufficient to read the nonce in // Read the nonce - opening the file is sufficient to read the nonce in
// use a limited read so we only read the header // use a limited read so we only read the header
in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1}) in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1})
@@ -822,9 +849,13 @@ func (o *Object) Remote() string {
// Size returns the size of the file // Size returns the size of the file
func (o *Object) Size() int64 { func (o *Object) Size() int64 {
size, err := o.f.cipher.DecryptedSize(o.Object.Size()) size := o.Object.Size()
if err != nil { if !o.f.opt.NoDataEncryption {
fs.Debugf(o, "Bad size for decrypt: %v", err) var err error
size, err = o.f.cipher.DecryptedSize(size)
if err != nil {
fs.Debugf(o, "Bad size for decrypt: %v", err)
}
} }
return size return size
} }
@@ -842,6 +873,10 @@ func (o *Object) UnWrap() fs.Object {
// Open opens the file for read. Call Close() on the returned io.ReadCloser // Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) { func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
if o.f.opt.NoDataEncryption {
return o.Object.Open(ctx, options...)
}
var openOptions []fs.OpenOption var openOptions []fs.OpenOption
var offset, limit int64 = 0, -1 var offset, limit int64 = 0, -1
for _, option := range options { for _, option := range options {

View File

@@ -91,3 +91,26 @@ func TestObfuscate(t *testing.T) {
UnimplementableObjectMethods: []string{"MimeType"}, UnimplementableObjectMethods: []string{"MimeType"},
}) })
} }
// TestNoDataObfuscate runs integration tests against the remote
func TestNoDataObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt4"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
{Name: name, Key: "no_data_encryption", Value: "true"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}

View File

@@ -14,7 +14,6 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"mime" "mime"
"net/http" "net/http"
"path" "path"
@@ -68,8 +67,8 @@ const (
defaultScope = "drive" defaultScope = "drive"
// chunkSize is the size of the chunks created during a resumable upload and should be a power of two. // chunkSize is the size of the chunks created during a resumable upload and should be a power of two.
// 1<<18 is the minimum size supported by the Google uploader, and there is no maximum. // 1<<18 is the minimum size supported by the Google uploader, and there is no maximum.
minChunkSize = 256 * fs.KibiByte minChunkSize = 256 * fs.Kibi
defaultChunkSize = 8 * fs.MebiByte defaultChunkSize = 8 * fs.Mebi
partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks" partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks"
listRGrouping = 50 // number of IDs to search at once when using ListR listRGrouping = 50 // number of IDs to search at once when using ListR
listRInputBuffer = 1000 // size of input buffer when using ListR listRInputBuffer = 1000 // size of input buffer when using ListR
@@ -183,32 +182,64 @@ func init() {
 		Description: "Google Drive",
 		NewFs:       NewFs,
 		CommandHelp: commandHelp,
-		Config: func(ctx context.Context, name string, m configmap.Mapper) {
+		Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
 			// Parse config into Options struct
 			opt := new(Options)
 			err := configstruct.Set(m, opt)
 			if err != nil {
-				fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
-				return
+				return nil, errors.Wrap(err, "couldn't parse config into struct")
 			}
-			// Fill in the scopes
-			driveConfig.Scopes = driveScopes(opt.Scope)
-			// Set the root_folder_id if using drive.appfolder
-			if driveScopesContainsAppFolder(driveConfig.Scopes) {
-				m.Set("root_folder_id", "appDataFolder")
-			}
-			if opt.ServiceAccountFile == "" {
-				err = oauthutil.Config(ctx, "drive", name, m, driveConfig, nil)
-				if err != nil {
-					log.Fatalf("Failed to configure token: %v", err)
-				}
-			}
-			err = configTeamDrive(ctx, opt, m, name)
-			if err != nil {
-				log.Fatalf("Failed to configure Shared Drive: %v", err)
-			}
+			switch config.State {
+			case "":
+				// Fill in the scopes
+				driveConfig.Scopes = driveScopes(opt.Scope)
+				// Set the root_folder_id if using drive.appfolder
+				if driveScopesContainsAppFolder(driveConfig.Scopes) {
+					m.Set("root_folder_id", "appDataFolder")
+				}
+				if opt.ServiceAccountFile == "" && opt.ServiceAccountCredentials == "" {
+					return oauthutil.ConfigOut("teamdrive", &oauthutil.Options{
+						OAuth2Config: driveConfig,
+					})
+				}
+				return fs.ConfigGoto("teamdrive")
+			case "teamdrive":
+				if opt.TeamDriveID == "" {
+					return fs.ConfigConfirm("teamdrive_ok", false, "config_change_team_drive", "Configure this as a Shared Drive (Team Drive)?\n")
+				}
+				return fs.ConfigConfirm("teamdrive_ok", false, "config_change_team_drive", fmt.Sprintf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID))
+			case "teamdrive_ok":
+				if config.Result == "false" {
+					m.Set("team_drive", "")
+					return nil, nil
+				}
+				f, err := newFs(ctx, name, "", m)
+				if err != nil {
+					return nil, errors.Wrap(err, "failed to make Fs to list Shared Drives")
+				}
+				teamDrives, err := f.listTeamDrives(ctx)
+				if err != nil {
+					return nil, err
+				}
+				if len(teamDrives) == 0 {
+					return fs.ConfigError("", "No Shared Drives found in your account")
+				}
+				return fs.ConfigChoose("teamdrive_final", "config_team_drive", "Shared Drive", len(teamDrives), func(i int) (string, string) {
+					teamDrive := teamDrives[i]
+					return teamDrive.Id, teamDrive.Name
+				})
+			case "teamdrive_final":
+				driveID := config.Result
+				m.Set("team_drive", driveID)
+				m.Set("root_folder_id", "")
+				opt.TeamDriveID = driveID
+				opt.RootFolderID = ""
+				return nil, nil
+			}
+			return nil, fmt.Errorf("unknown state %q", config.State)
 		},
 		Options: append(driveOAuthOptions(), []fs.Option{{
 			Name: "scope",
@@ -467,7 +498,7 @@ See: https://github.com/rclone/rclone/issues/3631
Default: false, Default: false,
Help: `Make upload limit errors be fatal Help: `Make upload limit errors be fatal
At the time of writing it is only possible to upload 750GB of data to At the time of writing it is only possible to upload 750 GiB of data to
Google Drive a day (this is an undocumented limit). When this limit is Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop this flag is set it causes these errors to be fatal. These will stop
@@ -484,7 +515,7 @@ See: https://github.com/rclone/rclone/issues/3857
Default: false, Default: false,
Help: `Make download limit errors be fatal Help: `Make download limit errors be fatal
At the time of writing it is only possible to download 10TB of data from At the time of writing it is only possible to download 10 TiB of data from
Google Drive a day (this is an undocumented limit). When this limit is Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop this flag is set it causes these errors to be fatal. These will stop
@@ -522,7 +553,7 @@ If this flag is set then rclone will ignore shortcut files completely.
} { } {
for mimeType, extension := range m { for mimeType, extension := range m {
if err := mime.AddExtensionType(extension, mimeType); err != nil { if err := mime.AddExtensionType(extension, mimeType); err != nil {
log.Fatalf("Failed to register MIME type %q: %v", mimeType, err) fs.Errorf("Failed to register MIME type %q: %v", mimeType, err)
} }
} }
} }
@@ -590,13 +621,13 @@ type Fs struct {
} }
type baseObject struct { type baseObject struct {
fs *Fs // what this object is part of fs *Fs // what this object is part of
remote string // The remote path remote string // The remote path
id string // Drive Id of this object id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type mimeType string // The object MIME type
bytes int64 // size of the object bytes int64 // size of the object
parents int // number of parents parents []string // IDs of the parent directories
} }
type documentObject struct { type documentObject struct {
baseObject baseObject
@@ -641,7 +672,10 @@ func (f *Fs) Features() *fs.Features {
} }
// shouldRetry determines whether a given err rates being retried // shouldRetry determines whether a given err rates being retried
func (f *Fs) shouldRetry(err error) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err == nil { if err == nil {
return false, nil return false, nil
} }
@@ -695,20 +729,20 @@ func containsString(slice []string, s string) bool {
} }
// getFile returns drive.File for the ID passed and fields passed in // getFile returns drive.File for the ID passed and fields passed in
func (f *Fs) getFile(ID string, fields googleapi.Field) (info *drive.File, err error) { func (f *Fs) getFile(ctx context.Context, ID string, fields googleapi.Field) (info *drive.File, err error) {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Get(ID). info, err = f.svc.Files.Get(ID).
Fields(fields). Fields(fields).
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
return info, err return info, err
} }
// getRootID returns the canonical ID for the "root" ID // getRootID returns the canonical ID for the "root" ID
func (f *Fs) getRootID() (string, error) { func (f *Fs) getRootID(ctx context.Context) (string, error) {
info, err := f.getFile("root", "id") info, err := f.getFile(ctx, "root", "id")
if err != nil { if err != nil {
return "", errors.Wrap(err, "couldn't find root directory ID") return "", errors.Wrap(err, "couldn't find root directory ID")
} }
@@ -814,7 +848,7 @@ OUTER:
var files *drive.FileList var files *drive.FileList
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do() files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return false, errors.Wrap(err, "couldn't list directory") return false, errors.Wrap(err, "couldn't list directory")
@@ -837,7 +871,7 @@ OUTER:
if filesOnly && item.ShortcutDetails.TargetMimeType == driveFolderType { if filesOnly && item.ShortcutDetails.TargetMimeType == driveFolderType {
continue continue
} }
item, err = f.resolveShortcut(item) item, err = f.resolveShortcut(ctx, item)
if err != nil { if err != nil {
return false, errors.Wrap(err, "list") return false, errors.Wrap(err, "list")
} }
@@ -855,7 +889,7 @@ OUTER:
if !found { if !found {
continue continue
} }
_, exportName, _, _ := f.findExportFormat(item) _, exportName, _, _ := f.findExportFormat(ctx, item)
if exportName == "" || exportName != title { if exportName == "" || exportName != title {
continue continue
} }
@@ -946,48 +980,6 @@ func parseExtensions(extensionsIn ...string) (extensions, mimeTypes []string, er
return return
} }
// Figure out if the user wants to use a team drive
func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name string) error {
ci := fs.GetConfig(ctx)
// Stop if we are running non-interactive config
if ci.AutoConfirm {
return nil
}
if opt.TeamDriveID == "" {
fmt.Printf("Configure this as a Shared Drive (Team Drive)?\n")
} else {
fmt.Printf("Change current Shared Drive (Team Drive) ID %q?\n", opt.TeamDriveID)
}
if !config.Confirm(false) {
return nil
}
f, err := newFs(ctx, name, "", m)
if err != nil {
return errors.Wrap(err, "failed to make Fs to list Shared Drives")
}
fmt.Printf("Fetching Shared Drive list...\n")
teamDrives, err := f.listTeamDrives(ctx)
if err != nil {
return err
}
if len(teamDrives) == 0 {
fmt.Printf("No Shared Drives found in your account")
return nil
}
var driveIDs, driveNames []string
for _, teamDrive := range teamDrives {
driveIDs = append(driveIDs, teamDrive.Id)
driveNames = append(driveNames, teamDrive.Name)
}
driveID := config.Choose("Enter a Shared Drive ID", driveIDs, driveNames, true)
m.Set("team_drive", driveID)
m.Set("root_folder_id", "")
opt.TeamDriveID = driveID
opt.RootFolderID = ""
return nil
}
// getClient makes an http client according to the options // getClient makes an http client according to the options
func getClient(ctx context.Context, opt *Options) *http.Client { func getClient(ctx context.Context, opt *Options) *http.Client {
t := fshttp.NewTransportCustom(ctx, func(t *http.Transport) { t := fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
@@ -1155,7 +1147,7 @@ func NewFs(ctx context.Context, name, path string, m configmap.Mapper) (fs.Fs, e
f.rootFolderID = f.opt.TeamDriveID f.rootFolderID = f.opt.TeamDriveID
} else { } else {
// otherwise look up the actual root ID // otherwise look up the actual root ID
rootID, err := f.getRootID() rootID, err := f.getRootID(ctx)
if err != nil { if err != nil {
if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 { if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 {
// 404 means that this scope does not have permission to get the // 404 means that this scope does not have permission to get the
@@ -1166,7 +1158,7 @@ func NewFs(ctx context.Context, name, path string, m configmap.Mapper) (fs.Fs, e
} }
} }
f.rootFolderID = rootID f.rootFolderID = rootID
fs.Debugf(f, "root_folder_id = %q - save this in the config to speed up startup", rootID) fs.Debugf(f, "'root_folder_id = %s' - save this in the config to speed up startup", rootID)
} }
f.dirCache = dircache.New(f.root, f.rootFolderID, f) f.dirCache = dircache.New(f.root, f.rootFolderID, f)
@@ -1236,7 +1228,7 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
modifiedDate: modifiedDate, modifiedDate: modifiedDate,
mimeType: info.MimeType, mimeType: info.MimeType,
bytes: size, bytes: size,
parents: len(info.Parents), parents: info.Parents,
} }
} }
@@ -1328,26 +1320,26 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
// newObjectWithInfo creates an fs.Object for any drive.File // newObjectWithInfo creates an fs.Object for any drive.File
// //
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil). // When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithInfo(remote string, info *drive.File) (fs.Object, error) { func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.File) (fs.Object, error) {
// If item has MD5 sum or a length it is a file stored on drive // If item has MD5 sum or a length it is a file stored on drive
if info.Md5Checksum != "" || info.Size > 0 { if info.Md5Checksum != "" || info.Size > 0 {
return f.newRegularObject(remote, info), nil return f.newRegularObject(remote, info), nil
} }
extension, exportName, exportMimeType, isDocument := f.findExportFormat(info) extension, exportName, exportMimeType, isDocument := f.findExportFormat(ctx, info)
return f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument) return f.newObjectWithExportInfo(ctx, remote, info, extension, exportName, exportMimeType, isDocument)
} }
// newObjectWithExportInfo creates an fs.Object for any drive.File and the result of findExportFormat // newObjectWithExportInfo creates an fs.Object for any drive.File and the result of findExportFormat
// //
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil). // When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo( func (f *Fs) newObjectWithExportInfo(
remote string, info *drive.File, ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) { extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if // Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item // we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing. // will have been resolved so this will do nothing.
info, err = f.resolveShortcut(info) info, err = f.resolveShortcut(ctx, info)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "new object") return nil, errors.Wrap(err, "new object")
} }
@@ -1395,7 +1387,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
} }
remote = remote[:len(remote)-len(extension)] remote = remote[:len(remote)-len(extension)]
obj, err := f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument) obj, err := f.newObjectWithExportInfo(ctx, remote, info, extension, exportName, exportMimeType, isDocument)
switch { switch {
case err != nil: case err != nil:
return nil, err return nil, err
@@ -1412,7 +1404,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
pathID = actualID(pathID) pathID = actualID(pathID)
found, err = f.list(ctx, []string{pathID}, leaf, true, false, f.opt.TrashedOnly, false, func(item *drive.File) bool { found, err = f.list(ctx, []string{pathID}, leaf, true, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
if !f.opt.SkipGdocs { if !f.opt.SkipGdocs {
_, exportName, _, isDocument := f.findExportFormat(item) _, exportName, _, isDocument := f.findExportFormat(ctx, item)
if exportName == leaf { if exportName == leaf {
pathIDOut = item.Id pathIDOut = item.Id
return true return true
@@ -1447,8 +1439,8 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
info, err = f.svc.Files.Create(createInfo). info, err = f.svc.Files.Create(createInfo).
Fields("id"). Fields("id").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -1483,15 +1475,15 @@ func linkTemplate(mt string) *template.Template {
}) })
return _linkTemplates[mt] return _linkTemplates[mt]
} }
func (f *Fs) fetchFormats() { func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() { fetchFormatsOnce.Do(func() {
var about *drive.About var about *drive.About
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get(). about, err = f.svc.About.Get().
Fields("exportFormats,importFormats"). Fields("exportFormats,importFormats").
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
fs.Errorf(f, "Failed to get Drive exportFormats and importFormats: %v", err) fs.Errorf(f, "Failed to get Drive exportFormats and importFormats: %v", err)
@@ -1508,8 +1500,8 @@ func (f *Fs) fetchFormats() {
// if necessary. // if necessary.
// //
// if the fetch fails then it will not export any drive formats // if the fetch fails then it will not export any drive formats
func (f *Fs) exportFormats() map[string][]string { func (f *Fs) exportFormats(ctx context.Context) map[string][]string {
f.fetchFormats() f.fetchFormats(ctx)
return _exportFormats return _exportFormats
} }
@@ -1517,8 +1509,8 @@ func (f *Fs) exportFormats() map[string][]string {
// if necessary. // if necessary.
// //
// if the fetch fails then it will not import any drive formats // if the fetch fails then it will not import any drive formats
func (f *Fs) importFormats() map[string][]string { func (f *Fs) importFormats(ctx context.Context) map[string][]string {
f.fetchFormats() f.fetchFormats(ctx)
return _importFormats return _importFormats
} }
@@ -1527,9 +1519,9 @@ func (f *Fs) importFormats() map[string][]string {
// //
// Look through the exportExtensions and find the first format that can be // Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false) // converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(itemMimeType string) ( func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool) { extension, mimeType string, isDocument bool) {
exportMimeTypes, isDocument := f.exportFormats()[itemMimeType] exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument { if isDocument {
for _, _extension := range f.exportExtensions { for _, _extension := range f.exportExtensions {
_mimeType := mime.TypeByExtension(_extension) _mimeType := mime.TypeByExtension(_extension)
@@ -1556,8 +1548,8 @@ func (f *Fs) findExportFormatByMimeType(itemMimeType string) (
// //
// Look through the exportExtensions and find the first format that can be // Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", "", false) // converted. If none found then return ("", "", "", false)
func (f *Fs) findExportFormat(item *drive.File) (extension, filename, mimeType string, isDocument bool) { func (f *Fs) findExportFormat(ctx context.Context, item *drive.File) (extension, filename, mimeType string, isDocument bool) {
extension, mimeType, isDocument = f.findExportFormatByMimeType(item.MimeType) extension, mimeType, isDocument = f.findExportFormatByMimeType(ctx, item.MimeType)
if extension != "" { if extension != "" {
filename = item.Name + extension filename = item.Name + extension
} }
@@ -1569,9 +1561,9 @@ func (f *Fs) findExportFormat(item *drive.File) (extension, filename, mimeType s
// MIME type is returned // MIME type is returned
// //
// When no match is found "" is returned. // When no match is found "" is returned.
func (f *Fs) findImportFormat(mimeType string) string { func (f *Fs) findImportFormat(ctx context.Context, mimeType string) string {
mimeType = fixMimeType(mimeType) mimeType = fixMimeType(mimeType)
ifs := f.importFormats() ifs := f.importFormats(ctx)
for _, mt := range f.importMimeTypes { for _, mt := range f.importMimeTypes {
if mt == mimeType { if mt == mimeType {
importMimeTypes := ifs[mimeType] importMimeTypes := ifs[mimeType]
@@ -1604,7 +1596,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var iErr error var iErr error
_, err = f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool { _, err = f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
entry, err := f.itemToDirEntry(path.Join(dir, item.Name), item) entry, err := f.itemToDirEntry(ctx, path.Join(dir, item.Name), item)
if err != nil { if err != nil {
iErr = err iErr = err
return true return true
@@ -1717,7 +1709,7 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE
} }
} }
remote := path.Join(paths[i], item.Name) remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item) entry, err := f.itemToDirEntry(ctx, remote, item)
if err != nil { if err != nil {
iErr = err iErr = err
return true return true
@@ -1982,7 +1974,7 @@ func isShortcut(item *drive.File) bool {
// Note that we assume shortcuts can't point to shortcuts. Google // Note that we assume shortcuts can't point to shortcuts. Google
// drive web interface doesn't offer the option to create a shortcut // drive web interface doesn't offer the option to create a shortcut
// to a shortcut. The documentation is silent on the issue. // to a shortcut. The documentation is silent on the issue.
func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error) { func (f *Fs) resolveShortcut(ctx context.Context, item *drive.File) (newItem *drive.File, err error) {
if f.opt.SkipShortcuts || item.MimeType != shortcutMimeType { if f.opt.SkipShortcuts || item.MimeType != shortcutMimeType {
return item, nil return item, nil
} }
@@ -1990,7 +1982,7 @@ func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error)
fs.Errorf(nil, "Expecting shortcutDetails in %v", item) fs.Errorf(nil, "Expecting shortcutDetails in %v", item)
return item, nil return item, nil
} }
newItem, err = f.getFile(item.ShortcutDetails.TargetId, f.fileFields) newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.fileFields)
if err != nil { if err != nil {
if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 { if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 {
// 404 means dangling shortcut, so just return the shortcut with the mime type mangled // 404 means dangling shortcut, so just return the shortcut with the mime type mangled
@@ -2012,18 +2004,21 @@ func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error)
// itemToDirEntry converts a drive.File to an fs.DirEntry. // itemToDirEntry converts a drive.File to an fs.DirEntry.
// When the drive.File cannot be represented as an fs.DirEntry // When the drive.File cannot be represented as an fs.DirEntry
// (nil, nil) is returned. // (nil, nil) is returned.
func (f *Fs) itemToDirEntry(remote string, item *drive.File) (entry fs.DirEntry, err error) { func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *drive.File) (entry fs.DirEntry, err error) {
switch { switch {
case item.MimeType == driveFolderType: case item.MimeType == driveFolderType:
// cache the directory ID for later lookups // cache the directory ID for later lookups
f.dirCache.Put(remote, item.Id) f.dirCache.Put(remote, item.Id)
when, _ := time.Parse(timeFormatIn, item.ModifiedTime) when, _ := time.Parse(timeFormatIn, item.ModifiedTime)
d := fs.NewDir(remote, when).SetID(item.Id) d := fs.NewDir(remote, when).SetID(item.Id)
if len(item.Parents) > 0 {
d.SetParentID(item.Parents[0])
}
return d, nil return d, nil
case f.opt.AuthOwnerOnly && !isAuthOwned(item): case f.opt.AuthOwnerOnly && !isAuthOwned(item):
// ignore object // ignore object
default: default:
entry, err = f.newObjectWithInfo(remote, item) entry, err = f.newObjectWithInfo(ctx, remote, item)
if err == fs.ErrorObjectNotFound { if err == fs.ErrorObjectNotFound {
return nil, nil return nil, nil
} }
@@ -2090,12 +2085,12 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
importMimeType := "" importMimeType := ""
if f.importMimeTypes != nil && !f.opt.SkipGdocs { if f.importMimeTypes != nil && !f.opt.SkipGdocs {
importMimeType = f.findImportFormat(srcMimeType) importMimeType = f.findImportFormat(ctx, srcMimeType)
if isInternalMimeType(importMimeType) { if isInternalMimeType(importMimeType) {
remote = remote[:len(remote)-len(srcExt)] remote = remote[:len(remote)-len(srcExt)]
exportExt, _, _ = f.findExportFormatByMimeType(importMimeType) exportExt, _, _ = f.findExportFormatByMimeType(ctx, importMimeType)
if exportExt == "" { if exportExt == "" {
return nil, errors.Errorf("No export format found for %q", importMimeType) return nil, errors.Errorf("No export format found for %q", importMimeType)
} }
@@ -2125,8 +2120,8 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever). KeepRevisionForever(f.opt.KeepRevisionForever).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -2138,7 +2133,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err return nil, err
} }
} }
return f.newObjectWithInfo(remote, info) return f.newObjectWithInfo(ctx, remote, info)
} }
// MergeDirs merges the contents of all the directories passed // MergeDirs merges the contents of all the directories passed
@@ -2180,8 +2175,8 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
AddParents(dstDir.ID()). AddParents(dstDir.ID()).
Fields(""). Fields("").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir) return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir)
@@ -2214,14 +2209,14 @@ func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
_, err = f.svc.Files.Update(id, &info). _, err = f.svc.Files.Update(id, &info).
Fields(""). Fields("").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
} else { } else {
err = f.svc.Files.Delete(id). err = f.svc.Files.Delete(id).
Fields(""). Fields("").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
} }
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
} }
@@ -2334,11 +2329,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if isDoc { if isDoc {
// preserve the description on copy for docs // preserve the description on copy for docs
info, err := f.getFile(actualID(srcObj.id), "description") info, err := f.getFile(ctx, actualID(srcObj.id), "description")
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to read description for Google Doc") fs.Errorf(srcObj, "Failed to read description for Google Doc: %v", err)
} else {
createInfo.Description = info.Description
} }
createInfo.Description = info.Description
} else { } else {
// don't overwrite the description on copy for files // don't overwrite the description on copy for files
// this should work for docs but it doesn't - it is probably a bug in Google Drive // this should work for docs but it doesn't - it is probably a bug in Google Drive
@@ -2354,13 +2350,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever). KeepRevisionForever(f.opt.KeepRevisionForever).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
} }
newObject, err := f.newObjectWithInfo(remote, info) newObject, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -2454,7 +2450,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
err := f.svc.Files.EmptyTrash().Context(ctx).Do() err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
@@ -2472,7 +2468,7 @@ func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
var td *drive.Drive var td *drive.Drive
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do() td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to get Shared Drive info") return errors.Wrap(err, "failed to get Shared Drive info")
@@ -2495,7 +2491,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do() about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get Drive storageQuota") return nil, errors.Wrap(err, "failed to get Drive storageQuota")
@@ -2567,14 +2563,14 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
AddParents(dstParents). AddParents(dstParents).
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
} }
return f.newObjectWithInfo(remote, info) return f.newObjectWithInfo(ctx, remote, info)
} }
// PublicLink adds a "readable by anyone with link" permission on the given file or folder. // PublicLink adds a "readable by anyone with link" permission on the given file or folder.
@@ -2604,8 +2600,8 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
_, err = f.svc.Permissions.Create(id, permission). _, err = f.svc.Permissions.Create(id, permission).
Fields(""). Fields("").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -2647,8 +2643,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
AddParents(dstDirectoryID). AddParents(dstDirectoryID).
Fields(""). Fields("").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -2666,7 +2662,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) { func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
go func() { go func() {
// get the StartPageToken early so all changes from now on get processed // get the StartPageToken early so all changes from now on get processed
startPageToken, err := f.changeNotifyStartPageToken() startPageToken, err := f.changeNotifyStartPageToken(ctx)
if err != nil { if err != nil {
fs.Infof(f, "Failed to get StartPageToken: %s", err) fs.Infof(f, "Failed to get StartPageToken: %s", err)
} }
@@ -2691,7 +2687,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
} }
case <-tickerC: case <-tickerC:
if startPageToken == "" { if startPageToken == "" {
startPageToken, err = f.changeNotifyStartPageToken() startPageToken, err = f.changeNotifyStartPageToken(ctx)
if err != nil { if err != nil {
fs.Infof(f, "Failed to get StartPageToken: %s", err) fs.Infof(f, "Failed to get StartPageToken: %s", err)
continue continue
@@ -2706,15 +2702,15 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
} }
}() }()
} }
func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) { func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
changes := f.svc.Changes.GetStartPageToken().SupportsAllDrives(true) changes := f.svc.Changes.GetStartPageToken().SupportsAllDrives(true)
if f.isTeamDrive { if f.isTeamDrive {
changes.DriveId(f.opt.TeamDriveID) changes.DriveId(f.opt.TeamDriveID)
} }
startPageToken, err = changes.Do() startPageToken, err = changes.Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return return
@@ -2743,7 +2739,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
changesCall.Spaces("appDataFolder") changesCall.Spaces("appDataFolder")
} }
changeList, err = changesCall.Context(ctx).Do() changeList, err = changesCall.Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return return
@@ -2939,8 +2935,8 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
KeepRevisionForever(dstFs.opt.KeepRevisionForever). KeepRevisionForever(dstFs.opt.KeepRevisionForever).
Do() Context(ctx).Do()
return dstFs.shouldRetry(err) return dstFs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "shortcut creation failed") return nil, errors.Wrap(err, "shortcut creation failed")
@@ -2948,24 +2944,24 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
if isDir { if isDir {
return nil, nil return nil, nil
} }
return dstFs.newObjectWithInfo(dstPath, info) return dstFs.newObjectWithInfo(ctx, dstPath, info)
} }
// List all team drives // List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) { func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err error) {
drives = []*drive.TeamDrive{} drives = []*drive.Drive{}
listTeamDrives := f.svc.Teamdrives.List().PageSize(100) listTeamDrives := f.svc.Drives.List().PageSize(100)
var defaultFs Fs // default Fs with default Options var defaultFs Fs // default Fs with default Options
for { for {
var teamDrives *drive.TeamDriveList var teamDrives *drive.DriveList
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do() teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(err) return defaultFs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return drives, errors.Wrap(err, "listing Team Drives failed") return drives, errors.Wrap(err, "listing Team Drives failed")
} }
drives = append(drives, teamDrives.TeamDrives...) drives = append(drives, teamDrives.Drives...)
if teamDrives.NextPageToken == "" { if teamDrives.NextPageToken == "" {
break break
} }
@@ -3002,8 +2998,8 @@ func (f *Fs) unTrash(ctx context.Context, dir string, directoryID string, recurs
_, err := f.svc.Files.Update(item.Id, &update). _, err := f.svc.Files.Update(item.Id, &update).
SupportsAllDrives(true). SupportsAllDrives(true).
Fields("trashed"). Fields("trashed").
Do() Context(ctx).Do()
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
err = errors.Wrap(err, "failed to restore") err = errors.Wrap(err, "failed to restore")
@@ -3045,7 +3041,7 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
// copy file with id to dest // copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) { func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
info, err := f.getFile(id, f.fileFields) info, err := f.getFile(ctx, id, f.fileFields)
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't find id") return errors.Wrap(err, "couldn't find id")
} }
@@ -3053,7 +3049,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return errors.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest) return errors.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest)
} }
info.Name = f.opt.Enc.ToStandardName(info.Name) info.Name = f.opt.Enc.ToStandardName(info.Name)
o, err := f.newObjectWithInfo(info.Name, info) o, err := f.newObjectWithInfo(ctx, info.Name, info)
if err != nil { if err != nil {
return err return err
} }
@@ -3062,7 +3058,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return err return err
} }
if destLeaf == "" { if destLeaf == "" {
destLeaf = info.Name destLeaf = path.Base(o.Remote())
} }
if destDir == "" { if destDir == "" {
destDir = "." destDir = "."
@@ -3354,7 +3350,7 @@ func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
found, err := f.list(ctx, []string{directoryID}, leaf, false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool { found, err := f.list(ctx, []string{directoryID}, leaf, false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
if !f.opt.SkipGdocs { if !f.opt.SkipGdocs {
extension, exportName, exportMimeType, isDocument = f.findExportFormat(item) extension, exportName, exportMimeType, isDocument = f.findExportFormat(ctx, item)
if exportName == leaf { if exportName == leaf {
info = item info = item
return true return true
@@ -3405,8 +3401,8 @@ func (o *baseObject) SetModTime(ctx context.Context, modTime time.Time) error {
info, err = o.fs.svc.Files.Update(actualID(o.id), updateInfo). info, err = o.fs.svc.Files.Update(actualID(o.id), updateInfo).
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -3444,7 +3440,7 @@ func (o *baseObject) httpResponse(ctx context.Context, url, method string, optio
_ = res.Body.Close() // ignore error _ = res.Body.Close() // ignore error
} }
} }
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return req, nil, err return req, nil, err
@@ -3536,8 +3532,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
v2File, err = o.fs.v2Svc.Files.Get(actualID(o.id)). v2File, err = o.fs.v2Svc.Files.Get(actualID(o.id)).
Fields("downloadUrl"). Fields("downloadUrl").
SupportsAllDrives(true). SupportsAllDrives(true).
Do() Context(ctx).Do()
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
fs.Debugf(o, "Using v2 download: %v", v2File.DownloadUrl) fs.Debugf(o, "Using v2 download: %v", v2File.DownloadUrl)
@@ -3617,8 +3613,8 @@ func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadM
Fields(partialFields). Fields(partialFields).
SupportsAllDrives(true). SupportsAllDrives(true).
KeepRevisionForever(o.fs.opt.KeepRevisionForever). KeepRevisionForever(o.fs.opt.KeepRevisionForever).
Do() Context(ctx).Do()
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
return return
} }
@@ -3661,7 +3657,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil { if err != nil {
return err return err
} }
newO, err := o.fs.newObjectWithInfo(src.Remote(), info) newO, err := o.fs.newObjectWithInfo(ctx, src.Remote(), info)
if err != nil { if err != nil {
return err return err
} }
@@ -3685,7 +3681,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
if o.fs.importMimeTypes == nil || o.fs.opt.SkipGdocs { if o.fs.importMimeTypes == nil || o.fs.opt.SkipGdocs {
return errors.Errorf("can't update google document type without --drive-import-formats") return errors.Errorf("can't update google document type without --drive-import-formats")
} }
importMimeType = o.fs.findImportFormat(updateInfo.MimeType) importMimeType = o.fs.findImportFormat(ctx, updateInfo.MimeType)
if importMimeType == "" { if importMimeType == "" {
return errors.Errorf("no import format found for %q", srcMimeType) return errors.Errorf("no import format found for %q", srcMimeType)
} }
@@ -3702,7 +3698,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
remote := src.Remote() remote := src.Remote()
remote = remote[:len(remote)-o.extLen] remote = remote[:len(remote)-o.extLen]
newO, err := o.fs.newObjectWithInfo(remote, info) newO, err := o.fs.newObjectWithInfo(ctx, remote, info)
if err != nil { if err != nil {
return err return err
} }
@@ -3722,7 +3718,7 @@ func (o *linkObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo
// Remove an object // Remove an object
func (o *baseObject) Remove(ctx context.Context) error { func (o *baseObject) Remove(ctx context.Context) error {
if o.parents > 1 { if len(o.parents) > 1 {
return errors.New("can't delete safely - has multiple parents") return errors.New("can't delete safely - has multiple parents")
} }
return o.fs.delete(ctx, shortcutID(o.id), o.fs.opt.UseTrash) return o.fs.delete(ctx, shortcutID(o.id), o.fs.opt.UseTrash)
@@ -3738,6 +3734,14 @@ func (o *baseObject) ID() string {
return o.id return o.id
} }
// ParentID returns the ID of the Object parent if known, or "" if not
func (o *baseObject) ParentID() string {
if len(o.parents) > 0 {
return o.parents[0]
}
return ""
}
func (o *documentObject) ext() string { func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:] return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
} }
@@ -3798,10 +3802,13 @@ var (
_ fs.Object = (*Object)(nil) _ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil) _ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil) _ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
_ fs.Object = (*documentObject)(nil) _ fs.Object = (*documentObject)(nil)
_ fs.MimeTyper = (*documentObject)(nil) _ fs.MimeTyper = (*documentObject)(nil)
_ fs.IDer = (*documentObject)(nil) _ fs.IDer = (*documentObject)(nil)
_ fs.ParentIDer = (*documentObject)(nil)
_ fs.Object = (*linkObject)(nil) _ fs.Object = (*linkObject)(nil)
_ fs.MimeTyper = (*linkObject)(nil) _ fs.MimeTyper = (*linkObject)(nil)
_ fs.IDer = (*linkObject)(nil) _ fs.IDer = (*linkObject)(nil)
_ fs.ParentIDer = (*linkObject)(nil)
) )


@@ -111,6 +111,7 @@ func TestInternalParseExtensions(t *testing.T) {
} }
func TestInternalFindExportFormat(t *testing.T) { func TestInternalFindExportFormat(t *testing.T) {
ctx := context.Background()
item := &drive.File{ item := &drive.File{
Name: "file", Name: "file",
MimeType: "application/vnd.google-apps.document", MimeType: "application/vnd.google-apps.document",
@@ -128,7 +129,7 @@ func TestInternalFindExportFormat(t *testing.T) {
} { } {
f := new(Fs) f := new(Fs)
f.exportExtensions = test.extensions f.exportExtensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item) gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(ctx, item)
assert.Equal(t, test.wantExtension, gotExtension) assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" { if test.wantExtension != "" {
assert.Equal(t, item.Name+gotExtension, gotFilename) assert.Equal(t, item.Name+gotExtension, gotFilename)


@@ -94,7 +94,7 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
defer googleapi.CloseBody(res) defer googleapi.CloseBody(res)
err = googleapi.CheckResponse(res) err = googleapi.CheckResponse(res)
} }
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -202,7 +202,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
err = rx.f.pacer.Call(func() (bool, error) { err = rx.f.pacer.Call(func() (bool, error) {
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize) fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize) StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := rx.f.shouldRetry(err) again, err := rx.f.shouldRetry(ctx, err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK { if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false again = false
err = nil err = nil

350
backend/dropbox/batcher.go Normal file
View File

@@ -0,0 +1,350 @@
// This file contains the implementation of the sync batcher for uploads
//
// Dropbox rules say you can start as many batches as you want, but
// you may only have one batch being committed and must wait for the
// batch to be finished before committing another.
package dropbox
import (
"context"
"fmt"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/async"
"github.com/dropbox/dropbox-sdk-go-unofficial/dropbox/files"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
)
const (
maxBatchSize = 1000 // max size the batch can be
defaultTimeoutSync = 500 * time.Millisecond // kick off the batch if nothing added for this long (sync)
defaultTimeoutAsync = 10 * time.Second // kick off the batch if nothing added for this long (async)
defaultBatchSizeAsync = 100 // default batch size if async
)
// batcher holds info about the current items waiting for upload
type batcher struct {
f *Fs // Fs this batch is part of
mode string // configured batch mode
size int // maximum size for batch
timeout time.Duration // idle timeout for batch
async bool // whether we are using async batching
in chan batcherRequest // incoming items to batch
closed chan struct{} // close to indicate batcher shut down
atexit atexit.FnHandle // atexit handle
shutOnce sync.Once // make sure we shutdown once only
wg sync.WaitGroup // wait for shutdown
}
// batcherRequest holds an incoming request with a place for a reply
type batcherRequest struct {
commitInfo *files.UploadSessionFinishArg
result chan<- batcherResponse
}
// Return true if batcherRequest is the quit request
func (br *batcherRequest) isQuit() bool {
return br.commitInfo == nil
}
// Send this to get the engine to quit
var quitRequest = batcherRequest{}
// batcherResponse holds a response to be delivered to clients waiting
// for a batch to complete.
type batcherResponse struct {
err error
entry *files.FileMetadata
}
// newBatcher creates a new batcher structure
func newBatcher(ctx context.Context, f *Fs, mode string, size int, timeout time.Duration) (*batcher, error) {
// fs.Debugf(f, "Creating batcher with mode %q, size %d, timeout %v", mode, size, timeout)
if size > maxBatchSize || size < 0 {
return nil, errors.Errorf("dropbox: batch size must be < %d and >= 0 - it is currently %d", maxBatchSize, size)
}
async := false
switch mode {
case "sync":
if size <= 0 {
ci := fs.GetConfig(ctx)
size = ci.Transfers
}
if timeout <= 0 {
timeout = defaultTimeoutSync
}
case "async":
if size <= 0 {
size = defaultBatchSizeAsync
}
if timeout <= 0 {
timeout = defaultTimeoutAsync
}
async = true
case "off":
size = 0
default:
return nil, errors.Errorf("dropbox: batch mode must be sync|async|off not %q", mode)
}
b := &batcher{
f: f,
mode: mode,
size: size,
timeout: timeout,
async: async,
in: make(chan batcherRequest, size),
closed: make(chan struct{}),
}
if b.Batching() {
b.atexit = atexit.Register(b.Shutdown)
b.wg.Add(1)
go b.commitLoop(context.Background())
}
return b, nil
}
// Batching returns true if batching is active
func (b *batcher) Batching() bool {
return b.size > 0
}
// finishBatch commits the batch, returning a batch status to poll or maybe complete
func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (batchStatus *files.UploadSessionFinishBatchLaunch, err error) {
var arg = &files.UploadSessionFinishBatchArg{
Entries: items,
}
err = b.f.pacer.Call(func() (bool, error) {
batchStatus, err = b.f.srv.UploadSessionFinishBatch(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
}
// after the first chunk is uploaded, we retry everything
return err != nil, err
})
if err != nil {
return nil, errors.Wrap(err, "batch commit failed")
}
return batchStatus, nil
}
// finishBatchJobStatus waits for the batch to complete returning completed entries
func (b *batcher) finishBatchJobStatus(ctx context.Context, launchBatchStatus *files.UploadSessionFinishBatchLaunch) (complete *files.UploadSessionFinishBatchResult, err error) {
if launchBatchStatus.AsyncJobId == "" {
return nil, errors.New("wait for batch completion: empty job ID")
}
var batchStatus *files.UploadSessionFinishBatchJobStatus
sleepTime := 100 * time.Millisecond
const maxTries = 120
for try := 1; try <= maxTries; try++ {
err = b.f.pacer.Call(func() (bool, error) {
batchStatus, err = b.f.srv.UploadSessionFinishBatchCheck(&async.PollArg{
AsyncJobId: launchBatchStatus.AsyncJobId,
})
return shouldRetry(ctx, err)
})
if err != nil {
fs.Debugf(b.f, "Wait for batch: sleeping for %v after error: %v: try %d/%d", sleepTime, err, try, maxTries)
} else {
if batchStatus.Tag == "complete" {
return batchStatus.Complete, nil
}
fs.Debugf(b.f, "Wait for batch: sleeping for %v after status: %q: try %d/%d", sleepTime, batchStatus.Tag, try, maxTries)
}
time.Sleep(sleepTime)
sleepTime *= 2
if sleepTime > time.Second {
sleepTime = time.Second
}
}
if err == nil {
err = errors.New("batch didn't complete")
}
return nil, errors.Wrapf(err, "wait for batch failed after %d tries", maxTries)
}
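With the schedule above the poll sleeps 100 ms, 200 ms, 400 ms and 800 ms, then caps at one second per try, so the 120 tries give up after roughly two minutes of waiting for a batch to finish.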
// commit a batch
func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []chan<- batcherResponse) (err error) {
// If commit fails then signal clients if sync
var signalled = b.async
defer func() {
if err != nil && signalled {
// Signal to clients that there was an error
for _, result := range results {
result <- batcherResponse{err: err}
}
}
}()
desc := fmt.Sprintf("%s batch length %d starting with: %s", b.mode, len(items), items[0].Commit.Path)
fs.Debugf(b.f, "Committing %s", desc)
// finalise the batch getting either a result or a job id to poll
batchStatus, err := b.finishBatch(ctx, items)
if err != nil {
return err
}
// check whether batch is complete
var complete *files.UploadSessionFinishBatchResult
switch batchStatus.Tag {
case "async_job_id":
// wait for batch to complete
complete, err = b.finishBatchJobStatus(ctx, batchStatus)
if err != nil {
return err
}
case "complete":
complete = batchStatus.Complete
default:
return errors.Errorf("batch returned unknown status %q", batchStatus.Tag)
}
// Check we got the right number of entries
entries := complete.Entries
if len(entries) != len(results) {
return errors.Errorf("expecting %d items in batch but got %d", len(results), len(entries))
}
// Report results to clients
var (
errorTag = ""
errorCount = 0
)
for i := range results {
item := entries[i]
resp := batcherResponse{}
if item.Tag == "success" {
resp.entry = item.Success
} else {
errorCount++
errorTag = item.Tag
if item.Failure != nil {
errorTag = item.Failure.Tag
if item.Failure.LookupFailed != nil {
errorTag += "/" + item.Failure.LookupFailed.Tag
}
if item.Failure.Path != nil {
errorTag += "/" + item.Failure.Path.Tag
}
if item.Failure.PropertiesError != nil {
errorTag += "/" + item.Failure.PropertiesError.Tag
}
}
resp.err = errors.Errorf("batch upload failed: %s", errorTag)
}
if !b.async {
results[i] <- resp
}
}
// Mark as signalled so the deferred handler does not report errors to clients from now on
signalled = true
// Report an error if any failed in the batch
if errorTag != "" {
return errors.Errorf("batch had %d errors: last error: %s", errorCount, errorTag)
}
fs.Debugf(b.f, "Committed %s", desc)
return nil
}
// commitLoop runs the commit engine in the background
func (b *batcher) commitLoop(ctx context.Context) {
var (
items []*files.UploadSessionFinishArg // current batch of uncommitted files
results []chan<- batcherResponse // current batch of clients awaiting results
idleTimer = time.NewTimer(b.timeout)
commit = func() {
err := b.commitBatch(ctx, items, results)
if err != nil {
fs.Errorf(b.f, "%s batch commit: failed to commit batch length %d: %v", b.mode, len(items), err)
}
items, results = nil, nil
}
)
defer b.wg.Done()
defer idleTimer.Stop()
idleTimer.Stop()
outer:
for {
select {
case req := <-b.in:
if req.isQuit() {
break outer
}
items = append(items, req.commitInfo)
results = append(results, req.result)
idleTimer.Stop()
if len(items) >= b.size {
commit()
} else {
idleTimer.Reset(b.timeout)
}
case <-idleTimer.C:
if len(items) > 0 {
fs.Debugf(b.f, "Batch idle for %v so committing", b.timeout)
commit()
}
}
}
// commit any remaining items
if len(items) > 0 {
commit()
}
}
// Shutdown finishes any pending batches then shuts everything down
//
// Can be called from atexit handler
func (b *batcher) Shutdown() {
b.shutOnce.Do(func() {
atexit.Unregister(b.atexit)
fs.Infof(b.f, "Commiting uploads - please wait...")
// show that batcher is shutting down
close(b.closed)
// quit the commitLoop by sending a quitRequest message
//
// Note that we don't close b.in because that will
// cause write to closed channel in Commit when we are
// exiting due to a signal.
b.in <- quitRequest
b.wg.Wait()
})
}
// Commit commits the file using a batch call, first adding it to the
// batch and then waiting for the batch to complete in a synchronous
// way if async is not set.
func (b *batcher) Commit(ctx context.Context, commitInfo *files.UploadSessionFinishArg) (entry *files.FileMetadata, err error) {
select {
case <-b.closed:
return nil, fserrors.FatalError(errors.New("batcher is shutting down"))
default:
}
fs.Debugf(b.f, "Adding %q to batch", commitInfo.Commit.Path)
resp := make(chan batcherResponse, 1)
b.in <- batcherRequest{
commitInfo: commitInfo,
result: resp,
}
// If running async then don't wait for the result
if b.async {
return nil, nil
}
result := <-resp
return result.entry, result.err
}
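A hedged sketch of how the upload path is expected to hand a finished upload session to the batcher (the batcher field name, the commitInfo value and the logging are illustrative; the real call sites are in the Dropbox backend changes whose diff is suppressed below):

	// commitInfo is the *files.UploadSessionFinishArg describing the finished upload session
	entry, err := f.batcher.Commit(ctx, commitInfo)
	if err != nil {
		return err
	}
	if entry == nil {
		// async batch mode: the commit happens later, so no file metadata is available yet
		fs.Debugf(f, "Upload queued for asynchronous batch commit")
	} else {
		fs.Debugf(f, "Upload committed as %q", entry.Name)
	}

In sync mode Commit blocks until the whole batch containing the file has been committed; in async mode it returns (nil, nil) immediately and any batch errors are only logged by the commit loop.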

File diff suppressed because it is too large


@@ -4,6 +4,7 @@ import (
"context" "context"
"io" "io"
"net/http" "net/http"
"net/url"
"regexp" "regexp"
"strconv" "strconv"
"strings" "strings"
@@ -28,7 +29,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Detect this error which the integration tests provoke // Detect this error which the integration tests provoke
// error HTTP error 403 (403 Forbidden) returned body: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}" // error HTTP error 403 (403 Forbidden) returned body: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}"
// //
@@ -48,10 +52,46 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, leaf string, directoryID string, err error) {
// Create the directory for the object if it doesn't exist
leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, leaf, directoryID, nil
}
func (f *Fs) readFileInfo(ctx context.Context, url string) (*File, error) {
request := FileInfoRequest{
URL: url,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/info.cgi",
}
var file File
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &file)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't read file info")
}
return &file, err
}
func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) { func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
request := DownloadRequest{ request := DownloadRequest{
URL: url, URL: url,
Single: 1, Single: 1,
Pass: f.opt.FilePassword,
} }
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
@@ -61,7 +101,7 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
var token GetTokenResponse var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &token) resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't list files") return nil, errors.Wrap(err, "couldn't list files")
@@ -80,16 +120,22 @@ func fileFromSharedFile(file *SharedFile) File {
func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) { func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "GET", Method: "GET",
RootURL: "https://1fichier.com/dir/", RootURL: "https://1fichier.com/dir/",
Path: id, Path: id,
Parameters: map[string][]string{"json": {"1"}}, Parameters: map[string][]string{"json": {"1"}},
ContentType: "application/x-www-form-urlencoded",
}
if f.opt.FolderPassword != "" {
opts.Method = "POST"
opts.Parameters = nil
opts.Body = strings.NewReader("json=1&pass=" + url.QueryEscape(f.opt.FolderPassword))
} }
var sharedFiles SharedFolderResponse var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles) resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't list files") return nil, errors.Wrap(err, "couldn't list files")
@@ -118,7 +164,7 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
filesList = &FilesList{} filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList) resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't list files") return nil, errors.Wrap(err, "couldn't list files")
@@ -146,7 +192,7 @@ func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *Fol
foldersList = &FoldersList{} foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList) resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't list folders") return nil, errors.Wrap(err, "couldn't list folders")
@@ -240,7 +286,7 @@ func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (respons
response = &MakeFolderResponse{} response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, response) resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't create folder") return nil, errors.Wrap(err, "couldn't create folder")
@@ -267,13 +313,13 @@ func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (respo
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, request, response) resp, err = f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't remove folder") return nil, errors.Wrap(err, "couldn't remove folder")
} }
if response.Status != "OK" { if response.Status != "OK" {
return nil, errors.New("Can't remove non-empty dir") return nil, errors.Errorf("can't remove folder: %s", response.Message)
} }
// fs.Debugf(f, "Removed Folder with id `%s`", directoryID) // fs.Debugf(f, "Removed Folder with id `%s`", directoryID)
@@ -296,7 +342,7 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
response = &GenericOKResponse{} response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response) resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -308,6 +354,84 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
return response, nil return response, nil
} }
func (f *Fs) moveFile(ctx context.Context, url string, folderID int, rename string) (response *MoveFileResponse, err error) {
request := &MoveFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/mv.cgi",
}
response = &MoveFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
}
return response, nil
}
func (f *Fs) copyFile(ctx context.Context, url string, folderID int, rename string) (response *CopyFileResponse, err error) {
request := &CopyFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/cp.cgi",
}
response = &CopyFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
}
return response, nil
}
func (f *Fs) renameFile(ctx context.Context, url string, newName string) (response *RenameFileResponse, err error) {
request := &RenameFileRequest{
URLs: []RenameFileURL{
{
URL: url,
Filename: newName,
},
},
}
opts := rest.Opts{
Method: "POST",
Path: "/file/rename.cgi",
}
response = &RenameFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't rename file")
}
return response, nil
}
func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) { func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node") // fs.Debugf(f, "Requesting Upload node")
@@ -320,7 +444,7 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
response = &GetUploadNodeResponse{} response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response) resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "didnt got an upload node") return nil, errors.Wrap(err, "didnt got an upload node")
@@ -363,7 +487,7 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
err = f.pacer.CallNoRetry(func() (bool, error) { err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, nil) resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -397,7 +521,7 @@ func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (re
response = &EndFileUploadResponse{} response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response) resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {

View File

@@ -35,9 +35,7 @@ func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
Name: "fichier", Name: "fichier",
Description: "1Fichier", Description: "1Fichier",
Config: func(ctx context.Context, name string, config configmap.Mapper) { NewFs: NewFs,
},
NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl", Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key", Name: "api_key",
@@ -46,6 +44,18 @@ func init() {
Name: "shared_folder", Name: "shared_folder",
Required: false, Required: false,
Advanced: true, Advanced: true,
}, {
Help: "If you want to download a shared file that is password protected, add this parameter",
Name: "file_password",
Required: false,
Advanced: true,
IsPassword: true,
}, {
Help: "If you want to list the files in a shared folder that is password protected, add this parameter",
Name: "folder_password",
Required: false,
Advanced: true,
IsPassword: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -77,9 +87,11 @@ func init() {
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
APIKey string `config:"api_key"` APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"` SharedFolder string `config:"shared_folder"`
Enc encoder.MultiEncoder `config:"encoding"` FilePassword string `config:"file_password"`
FolderPassword string `config:"folder_password"`
Enc encoder.MultiEncoder `config:"encoding"`
} }
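For illustration only, a hypothetical remote using the two new options might look roughly like the snippet below; the remote name and values are placeholders, and because both options are marked IsPassword the stored values must be obscured (rclone config does this automatically, or use rclone obscure).

[fichier-shared]
type = fichier
api_key = XXXXXXXXXXXXXXXX
shared_folder = abcdefghij
folder_password = <obscured password>
file_password = <obscured password>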
// Fs is the interface a cloud storage system must provide // Fs is the interface a cloud storage system must provide
@@ -348,8 +360,10 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err return nil, err
} }
if len(fileUploadResponse.Links) != 1 { if len(fileUploadResponse.Links) == 0 {
return nil, errors.New("unexpected amount of files") return nil, errors.New("upload response not found")
} else if len(fileUploadResponse.Links) > 1 {
fs.Debugf(remote, "Multiple upload responses found, using the first")
} }
link := fileUploadResponse.Links[0] link := fileUploadResponse.Links[0]
@@ -363,7 +377,6 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
fs: f, fs: f,
remote: remote, remote: remote,
file: File{ file: File{
ACL: 0,
CDN: 0, CDN: 0,
Checksum: link.Whirlpool, Checksum: link.Whirlpool,
ContentType: "", ContentType: "",
@@ -416,9 +429,109 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return nil return nil
} }
// Move src to this remote using server side move operations.
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if err != nil {
return nil, errors.Wrap(err, "couldn't rename file")
}
if resp.Status != "OK" {
return nil, errors.Errorf("couldn't rename file: %s", resp.Message)
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
}
if resp.Status != "OK" {
return nil, errors.Errorf("couldn't move file: %s", resp.Message)
}
url = resp.URLs[0]
}
file, err := f.readFileInfo(ctx, url)
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// Copy src to this remote using server side copy operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.copyFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, errors.Wrap(err, "couldn't move file")
}
if resp.Status != "OK" {
return nil, errors.Errorf("couldn't move file: %s", resp.Message)
}
file, err := f.readFileInfo(ctx, resp.URLs[0].ToURL)
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
o, err := f.NewObject(ctx, remote)
if err != nil {
return "", err
}
return o.(*Object).file.URL, nil
}
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil) _ dircache.DirCacher = (*Fs)(nil)
) )

View File

@@ -72,6 +72,10 @@ func (o *Object) SetModTime(context.Context, time.Time) error {
//return errors.New("setting modtime is not supported for 1fichier remotes") //return errors.New("setting modtime is not supported for 1fichier remotes")
} }
func (o *Object) setMetaData(file File) {
o.file = file
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser // Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, o.file.Size) fs.FixRangeOption(options, o.file.Size)
@@ -90,7 +94,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(ctx, &opts) resp, err = o.fs.rest.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {

View File

@@ -1,5 +1,10 @@
package fichier package fichier
// FileInfoRequest is the request structure of the corresponding request
type FileInfoRequest struct {
URL string `json:"url"`
}
// ListFolderRequest is the request structure of the corresponding request // ListFolderRequest is the request structure of the corresponding request
type ListFolderRequest struct { type ListFolderRequest struct {
FolderID int `json:"folder_id"` FolderID int `json:"folder_id"`
@@ -14,6 +19,7 @@ type ListFilesRequest struct {
type DownloadRequest struct { type DownloadRequest struct {
URL string `json:"url"` URL string `json:"url"`
Single int `json:"single"` Single int `json:"single"`
Pass string `json:"pass,omitempty"`
} }
// RemoveFolderRequest is the request structure of the corresponding request // RemoveFolderRequest is the request structure of the corresponding request
@@ -49,6 +55,65 @@ type MakeFolderResponse struct {
FolderID int `json:"folder_id"` FolderID int `json:"folder_id"`
} }
// MoveFileRequest is the request structure of the corresponding request
type MoveFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"destination_folder_id"`
Rename string `json:"rename,omitempty"`
}
// MoveFileResponse is the response structure of the corresponding request
type MoveFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
URLs []string `json:"urls"`
}
// CopyFileRequest is the request structure of the corresponding request
type CopyFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"folder_id"`
Rename string `json:"rename,omitempty"`
}
// CopyFileResponse is the response structure of the corresponding request
type CopyFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Copied int `json:"copied"`
URLs []FileCopy `json:"urls"`
}
// FileCopy is used in the the CopyFileResponse
type FileCopy struct {
FromURL string `json:"from_url"`
ToURL string `json:"to_url"`
}
// RenameFileURL is the data structure to rename a single file
type RenameFileURL struct {
URL string `json:"url"`
Filename string `json:"filename"`
}
// RenameFileRequest is the request structure of the corresponding request
type RenameFileRequest struct {
URLs []RenameFileURL `json:"urls"`
Pretty int `json:"pretty"`
}
// RenameFileResponse is the response structure of the corresponding request
type RenameFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Renamed int `json:"renamed"`
URLs []struct {
URL string `json:"url"`
OldFilename string `json:"old_filename"`
NewFilename string `json:"new_filename"`
} `json:"urls"`
}
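As an illustration of the wire format implied by these struct tags, here is a small standalone Go sketch that marshals the rename request shape; the URL and filename values are made up.

// Standalone sketch: prints the JSON a rename request would carry.
package main

import (
	"encoding/json"
	"fmt"
)

type renameFileURL struct {
	URL      string `json:"url"`
	Filename string `json:"filename"`
}

type renameFileRequest struct {
	URLs   []renameFileURL `json:"urls"`
	Pretty int             `json:"pretty"`
}

func main() {
	req := renameFileRequest{
		URLs: []renameFileURL{{URL: "https://1fichier.com/?example", Filename: "new-name.txt"}},
	}
	out, _ := json.Marshal(req)
	fmt.Println(string(out))
	// {"urls":[{"url":"https://1fichier.com/?example","filename":"new-name.txt"}],"pretty":0}
}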
// GetUploadNodeResponse is the response structure of the corresponding request // GetUploadNodeResponse is the response structure of the corresponding request
type GetUploadNodeResponse struct { type GetUploadNodeResponse struct {
ID string `json:"id"` ID string `json:"id"`
@@ -86,7 +151,6 @@ type EndFileUploadResponse struct {
// File is the structure how 1Fichier returns a File // File is the structure how 1Fichier returns a File
type File struct { type File struct {
ACL int `json:"acl"`
CDN int `json:"cdn"` CDN int `json:"cdn"`
Checksum string `json:"checksum"` Checksum string `json:"checksum"`
ContentType string `json:"content-type"` ContentType string `json:"content-type"`

View File

@@ -5,6 +5,7 @@ package api
import ( import (
"bytes" "bytes"
"encoding/json"
"fmt" "fmt"
"reflect" "reflect"
"strings" "strings"
@@ -51,6 +52,23 @@ func (t Time) String() string {
return time.Time(t).UTC().Format(timeFormatParameters) return time.Time(t).UTC().Format(timeFormatParameters)
} }
// Int represents an integer which can be represented in JSON as a
// quoted integer or an integer.
type Int int
// MarshalJSON turns a Int into JSON
func (i *Int) MarshalJSON() (out []byte, err error) {
return json.Marshal((*int)(i))
}
// UnmarshalJSON turns JSON into a Int
func (i *Int) UnmarshalJSON(data []byte) error {
if len(data) >= 2 && data[0] == '"' && data[len(data)-1] == '"' {
data = data[1 : len(data)-1]
}
return json.Unmarshal(data, (*int)(i))
}
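A standalone sketch of how this flexible Int behaves (the page struct and values are illustrative): the same field decodes whether the server sends the number bare or quoted.

package main

import (
	"encoding/json"
	"fmt"
)

// Int mirrors the type above: accepts 42 or "42".
type Int int

// UnmarshalJSON strips surrounding quotes, if any, then decodes as an int.
func (i *Int) UnmarshalJSON(data []byte) error {
	if len(data) >= 2 && data[0] == '"' && data[len(data)-1] == '"' {
		data = data[1 : len(data)-1]
	}
	return json.Unmarshal(data, (*int)(i))
}

type page struct {
	From Int `json:"from"`
}

func main() {
	var a, b page
	_ = json.Unmarshal([]byte(`{"from": 42}`), &a)
	_ = json.Unmarshal([]byte(`{"from": "42"}`), &b)
	fmt.Println(a.From, b.From) // 42 42
}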
// Status return returned in all status responses // Status return returned in all status responses
type Status struct { type Status struct {
Code string `json:"status"` Code string `json:"status"`
@@ -115,7 +133,7 @@ type GetFolderContentsResponse struct {
Total int `json:"total,string"` Total int `json:"total,string"`
Items []Item `json:"filelist"` Items []Item `json:"filelist"`
Folder Item `json:"folder"` Folder Item `json:"folder"`
From int `json:"from,string"` From Int `json:"from"`
//Count int `json:"count"` //Count int `json:"count"`
Pid string `json:"pid"` Pid string `json:"pid"`
RefreshResult Status `json:"refreshresult"` RefreshResult Status `json:"refreshresult"`

View File

@@ -228,7 +228,10 @@ var retryStatusCodes = []struct {
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error, status api.OKError) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error, status api.OKError) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil { if err != nil {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -401,7 +404,7 @@ func (f *Fs) rpc(ctx context.Context, function string, p params, result api.OKEr
// Refresh the body each retry // Refresh the body each retry
opts.Body = strings.NewReader(data.Encode()) opts.Body = strings.NewReader(data.Encode())
resp, err = f.srv.CallJSON(ctx, &opts, nil, result) resp, err = f.srv.CallJSON(ctx, &opts, nil, result)
return f.shouldRetry(resp, err, result) return f.shouldRetry(ctx, resp, err, result)
}) })
if err != nil { if err != nil {
return resp, err return resp, err
@@ -1277,7 +1280,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &uploader) resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &uploader)
return o.fs.shouldRetry(resp, err, nil) return o.fs.shouldRetry(ctx, resp, err, nil)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to upload") return errors.Wrap(err, "failed to upload")

View File

@@ -5,6 +5,7 @@ import (
"context" "context"
"crypto/tls" "crypto/tls"
"io" "io"
"net"
"net/textproto" "net/textproto"
"path" "path"
"runtime" "runtime"
@@ -20,6 +21,8 @@ import (
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/env"
@@ -31,6 +34,12 @@ var (
currentUser = env.CurrentUser() currentUser = env.CurrentUser()
) )
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
// Register with Fs // Register with Fs
func init() { func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
@@ -91,6 +100,22 @@ to an encrypted one. Cannot be used in combination with implicit FTP.`,
Help: "Disable using MLSD even if server advertises support", Help: "Disable using MLSD even if server advertises support",
Default: false, Default: false,
Advanced: true, Advanced: true,
}, {
Name: "idle_timeout",
Default: fs.Duration(60 * time.Second),
Help: `Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
`,
Advanced: true,
}, {
Name: "close_timeout",
Help: "Maximum time to wait for a response to close.",
Default: fs.Duration(60 * time.Second),
Advanced: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -118,6 +143,8 @@ type Options struct {
SkipVerifyTLSCert bool `config:"no_check_certificate"` SkipVerifyTLSCert bool `config:"no_check_certificate"`
DisableEPSV bool `config:"disable_epsv"` DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"` DisableMLSD bool `config:"disable_mlsd"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
} }
@@ -134,7 +161,10 @@ type Fs struct {
dialAddr string dialAddr string
poolMu sync.Mutex poolMu sync.Mutex
pool []*ftp.ServerConn pool []*ftp.ServerConn
drain *time.Timer // used to drain the pool when we stop using the connections
tokens *pacer.TokenDispenser tokens *pacer.TokenDispenser
tlsConf *tls.Config
pacer *fs.Pacer // pacer for FTP connections
} }
// Object describes an FTP file // Object describes an FTP file
@@ -211,25 +241,48 @@ func (dl *debugLog) Write(p []byte) (n int, err error) {
return len(p), nil return len(p), nil
} }
// shouldRetry returns a boolean as to whether this err deserve to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusNotAvailable:
return true, err
}
}
return fserrors.ShouldRetry(err), err
}
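A hypothetical, standalone check of that retry decision (not part of the change; it assumes the jlaffaye/ftp and rclone fserrors packages this backend already uses): a 421 reply is transient and retried, a permanent 550 is not.

package main

import (
	"context"
	"fmt"
	"net/textproto"

	"github.com/jlaffaye/ftp"
	"github.com/rclone/rclone/fs/fserrors"
)

// shouldRetry copies the decision logic shown above.
func shouldRetry(ctx context.Context, err error) (bool, error) {
	if fserrors.ContextError(ctx, &err) {
		return false, err
	}
	if errX, ok := err.(*textproto.Error); ok && errX.Code == ftp.StatusNotAvailable {
		return true, err
	}
	return fserrors.ShouldRetry(err), err
}

func main() {
	ctx := context.Background()
	retry, _ := shouldRetry(ctx, &textproto.Error{Code: ftp.StatusNotAvailable, Msg: "Too many connections"})
	fmt.Println(retry) // true - worth another attempt
	retry, _ = shouldRetry(ctx, &textproto.Error{Code: ftp.StatusFileUnavailable, Msg: "No such file"})
	fmt.Println(retry) // false - a permanent failure
}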
// Open a new connection to the FTP server. // Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (*ftp.ServerConn, error) { func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
fs.Debugf(f, "Connecting to FTP server") fs.Debugf(f, "Connecting to FTP server")
ftpConfig := []ftp.DialOption{ftp.DialWithTimeout(f.ci.ConnectTimeout)}
if f.opt.TLS && f.opt.ExplicitTLS { // Make ftp library dial with fshttp dialer optionally using TLS
fs.Errorf(f, "Implicit TLS and explicit TLS are mutually incompatible. Please revise your config") dial := func(network, address string) (conn net.Conn, err error) {
return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config") conn, err = fshttp.NewDialer(ctx).Dial(network, address)
} else if f.opt.TLS { if f.tlsConf != nil && err == nil {
tlsConfig := &tls.Config{ conn = tls.Client(conn, f.tlsConf)
ServerName: f.opt.Host,
InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
} }
ftpConfig = append(ftpConfig, ftp.DialWithTLS(tlsConfig)) return
}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dial)}
if f.opt.TLS {
// Our dialer takes care of TLS but ftp library also needs tlsConf
// as a trigger for sending PBSZ and PROT options to server.
ftpConfig = append(ftpConfig, ftp.DialWithTLS(f.tlsConf))
} else if f.opt.ExplicitTLS { } else if f.opt.ExplicitTLS {
tlsConfig := &tls.Config{ ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(f.tlsConf))
ServerName: f.opt.Host, // Initial connection needs to be cleartext for explicit TLS
InsecureSkipVerify: f.opt.SkipVerifyTLSCert, conn, err := fshttp.NewDialer(ctx).Dial("tcp", f.dialAddr)
if err != nil {
return nil, err
} }
ftpConfig = append(ftpConfig, ftp.DialWithExplicitTLS(tlsConfig)) ftpConfig = append(ftpConfig, ftp.DialWithNetConn(conn))
} }
if f.opt.DisableEPSV { if f.opt.DisableEPSV {
ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true)) ftpConfig = append(ftpConfig, ftp.DialWithDisabledEPSV(true))
@@ -240,18 +293,22 @@ func (f *Fs) ftpConnection(ctx context.Context) (*ftp.ServerConn, error) {
if f.ci.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 { if f.ci.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 {
ftpConfig = append(ftpConfig, ftp.DialWithDebugOutput(&debugLog{auth: f.ci.Dump&fs.DumpAuth != 0})) ftpConfig = append(ftpConfig, ftp.DialWithDebugOutput(&debugLog{auth: f.ci.Dump&fs.DumpAuth != 0}))
} }
c, err := ftp.Dial(f.dialAddr, ftpConfig...) err = f.pacer.Call(func() (bool, error) {
c, err = ftp.Dial(f.dialAddr, ftpConfig...)
if err != nil {
return shouldRetry(ctx, err)
}
err = c.Login(f.user, f.pass)
if err != nil {
_ = c.Quit()
return shouldRetry(ctx, err)
}
return false, nil
})
if err != nil { if err != nil {
fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err) err = errors.Wrapf(err, "failed to make FTP connection to %q", f.dialAddr)
return nil, errors.Wrap(err, "ftpConnection Dial")
} }
err = c.Login(f.user, f.pass) return c, err
if err != nil {
_ = c.Quit()
fs.Errorf(f, "Error while Logging in into %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Login")
}
return c, nil
} }
// Get an FTP connection from the pool, or open a new one // Get an FTP connection from the pool, or open a new one
@@ -308,9 +365,32 @@ func (f *Fs) putFtpConnection(pc **ftp.ServerConn, err error) {
} }
f.poolMu.Lock() f.poolMu.Lock()
f.pool = append(f.pool, c) f.pool = append(f.pool, c)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
f.poolMu.Unlock() f.poolMu.Unlock()
} }
// Drain the pool of any connections
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
if len(f.pool) != 0 {
fs.Debugf(f, "closing %d unused connections", len(f.pool))
}
for i, c := range f.pool {
if cErr := c.Quit(); cErr != nil {
err = cErr
}
f.pool[i] = nil
}
f.pool = nil
return err
}
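The pool draining above boils down to a timer pattern: a time.AfterFunc timer empties the pool after the idle period, and every connection returned to the pool pushes the deadline back. A condensed standalone sketch of the same idea, with generic names rather than rclone code:

package main

import (
	"fmt"
	"sync"
	"time"
)

type pool struct {
	mu    sync.Mutex
	items []int
	drain *time.Timer
	idle  time.Duration
}

func newPool(idle time.Duration) *pool {
	p := &pool{idle: idle}
	// Fires once the pool has been idle for the whole period
	p.drain = time.AfterFunc(idle, p.empty)
	return p
}

func (p *pool) put(v int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.items = append(p.items, v)
	p.drain.Reset(p.idle) // nudge the drain deadline on every return
}

func (p *pool) empty() {
	p.mu.Lock()
	defer p.mu.Unlock()
	fmt.Printf("draining %d idle items\n", len(p.items))
	p.items = nil
}

func main() {
	p := newPool(100 * time.Millisecond)
	p.put(1)
	time.Sleep(200 * time.Millisecond) // long enough for the drain to fire
}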
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs, err error) { func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
// defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err) // defer fs.Trace(nil, "name=%q, root=%q", name, root)("fs=%v, err=%v", &ff, &err)
@@ -338,6 +418,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
if opt.TLS { if opt.TLS {
protocol = "ftps://" protocol = "ftps://"
} }
if opt.TLS && opt.ExplicitTLS {
return nil, errors.New("Implicit TLS and explicit TLS are mutually incompatible. Please revise your config")
}
var tlsConfig *tls.Config
if opt.TLS || opt.ExplicitTLS {
tlsConfig = &tls.Config{
ServerName: opt.Host,
InsecureSkipVerify: opt.SkipVerifyTLSCert,
}
}
u := protocol + path.Join(dialAddr+"/", root) u := protocol + path.Join(dialAddr+"/", root)
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
f := &Fs{ f := &Fs{
@@ -350,10 +440,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
pass: pass, pass: pass,
dialAddr: dialAddr, dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency), tokens: pacer.NewTokenDispenser(opt.Concurrency),
tlsConf: tlsConfig,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
}).Fill(ctx, f) }).Fill(ctx, f)
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
// Make a connection and pool it to return errors early // Make a connection and pool it to return errors early
c, err := f.getFtpConnection(ctx) c, err := f.getFtpConnection(ctx)
if err != nil { if err != nil {
@@ -382,6 +478,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
return f, err return f, err
} }
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
return f.drainPool(ctx)
}
// translateErrorFile turns FTP errors into rclone errors if possible for a file // translateErrorFile turns FTP errors into rclone errors if possible for a file
func translateErrorFile(err error) error { func translateErrorFile(err error) error {
switch errX := err.(type) { switch errX := err.(type) {
@@ -527,7 +629,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}() }()
// Wait for List for up to Timeout seconds // Wait for List for up to Timeout seconds
timer := time.NewTimer(f.ci.Timeout) timer := time.NewTimer(f.ci.TimeoutOrInfinite())
select { select {
case listErr = <-errchan: case listErr = <-errchan:
timer.Stop() timer.Stop()
@@ -860,8 +962,8 @@ func (f *ftpReadCloser) Close() error {
go func() { go func() {
errchan <- f.rc.Close() errchan <- f.rc.Close()
}() }()
// Wait for Close for up to 60 seconds // Wait for Close for up to 60 seconds by default
timer := time.NewTimer(60 * time.Second) timer := time.NewTimer(time.Duration(f.f.opt.CloseTimeout))
select { select {
case err = <-errchan: case err = <-errchan:
timer.Stop() timer.Stop()
@@ -990,5 +1092,6 @@ var (
_ fs.Mover = &Fs{} _ fs.Mover = &Fs{}
_ fs.DirMover = &Fs{} _ fs.DirMover = &Fs{}
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.Shutdowner = &Fs{}
_ fs.Object = &Object{} _ fs.Object = &Object{}
) )

View File

@@ -19,9 +19,9 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"path" "path"
"strconv"
"strings" "strings"
"time" "time"
@@ -51,10 +51,10 @@ import (
const ( const (
rcloneClientID = "202264815644.apps.googleusercontent.com" rcloneClientID = "202264815644.apps.googleusercontent.com"
rcloneEncryptedClientSecret = "Uj7C9jGfb9gmeaV70Lh058cNkWvepr-Es9sBm0zdgil7JaOWF1VySw" rcloneEncryptedClientSecret = "Uj7C9jGfb9gmeaV70Lh058cNkWvepr-Es9sBm0zdgil7JaOWF1VySw"
timeFormatIn = time.RFC3339 timeFormat = time.RFC3339Nano
timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00" metaMtime = "mtime" // key to store mtime in metadata
metaMtime = "mtime" // key to store mtime under in metadata metaMtimeGsutil = "goog-reserved-file-mtime" // key used by GSUtil to store mtime in metadata
listChunks = 1000 // chunk size to read directory listings listChunks = 1000 // chunk size to read directory listings
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
) )
@@ -76,17 +76,16 @@ func init() {
Prefix: "gcs", Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)", Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
saFile, _ := m.Get("service_account_file") saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials") saCreds, _ := m.Get("service_account_credentials")
anonymous, _ := m.Get("anonymous") anonymous, _ := m.Get("anonymous")
if saFile != "" || saCreds != "" || anonymous == "true" { if saFile != "" || saCreds != "" || anonymous == "true" {
return return nil, nil
}
err := oauthutil.Config(ctx, "google cloud storage", name, m, storageConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
} }
return oauthutil.ConfigOut("", &oauthutil.Options{
OAuth2Config: storageConfig,
})
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "project_number", Name: "project_number",
@@ -329,7 +328,10 @@ func (f *Fs) Features() *fs.Features {
} }
// shouldRetry determines whether a given err rates being retried // shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) { func shouldRetry(ctx context.Context, err error) (again bool, errOut error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
again = false again = false
if err != nil { if err != nil {
if fserrors.ShouldRetry(err) { if fserrors.ShouldRetry(err) {
@@ -455,7 +457,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do() _, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
newRoot := path.Dir(f.root) newRoot := path.Dir(f.root)
@@ -521,7 +523,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
var objects *storage.Objects var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
objects, err = list.Context(ctx).Do() objects, err = list.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
if gErr, ok := err.(*googleapi.Error); ok { if gErr, ok := err.(*googleapi.Error); ok {
@@ -624,7 +626,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
var buckets *storage.Buckets var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
buckets, err = listBuckets.Context(ctx).Do() buckets, err = listBuckets.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -750,7 +752,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
// service account that only has the "Storage Object Admin" role. See #2193 for details. // service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do() _, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
// Bucket already exists // Bucket already exists
@@ -785,7 +787,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
insertBucket.PredefinedAcl(f.opt.BucketACL) insertBucket.PredefinedAcl(f.opt.BucketACL)
} }
_, err = insertBucket.Context(ctx).Do() _, err = insertBucket.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
}, nil) }, nil)
} }
@@ -802,7 +804,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return f.cache.Remove(bucket, func() error { return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Context(ctx).Do() err = f.svc.Buckets.Delete(bucket).Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
}) })
} }
@@ -848,7 +850,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
for { for {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
rewriteResponse, err = rewriteRequest.Context(ctx).Do() rewriteResponse, err = rewriteRequest.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -919,7 +921,7 @@ func (o *Object) setMetaData(info *storage.Object) {
// read mtime out of metadata if available // read mtime out of metadata if available
mtimeString, ok := info.Metadata[metaMtime] mtimeString, ok := info.Metadata[metaMtime]
if ok { if ok {
modTime, err := time.Parse(timeFormatIn, mtimeString) modTime, err := time.Parse(timeFormat, mtimeString)
if err == nil { if err == nil {
o.modTime = modTime o.modTime = modTime
return return
@@ -927,8 +929,19 @@ func (o *Object) setMetaData(info *storage.Object) {
fs.Debugf(o, "Failed to read mtime from metadata: %s", err) fs.Debugf(o, "Failed to read mtime from metadata: %s", err)
} }
// Fallback to GSUtil mtime
mtimeGsutilString, ok := info.Metadata[metaMtimeGsutil]
if ok {
unixTimeSec, err := strconv.ParseInt(mtimeGsutilString, 10, 64)
if err == nil {
o.modTime = time.Unix(unixTimeSec, 0)
return
}
fs.Debugf(o, "Failed to read GSUtil mtime from metadata: %s", err)
}
// Fallback to the Updated time // Fallback to the Updated time
modTime, err := time.Parse(timeFormatIn, info.Updated) modTime, err := time.Parse(timeFormat, info.Updated)
if err != nil { if err != nil {
fs.Logf(o, "Bad time decode: %v", err) fs.Logf(o, "Bad time decode: %v", err)
} else { } else {
@@ -941,7 +954,7 @@ func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, er
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do() object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
if gErr, ok := err.(*googleapi.Error); ok { if gErr, ok := err.(*googleapi.Error); ok {
@@ -985,7 +998,8 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// Returns metadata for an object // Returns metadata for an object
func metadataFromModTime(modTime time.Time) map[string]string { func metadataFromModTime(modTime time.Time) map[string]string {
metadata := make(map[string]string, 1) metadata := make(map[string]string, 1)
metadata[metaMtime] = modTime.Format(timeFormatOut) metadata[metaMtime] = modTime.Format(timeFormat)
metadata[metaMtimeGsutil] = strconv.FormatInt(modTime.Unix(), 10)
return metadata return metadata
} }
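As a worked example of the metadata this now writes, a standalone sketch mirroring metadataFromModTime (the sample time is made up): the same modification time is stored once in rclone's own key and once in gsutil's.

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	modTime := time.Date(2021, 6, 1, 12, 0, 0, 0, time.UTC)
	metadata := map[string]string{
		"mtime":                    modTime.Format(time.RFC3339Nano),      // rclone's key
		"goog-reserved-file-mtime": strconv.FormatInt(modTime.Unix(), 10), // gsutil's key
	}
	fmt.Println(metadata["mtime"])                    // 2021-06-01T12:00:00Z
	fmt.Println(metadata["goog-reserved-file-mtime"]) // 1622548800
}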
@@ -997,11 +1011,11 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
return err return err
} }
// Add the mtime to the existing metadata // Add the mtime to the existing metadata
mtime := modTime.Format(timeFormatOut)
if object.Metadata == nil { if object.Metadata == nil {
object.Metadata = make(map[string]string, 1) object.Metadata = make(map[string]string, 1)
} }
object.Metadata[metaMtime] = mtime object.Metadata[metaMtime] = modTime.Format(timeFormat)
object.Metadata[metaMtimeGsutil] = strconv.FormatInt(modTime.Unix(), 10)
// Copy the object to itself to update the metadata // Copy the object to itself to update the metadata
// Using PATCH requires too many permissions // Using PATCH requires too many permissions
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
@@ -1012,7 +1026,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL) copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
} }
newObject, err = copyObject.Context(ctx).Do() newObject, err = copyObject.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1043,7 +1057,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
_ = res.Body.Close() // ignore error _ = res.Body.Close() // ignore error
} }
} }
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1109,7 +1123,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
insertObject.PredefinedAcl(o.fs.opt.ObjectACL) insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
} }
newObject, err = insertObject.Context(ctx).Do() newObject, err = insertObject.Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1124,7 +1138,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do() err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
return err return err
} }

View File

@@ -8,7 +8,6 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io"
golog "log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -78,36 +77,36 @@ func init() {
Prefix: "gphotos", Prefix: "gphotos",
Description: "Google Photos", Description: "Google Photos",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
if err != nil { if err != nil {
fs.Errorf(nil, "Couldn't parse config into struct: %v", err) return nil, errors.Wrap(err, "couldn't parse config into struct")
return
} }
// Fill in the scopes switch config.State {
if opt.ReadOnly { case "":
oauthConfig.Scopes[0] = scopeReadOnly // Fill in the scopes
} else { if opt.ReadOnly {
oauthConfig.Scopes[0] = scopeReadWrite oauthConfig.Scopes[0] = scopeReadOnly
} else {
oauthConfig.Scopes[0] = scopeReadWrite
}
return oauthutil.ConfigOut("warning", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
case "warning":
// Warn the user as required by google photos integration
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
case "warning_done":
return nil, nil
} }
return nil, fmt.Errorf("unknown state %q", config.State)
// Do the oauth
err = oauthutil.Config(ctx, "google photos", name, m, oauthConfig, nil)
if err != nil {
golog.Fatalf("Failed to configure token: %v", err)
}
// Warn the user
fmt.Print(`
*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality. These uploads
*** will count towards storage in your Google Account.
`)
}, },
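For orientation, a minimal sketch (not taken from this change) of the state-machine style Config used here: each call handles one config.State and either returns another question for the user or nil when finished. It assumes only the fs.ConfigIn/ConfigOut helpers visible above.

// exampleConfig is a hypothetical backend Config function.
func exampleConfig(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
	switch config.State {
	case "":
		// First call: ask a yes/no question, resuming in state "confirmed"
		return fs.ConfigConfirm("confirmed", true, "config_example", "Proceed with setup?")
	case "confirmed":
		if config.Result != "true" {
			return nil, errors.New("setup aborted")
		}
		return nil, nil // all done
	}
	return nil, fmt.Errorf("unknown state %q", config.State)
}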
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "read_only", Name: "read_only",
@@ -240,7 +239,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -329,7 +331,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
var openIDconfig map[string]interface{} var openIDconfig map[string]interface{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig) resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", errors.Wrap(err, "couldn't read openID config") return "", errors.Wrap(err, "couldn't read openID config")
@@ -358,7 +360,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo) resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't read user info") return nil, errors.Wrap(err, "couldn't read user info")
@@ -389,7 +391,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
var res interface{} var res interface{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res) resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't revoke token") return errors.Wrap(err, "couldn't revoke token")
@@ -476,7 +478,7 @@ func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err erro
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't list albums") return nil, errors.Wrap(err, "couldn't list albums")
@@ -531,7 +533,7 @@ func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result) resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't list files") return errors.Wrap(err, "couldn't list files")
@@ -675,7 +677,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, request, &result) resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't create album") return nil, errors.Wrap(err, "couldn't create album")
@@ -810,7 +812,7 @@ func (o *Object) Size() int64 {
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
fs.Debugf(o, "Reading size failed: %v", err) fs.Debugf(o, "Reading size failed: %v", err)
@@ -861,7 +863,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't get media item") return errors.Wrap(err, "couldn't get media item")
@@ -938,7 +940,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -993,10 +995,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil { if err != nil {
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
} }
token, err = rest.ReadBody(resp) token, err = rest.ReadBody(resp)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't upload file") return errors.Wrap(err, "couldn't upload file")
@@ -1024,7 +1026,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var result api.BatchCreateResponse var result api.BatchCreateResponse
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result) resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to create media item") return errors.Wrap(err, "failed to create media item")
@@ -1069,7 +1071,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't delete item from album") return errors.Wrap(err, "couldn't delete item from album")

View File

@@ -109,7 +109,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
dirname := path.Dir(realpath) dirname := path.Dir(realpath)
fs.Debugf(o.fs, "update [%s]", realpath) fs.Debugf(o.fs, "update [%s]", realpath)
err := o.fs.client.MkdirAll(dirname, 755) err := o.fs.client.MkdirAll(dirname, 0755)
if err != nil { if err != nil {
return err return err
} }

View File

@@ -15,7 +15,7 @@ import (
"time" "time"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
@@ -47,7 +47,7 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
ts := httptest.NewServer(handler) ts := httptest.NewServer(handler)
// Configure the remote // Configure the remote
config.LoadConfig(context.Background()) configfile.Install()
// fs.Config.LogLevel = fs.LogLevelDebug // fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true // fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true // fs.Config.DumpBodies = true

View File

@@ -11,7 +11,6 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"strings" "strings"
"time" "time"
@@ -56,11 +55,10 @@ func init() {
Name: "hubic", Name: "hubic",
Description: "Hubic", Description: "Hubic",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
err := oauthutil.Config(ctx, "hubic", name, m, oauthConfig, nil) return oauthutil.ConfigOut("", &oauthutil.Options{
if err != nil { OAuth2Config: oauthConfig,
log.Fatalf("Failed to configure token: %v", err) })
}
}, },
Options: append(oauthutil.SharedOptions, swift.SharedOptions...), Options: append(oauthutil.SharedOptions, swift.SharedOptions...),
}) })

View File

@@ -10,7 +10,6 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"math/rand" "math/rand"
"net/http" "net/http"
"net/url" "net/url"
@@ -49,37 +48,29 @@ const (
rootURL = "https://jfs.jottacloud.com/jfs/" rootURL = "https://jfs.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/" apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/" baseURL = "https://www.jottacloud.com/"
defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
cachePrefix = "rclone-jcmd5-" cachePrefix = "rclone-jcmd5-"
configDevice = "device" configDevice = "device"
configMountpoint = "mountpoint" configMountpoint = "mountpoint"
configTokenURL = "tokenURL" configTokenURL = "tokenURL"
configClientID = "client_id" configClientID = "client_id"
configClientSecret = "client_secret" configClientSecret = "client_secret"
configUsername = "username"
configVersion = 1 configVersion = 1
v1tokenURL = "https://api.jottacloud.com/auth/v1/token" defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
v1registerURL = "https://api.jottacloud.com/auth/v1/register" defaultClientID = "jottacli"
v1ClientID = "nibfk8biu12ju7hpqomr8b1e40"
v1EncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2" legacyTokenURL = "https://api.jottacloud.com/auth/v1/token"
v1configVersion = 0 legacyRegisterURL = "https://api.jottacloud.com/auth/v1/register"
legacyClientID = "nibfk8biu12ju7hpqomr8b1e40"
legacyEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2"
legacyConfigVersion = 0
teliaCloudTokenURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/token" teliaCloudTokenURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/token"
teliaCloudAuthURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/auth" teliaCloudAuthURL = "https://cloud-auth.telia.se/auth/realms/telia_se/protocol/openid-connect/auth"
teliaCloudClientID = "desktop" teliaCloudClientID = "desktop"
) )
var (
// Description of how to auth for this app for a personal account
oauthConfig = &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
},
RedirectURL: oauthutil.RedirectLocalhostURL,
}
)
// Register with Fs // Register with Fs
func init() { func init() {
// needs to be done early so we can use oauth during config // needs to be done early so we can use oauth during config
@@ -87,42 +78,7 @@ func init() {
Name: "jottacloud", Name: "jottacloud",
Description: "Jottacloud", Description: "Jottacloud",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: Config,
refresh := false
if version, ok := m.Get("configVersion"); ok {
ver, err := strconv.Atoi(version)
if err != nil {
log.Fatalf("Failed to parse config version - corrupted config")
}
refresh = (ver != configVersion) && (ver != v1configVersion)
}
if refresh {
fmt.Printf("Config outdated - refreshing\n")
} else {
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm(false) {
return
}
}
}
fmt.Printf("Choose authentication type:\n" +
"1: Standard authentication - use this if you're a normal Jottacloud user.\n" +
"2: Legacy authentication - this is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.\n" +
"3: Telia Cloud authentication - use this if you are using Telia Cloud.\n")
switch config.ChooseNumber("Your choice", 1, 3) {
case 1:
v2config(ctx, name, m)
case 2:
v1config(ctx, name, m)
case 3:
teliaCloudConfig(ctx, name, m)
}
},
Options: []fs.Option{{ Options: []fs.Option{{
Name: "md5_memory_limit", Name: "md5_memory_limit",
Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.", Help: "Files bigger than this will be cached on disk to calculate the MD5 if required.",
@@ -157,6 +113,183 @@ func init() {
}) })
} }
// Config runs the backend configuration protocol
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
switch config.State {
case "":
return fs.ConfigChooseFixed("auth_type_done", "config_type", `Authentication type`, []fs.OptionExample{{
Value: "standard",
Help: "Standard authentication - use this if you're a normal Jottacloud user.",
}, {
Value: "legacy",
Help: "Legacy authentication - this is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.",
}, {
Value: "telia",
Help: "Telia Cloud authentication - use this if you are using Telia Cloud.",
}})
case "auth_type_done":
// Jump to next state according to config chosen
return fs.ConfigGoto(config.Result)
case "standard": // configure a jottacloud backend using the modern JottaCli token based authentication
m.Set("configVersion", fmt.Sprint(configVersion))
return fs.ConfigInput("standard_token", "config_login_token", "Personal login token.\n\nGenerate here: https://www.jottacloud.com/web/secure")
case "standard_token":
loginToken := config.Result
m.Set(configClientID, defaultClientID)
m.Set(configClientSecret, "")
srv := rest.NewClient(fshttp.NewClient(ctx))
token, tokenEndpoint, err := doTokenAuth(ctx, srv, loginToken)
if err != nil {
return nil, errors.Wrap(err, "failed to get oauth token")
}
m.Set(configTokenURL, tokenEndpoint)
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
return nil, errors.Wrap(err, "error while saving token")
}
return fs.ConfigGoto("choose_device")
case "legacy": // configure a jottacloud backend using legacy authentication
m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
return fs.ConfigConfirm("legacy_api", false, "config_machine_specific", `Do you want to create a machine specific API key?
Rclone has its own Jottacloud API KEY which works fine as long as one
only uses rclone on a single machine. When you want to use rclone with
this account on more than one machine it's recommended to create a
machine specific API key. These keys can NOT be shared between
machines.`)
case "legacy_api":
srv := rest.NewClient(fshttp.NewClient(ctx))
if config.Result == "true" {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
return nil, errors.Wrap(err, "failed to register device")
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID %q and clientSecret %q", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
return fs.ConfigInput("legacy_username", "config_username", "Username (e-mail address)")
case "legacy_username":
m.Set(configUsername, config.Result)
return fs.ConfigPassword("legacy_password", "config_password", "Password (only used in setup, will not be stored)")
case "legacy_password":
m.Set("password", config.Result)
m.Set("auth_code", "")
return fs.ConfigGoto("legacy_do_auth")
case "legacy_auth_code":
authCode := strings.Replace(config.Result, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
m.Set("auth_code", authCode)
return fs.ConfigGoto("legacy_do_auth")
case "legacy_do_auth":
username, _ := m.Get(configUsername)
password, _ := m.Get("password")
password = obscure.MustReveal(password)
authCode, _ := m.Get("auth_code")
srv := rest.NewClient(fshttp.NewClient(ctx))
clientID, ok := m.Get(configClientID)
if !ok {
clientID = legacyClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = legacyEncryptedClientSecret
}
oauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: legacyTokenURL,
},
ClientID: clientID,
ClientSecret: obscure.MustReveal(clientSecret),
}
token, err := doLegacyAuth(ctx, srv, oauthConfig, username, password, authCode)
if err == errAuthCodeRequired {
return fs.ConfigInput("legacy_auth_code", "config_auth_code", "Verification Code\nThis account uses 2 factor authentication you will receive a verification code via SMS.")
}
m.Set("password", "")
m.Set("auth_code", "")
if err != nil {
return nil, errors.Wrap(err, "failed to get oauth token")
}
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
return nil, errors.Wrap(err, "error while saving token")
}
return fs.ConfigGoto("choose_device")
case "telia": // telia cloud config
m.Set("configVersion", fmt.Sprint(configVersion))
m.Set(configClientID, teliaCloudClientID)
m.Set(configTokenURL, teliaCloudTokenURL)
return oauthutil.ConfigOut("choose_device", &oauthutil.Options{
OAuth2Config: &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: teliaCloudAuthURL,
TokenURL: teliaCloudTokenURL,
},
ClientID: teliaCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
},
})
case "choose_device":
return fs.ConfigConfirm("choose_device_query", false, "config_non_standard", "Use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?")
case "choose_device_query":
if config.Result != "true" {
m.Set(configDevice, "")
m.Set(configMountpoint, "")
return fs.ConfigGoto("end")
}
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
return nil, err
}
srv := rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
m.Set(configUsername, cust.Username)
acc, err := getDriveInfo(ctx, srv, cust.Username)
if err != nil {
return nil, err
}
return fs.ConfigChoose("choose_device_result", "config_device", `Please select the device to use. Normally this will be Jotta`, len(acc.Devices), func(i int) (string, string) {
return acc.Devices[i].Name, ""
})
case "choose_device_result":
device := config.Result
m.Set(configDevice, device)
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
return nil, err
}
srv := rest.NewClient(oAuthClient).SetRoot(rootURL)
username, _ := m.Get(configUsername)
dev, err := getDeviceInfo(ctx, srv, path.Join(username, device))
if err != nil {
return nil, err
}
return fs.ConfigChoose("choose_device_mountpoint", "config_mountpoint", `Please select the mountpoint to use. Normally this will be Archive.`, len(dev.MountPoints), func(i int) (string, string) {
return dev.MountPoints[i].Name, ""
})
case "choose_device_mountpoint":
mountpoint := config.Result
m.Set(configMountpoint, mountpoint)
return fs.ConfigGoto("end")
case "end":
// All the config flows end up here in case we need to carry on with something
return nil, nil
}
return nil, fmt.Errorf("unknown state %q", config.State)
}
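
The new Config function replaces interactive prompts with a state machine: each call receives the current state and the previous answer, and returns either the next question to ask or nil when the flow is finished. Below is a minimal generic sketch of that pattern; configIn, configOut and configStep are stand-ins invented for illustration, not rclone's actual fs.ConfigIn/fs.ConfigOut API:

package main

import "fmt"

// configIn is a stand-in for the state/result pair the protocol passes around.
type configIn struct {
	State  string // which step of the flow we are in
	Result string // the answer given to the previous question
}

// configOut is a stand-in for "ask this question, then come back in this state".
type configOut struct {
	NextState string
	Question  string
}

// configStep mirrors the shape of the new backend Config functions: each call
// handles exactly one state and either asks another question or finishes (nil).
func configStep(in configIn) (*configOut, error) {
	switch in.State {
	case "":
		return &configOut{NextState: "auth_type_done", Question: "Authentication type?"}, nil
	case "auth_type_done":
		// jump to the state named by the previous answer
		return &configOut{NextState: in.Result}, nil
	case "standard":
		return &configOut{NextState: "end", Question: "Personal login token?"}, nil
	case "end":
		return nil, nil // flow finished
	}
	return nil, fmt.Errorf("unknown state %q", in.State)
}

func main() {
	out, _ := configStep(configIn{State: ""})
	fmt.Println(out.Question) // the caller shows this and feeds the answer back in
}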
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
Device string `config:"device"` Device string `config:"device"`
@@ -217,10 +350,21 @@ func (f *Fs) Features() *fs.Features {
return f.features return f.features
} }
// parsePath parses a box 'url' // joinPath joins two path/url elements
func parsePath(path string) (root string) { //
root = strings.Trim(path, "/") // Does not perform clean on the result like path.Join does,
return // which breaks urls by changing prefix "https://" into "https:/".
func joinPath(base string, rel string) string {
if rel == "" {
return base
}
if strings.HasSuffix(base, "/") {
return base + strings.TrimPrefix(rel, "/")
}
if strings.HasPrefix(rel, "/") {
return strings.TrimSuffix(base, "/") + rel
}
return base + "/" + rel
} }
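
joinPath exists because path.Join runs Clean on its result, which collapses the double slash in a URL scheme. A quick standard-library check of that behaviour:

package main

import (
	"fmt"
	"path"
)

func main() {
	// path.Join cleans the result, collapsing the "//" after the scheme.
	fmt.Println(path.Join("https://www.jottacloud.com/", "share/abc"))
	// Output: https:/www.jottacloud.com/share/abc
}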
// retryErrorCodes is a slice of error codes that we will retry // retryErrorCodes is a slice of error codes that we will retry
@@ -235,114 +379,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
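
The extra context parameter lets shouldRetry stop the retry loop as soon as the context is cancelled, via the fserrors.ContextError check shown in the hunk. A rough standard-library-only sketch of the same idea (retryable is an invented stand-in, not the rclone helper):

package main

import (
	"context"
	"errors"
	"fmt"
)

// retryable is a simplified stand-in for shouldRetry: a cancelled or timed-out
// context always stops the retry loop, regardless of the underlying error.
func retryable(ctx context.Context, err error) bool {
	if ctx.Err() != nil || errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return false
	}
	return err != nil // the real code also consults error types and HTTP status codes
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	fmt.Println(retryable(ctx, errors.New("network error"))) // false - context already cancelled
}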
func teliaCloudConfig(ctx context.Context, name string, m configmap.Mapper) {
teliaCloudOauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: teliaCloudAuthURL,
TokenURL: teliaCloudTokenURL,
},
ClientID: teliaCloudClientID,
Scopes: []string{"openid", "jotta-default", "offline_access"},
RedirectURL: oauthutil.RedirectLocalhostURL,
}
err := oauthutil.Config(ctx, "jottacloud", name, m, teliaCloudOauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, teliaCloudOauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
srv := rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(configVersion))
m.Set(configClientID, teliaCloudClientID)
m.Set(configTokenURL, teliaCloudTokenURL)
}
// v1config configure a jottacloud backend using legacy authentication
func v1config(ctx context.Context, name string, m configmap.Mapper) {
srv := rest.NewClient(fshttp.NewClient(ctx))
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n")
if config.Confirm(false) {
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil {
log.Fatalf("Failed to register device: %v", err)
}
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
}
clientID, ok := m.Get(configClientID)
if !ok {
clientID = v1ClientID
}
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = v1EncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
oauthConfig.Endpoint.AuthURL = v1tokenURL
oauthConfig.Endpoint.TokenURL = v1tokenURL
fmt.Printf("Username> ")
username := config.ReadLine()
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.")
token, err := doAuthV1(ctx, srv, username, password)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
}
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(v1configVersion))
}
// registerDevice register a new device for use with the jottacloud API // registerDevice register a new device for use with the jottacloud API
func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) { func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
// random generator to generate random device names // random generator to generate random device names
@@ -361,7 +404,7 @@ func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegis
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
RootURL: v1registerURL, RootURL: legacyRegisterURL,
ContentType: "application/x-www-form-urlencoded", ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"}, ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values, Parameters: values,
@@ -372,8 +415,13 @@ func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegis
return deviceRegistration, err return deviceRegistration, err
} }
// doAuthV1 runs the actual token request for V1 authentication var errAuthCodeRequired = errors.New("auth code required")
func doAuthV1(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) {
// doLegacyAuth runs the actual token request for V1 authentication
//
// Call this first with blank authCode. If errAuthCodeRequired is
// returned then call it again with an authCode
func doLegacyAuth(ctx context.Context, srv *rest.Client, oauthConfig *oauth2.Config, username, password, authCode string) (token oauth2.Token, err error) {
// prepare out token request with username and password // prepare out token request with username and password
values := url.Values{} values := url.Values{}
values.Set("grant_type", "PASSWORD") values.Set("grant_type", "PASSWORD")
@@ -387,22 +435,19 @@ func doAuthV1(ctx context.Context, srv *rest.Client, username, password string)
ContentType: "application/x-www-form-urlencoded", ContentType: "application/x-www-form-urlencoded",
Parameters: values, Parameters: values,
} }
if authCode != "" {
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
}
// do the first request // do the first request
var jsonToken api.TokenJSON var jsonToken api.TokenJSON
resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken) resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil { if err != nil && authCode == "" {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header // if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header
if resp != nil { if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" { if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n") return token, errAuthCodeRequired
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
_, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
} }
} }
} }
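
The rewritten doLegacyAuth no longer prompts inside the HTTP call; it returns errAuthCodeRequired so the config state machine can ask for the SMS code and call it again. A self-contained sketch of that call pattern (authenticate is a made-up stand-in for doLegacyAuth):

package main

import (
	"errors"
	"fmt"
)

var errAuthCodeRequired = errors.New("auth code required")

// authenticate stands in for doLegacyAuth: it fails with errAuthCodeRequired
// when called without a one-time code on a 2FA-enabled account.
func authenticate(password, authCode string) (string, error) {
	if authCode == "" {
		return "", errAuthCodeRequired
	}
	return "token-for-" + password, nil
}

func main() {
	// First attempt with a blank code, then retry once the user supplies one.
	token, err := authenticate("secret", "")
	if errors.Is(err, errAuthCodeRequired) {
		token, err = authenticate("secret", "123456")
	}
	fmt.Println(token, err)
}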
@@ -414,51 +459,11 @@ func doAuthV1(ctx context.Context, srv *rest.Client, username, password string)
return token, err return token, err
} }
// v2config configure a jottacloud backend using the modern JottaCli token based authentication // doTokenAuth runs the actual token request for V2 authentication
func v2config(ctx context.Context, name string, m configmap.Mapper) { func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 string) (token oauth2.Token, tokenEndpoint string, err error) {
srv := rest.NewClient(fshttp.NewClient(ctx))
fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n")
fmt.Printf("Login Token> ")
loginToken := config.ReadLine()
m.Set(configClientID, "jottacli")
m.Set(configClientSecret, "")
token, err := doAuthV2(ctx, srv, loginToken, m)
if err != nil {
log.Fatalf("Failed to get oauth token: %s", err)
}
err = oauthutil.PutToken(name, m, &token, true)
if err != nil {
log.Fatalf("Error while saving token: %s", err)
}
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
srv = rest.NewClient(oAuthClient).SetRoot(rootURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
device, mountpoint, err := setupMountpoint(ctx, srv, apiSrv)
if err != nil {
log.Fatalf("Failed to setup mountpoint: %s", err)
}
m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint)
}
m.Set("configVersion", strconv.Itoa(configVersion))
}
// doAuthV2 runs the actual token request for V2 authentication
func doAuthV2(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m configmap.Mapper) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64) loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64)
if err != nil { if err != nil {
return token, err return token, "", err
} }
// decode login token // decode login token
@@ -466,7 +471,7 @@ func doAuthV2(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes)) decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken) err = decoder.Decode(&loginToken)
if err != nil { if err != nil {
return token, err return token, "", err
} }
// retrieve endpoint urls // retrieve endpoint urls
@@ -475,19 +480,14 @@ func doAuthV2(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m
RootURL: loginToken.WellKnownLink, RootURL: loginToken.WellKnownLink,
} }
var wellKnown api.WellKnown var wellKnown api.WellKnown
_, err = srv.CallJSON(ctx, &opts, nil, &wellKnown) _, err = apiSrv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil { if err != nil {
return token, err return token, "", err
} }
// save the tokenurl
oauthConfig.Endpoint.AuthURL = wellKnown.TokenEndpoint
oauthConfig.Endpoint.TokenURL = wellKnown.TokenEndpoint
m.Set(configTokenURL, wellKnown.TokenEndpoint)
// prepare out token request with username and password // prepare out token request with username and password
values := url.Values{} values := url.Values{}
values.Set("client_id", "jottacli") values.Set("client_id", defaultClientID)
values.Set("grant_type", "password") values.Set("grant_type", "password")
values.Set("password", loginToken.AuthToken) values.Set("password", loginToken.AuthToken)
values.Set("scope", "offline_access+openid") values.Set("scope", "offline_access+openid")
@@ -495,68 +495,33 @@ func doAuthV2(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m
values.Encode() values.Encode()
opts = rest.Opts{ opts = rest.Opts{
Method: "POST", Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL, RootURL: wellKnown.TokenEndpoint,
ContentType: "application/x-www-form-urlencoded", ContentType: "application/x-www-form-urlencoded",
Body: strings.NewReader(values.Encode()), Body: strings.NewReader(values.Encode()),
} }
// do the first request // do the first request
var jsonToken api.TokenJSON var jsonToken api.TokenJSON
_, err = srv.CallJSON(ctx, &opts, nil, &jsonToken) _, err = apiSrv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil { if err != nil {
return token, err return token, "", err
} }
token.AccessToken = jsonToken.AccessToken token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second) token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
return token, err return token, wellKnown.TokenEndpoint, err
}
// setupMountpoint sets up a custom device and mountpoint if desired by the user
func setupMountpoint(ctx context.Context, srv *rest.Client, apiSrv *rest.Client) (device, mountpoint string, err error) {
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return "", "", err
}
acc, err := getDriveInfo(ctx, srv, cust.Username)
if err != nil {
return "", "", err
}
var deviceNames []string
for i := range acc.Devices {
deviceNames = append(deviceNames, acc.Devices[i].Name)
}
fmt.Printf("Please select the device to use. Normally this will be Jotta\n")
device = config.Choose("Devices", deviceNames, nil, false)
dev, err := getDeviceInfo(ctx, srv, path.Join(cust.Username, device))
if err != nil {
return "", "", err
}
if len(dev.MountPoints) == 0 {
return "", "", errors.New("no mountpoints for selected device")
}
var mountpointNames []string
for i := range dev.MountPoints {
mountpointNames = append(mountpointNames, dev.MountPoints[i].Name)
}
fmt.Printf("Please select the mountpoint to user. Normally this will be Archive\n")
mountpoint = config.Choose("Mountpoints", mountpointNames, nil, false)
return device, mountpoint, err
} }
// getCustomerInfo queries general information about the account // getCustomerInfo queries general information about the account
func getCustomerInfo(ctx context.Context, srv *rest.Client) (info *api.CustomerInfo, err error) { func getCustomerInfo(ctx context.Context, apiSrv *rest.Client) (info *api.CustomerInfo, err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "GET", Method: "GET",
Path: "account/v1/customer", Path: "account/v1/customer",
} }
_, err = srv.CallJSON(ctx, &opts, nil, &info) _, err = apiSrv.CallJSON(ctx, &opts, nil, &info)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't get customer info") return nil, errors.Wrap(err, "couldn't get customer info")
} }
@@ -615,7 +580,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Jo
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
@@ -673,7 +638,7 @@ func (f *Fs) filePath(file string) string {
// This filter catches all refresh requests, reads the body, // This filter catches all refresh requests, reads the body,
// changes the case and then sends it on // changes the case and then sends it on
func grantTypeFilter(req *http.Request) { func grantTypeFilter(req *http.Request) {
if v1tokenURL == req.URL.String() { if legacyTokenURL == req.URL.String() {
// read the entire body // read the entire body
refreshBody, err := ioutil.ReadAll(req.Body) refreshBody, err := ioutil.ReadAll(req.Body)
if err != nil { if err != nil {
@@ -689,53 +654,50 @@ func grantTypeFilter(req *http.Request) {
} }
} }
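
grantTypeFilter rewrites the body of token-refresh requests aimed at the legacy endpoint before they are sent. The hunks above only show the start of the filter, so the concrete strings below are placeholders; this is just a generic standard-library sketch of reading, editing and re-attaching a request body:

package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// rewriteBody reads and replaces a request body, then re-attaches it so the
// request can still be sent. ContentLength must be updated to match.
func rewriteBody(req *http.Request, old, replacement string) error {
	body, err := ioutil.ReadAll(req.Body)
	if err != nil {
		return err
	}
	_ = req.Body.Close()
	rewritten := strings.Replace(string(body), old, replacement, 1)
	req.Body = ioutil.NopCloser(bytes.NewBufferString(rewritten))
	req.ContentLength = int64(len(rewritten))
	return nil
}

func main() {
	req, _ := http.NewRequest("POST", "https://example.com/token",
		strings.NewReader("grant_type=refresh_token"))
	// Placeholder strings - the exact case change rclone applies is not shown above.
	_ = rewriteBody(req, "grant_type", "GRANT_TYPE")
	b, _ := ioutil.ReadAll(req.Body)
	fmt.Println(string(b))
}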
// NewFs constructs an Fs from the path, container:path func getOAuthClient(ctx context.Context, name string, m configmap.Mapper) (oAuthClient *http.Client, ts *oauthutil.TokenSource, err error) {
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Check config version // Check config version
var ver int var ver int
version, ok := m.Get("configVersion") version, ok := m.Get("configVersion")
if ok { if ok {
ver, err = strconv.Atoi(version) ver, err = strconv.Atoi(version)
if err != nil { if err != nil {
return nil, errors.New("Failed to parse config version") return nil, nil, errors.New("Failed to parse config version")
} }
ok = (ver == configVersion) || (ver == v1configVersion) ok = (ver == configVersion) || (ver == legacyConfigVersion)
} }
if !ok { if !ok {
return nil, errors.New("Outdated config - please reconfigure this backend") return nil, nil, errors.New("Outdated config - please reconfigure this backend")
} }
baseClient := fshttp.NewClient(ctx) baseClient := fshttp.NewClient(ctx)
oauthConfig := &oauth2.Config{
Endpoint: oauth2.Endpoint{
AuthURL: defaultTokenURL,
TokenURL: defaultTokenURL,
},
}
if ver == configVersion { if ver == configVersion {
oauthConfig.ClientID = "jottacli" oauthConfig.ClientID = defaultClientID
// if custom endpoints are set use them else stick with defaults // if custom endpoints are set use them else stick with defaults
if tokenURL, ok := m.Get(configTokenURL); ok { if tokenURL, ok := m.Get(configTokenURL); ok {
oauthConfig.Endpoint.TokenURL = tokenURL oauthConfig.Endpoint.TokenURL = tokenURL
// jottacloud is weird. we need to use the tokenURL as authURL // jottacloud is weird. we need to use the tokenURL as authURL
oauthConfig.Endpoint.AuthURL = tokenURL oauthConfig.Endpoint.AuthURL = tokenURL
} }
} else if ver == v1configVersion { } else if ver == legacyConfigVersion {
clientID, ok := m.Get(configClientID) clientID, ok := m.Get(configClientID)
if !ok { if !ok {
clientID = v1ClientID clientID = legacyClientID
} }
clientSecret, ok := m.Get(configClientSecret) clientSecret, ok := m.Get(configClientSecret)
if !ok { if !ok {
clientSecret = v1EncryptedClientSecret clientSecret = legacyEncryptedClientSecret
} }
oauthConfig.ClientID = clientID oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret) oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
oauthConfig.Endpoint.TokenURL = v1tokenURL oauthConfig.Endpoint.TokenURL = legacyTokenURL
oauthConfig.Endpoint.AuthURL = v1tokenURL oauthConfig.Endpoint.AuthURL = legacyTokenURL
// add the request filter to fix token refresh // add the request filter to fix token refresh
if do, ok := baseClient.Transport.(interface { if do, ok := baseClient.Transport.(interface {
@@ -748,13 +710,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
// Create OAuth Client // Create OAuth Client
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(ctx, name, m, oauthConfig, baseClient) oAuthClient, ts, err = oauthutil.NewClientWithBaseClient(ctx, name, m, oauthConfig, baseClient)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client") return nil, nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
}
return oAuthClient, ts, nil
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
oAuthClient, ts, err := getOAuthClient(ctx, name, m)
if err != nil {
return nil, err
} }
rootIsDir := strings.HasSuffix(root, "/") rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root) root = strings.Trim(root, "/")
f := &Fs{ f := &Fs{
name: name, name: name,
@@ -854,7 +832,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, e
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &jf) resp, err = f.srv.CallXML(ctx, &opts, nil, &jf)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -883,7 +861,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var result api.JottaFolder var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -995,7 +973,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
@@ -1101,7 +1079,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "couldn't purge directory") return errors.Wrap(err, "couldn't purge directory")
@@ -1140,7 +1118,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *ap
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &info) resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1268,7 +1246,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var result api.JottaFile var result api.JottaFile
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
@@ -1292,8 +1270,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
if result.PublicSharePath == "" { if result.PublicSharePath == "" {
return "", errors.New("couldn't create public link - no link path received") return "", errors.New("couldn't create public link - no link path received")
} }
link = path.Join(baseURL, result.PublicSharePath) return joinPath(baseURL, result.PublicSharePath), nil
return link, nil
} }
// About gets quota information // About gets quota information
@@ -1446,7 +1423,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1559,7 +1536,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var response api.AllocateFileResponse var response api.AllocateFileResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response) resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1624,7 +1601,7 @@ func (o *Object) Remove(ctx context.Context) error {
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil) resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }

View File

@@ -534,7 +534,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil return nil
} }
// About reports space usage (with a MB precision) // About reports space usage (with a MiB precision)
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
mount, err := f.client.MountsDetails(f.mountID) mount, err := f.client.MountsDetails(f.mountID)
if err != nil { if err != nil {

View File

@@ -27,6 +27,7 @@ import (
"github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/file" "github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/readers"
"golang.org/x/text/unicode/norm"
) )
// Constants // Constants
@@ -42,8 +43,9 @@ func init() {
NewFs: NewFs, NewFs: NewFs,
CommandHelp: commandHelp, CommandHelp: commandHelp,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "nounc", Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows", Help: "Disable UNC (long path names) conversion on Windows",
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "true", Value: "true",
Help: "Disables long file names", Help: "Disables long file names",
@@ -72,25 +74,34 @@ points, as you explicitly acknowledge that they should be skipped.`,
Advanced: true, Advanced: true,
}, { }, {
Name: "zero_size_links", Name: "zero_size_links",
Help: `Assume the Stat size of links is zero (and read them instead) Help: `Assume the Stat size of links is zero (and read them instead) (Deprecated)
On some virtual filesystems (such ash LucidLink), reading a link size via a Stat call always returns 0. Rclone used to use the Stat size of links as the link size, but this fails in quite a few places
However, on unix it reads as the length of the text in the link. This may cause errors like this when
syncing:
Failed to copy: corrupted on transfer: sizes differ 0 vs 13 - Windows
- On some virtual filesystems (such as LucidLink)
- Android
Setting this flag causes rclone to read the link and use that as the size of the link So rclone now always reads the link
instead of 0 which in most cases fixes the problem.`, `,
Default: false, Default: false,
Advanced: true, Advanced: true,
}, { }, {
Name: "no_unicode_normalization", Name: "unicode_normalization",
Help: `Don't apply unicode normalization to paths and filenames (Deprecated) Help: `Apply unicode NFC normalization to paths and filenames
This flag is deprecated now. Rclone no longer normalizes unicode file This flag can be used to normalize file names into unicode NFC form
names, but it compares them with unicode normalization in the sync that are read from the local filesystem.
routine instead.`,
Rclone does not normally touch the encoding of file names it reads from
the file system.
This can be useful when using macOS as it normally provides decomposed (NFD)
unicode which in some language (eg Korean) doesn't display properly on
some OSes.
Note that rclone compares filenames with unicode normalization in the sync
routine so this flag shouldn't normally be used.`,
Default: false, Default: false,
Advanced: true, Advanced: true,
}, { }, {
@@ -148,6 +159,17 @@ Windows/macOS and case sensitive for everything else. Use this flag
to override the default choice.`, to override the default choice.`,
Default: false, Default: false,
Advanced: true, Advanced: true,
}, {
Name: "no_preallocate",
Help: `Disable preallocation of disk space for transferred files
Preallocation of disk space helps prevent filesystem fragmentation.
However, some virtual filesystem layers (such as Google Drive File
Stream) may incorrectly set the actual file size equal to the
preallocated space, causing checksum and file size checks to fail.
Use this flag to disable preallocation.`,
Default: false,
Advanced: true,
}, { }, {
Name: "no_sparse", Name: "no_sparse",
Help: `Disable sparse files for multi-thread downloads Help: `Disable sparse files for multi-thread downloads
@@ -184,13 +206,13 @@ type Options struct {
FollowSymlinks bool `config:"copy_links"` FollowSymlinks bool `config:"copy_links"`
TranslateSymlinks bool `config:"links"` TranslateSymlinks bool `config:"links"`
SkipSymlinks bool `config:"skip_links"` SkipSymlinks bool `config:"skip_links"`
ZeroSizeLinks bool `config:"zero_size_links"` UTFNorm bool `config:"unicode_normalization"`
NoUTFNorm bool `config:"no_unicode_normalization"`
NoCheckUpdated bool `config:"no_check_updated"` NoCheckUpdated bool `config:"no_check_updated"`
NoUNC bool `config:"nounc"` NoUNC bool `config:"nounc"`
OneFileSystem bool `config:"one_file_system"` OneFileSystem bool `config:"one_file_system"`
CaseSensitive bool `config:"case_sensitive"` CaseSensitive bool `config:"case_sensitive"`
CaseInsensitive bool `config:"case_insensitive"` CaseInsensitive bool `config:"case_insensitive"`
NoPreAllocate bool `config:"no_preallocate"`
NoSparse bool `config:"no_sparse"` NoSparse bool `config:"no_sparse"`
NoSetModTime bool `config:"no_set_modtime"` NoSetModTime bool `config:"no_set_modtime"`
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
@@ -243,10 +265,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, errLinksAndCopyLinks return nil, errLinksAndCopyLinks
} }
if opt.NoUTFNorm {
fs.Errorf(nil, "The --local-no-unicode-normalization flag is deprecated and will be removed")
}
f := &Fs{ f := &Fs{
name: name, name: name,
opt: *opt, opt: *opt,
@@ -509,6 +527,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
} }
func (f *Fs) cleanRemote(dir, filename string) (remote string) { func (f *Fs) cleanRemote(dir, filename string) (remote string) {
if f.opt.UTFNorm {
filename = norm.NFC.String(filename)
}
remote = path.Join(dir, f.opt.Enc.ToStandardName(filename)) remote = path.Join(dir, f.opt.Enc.ToStandardName(filename))
if !utf8.ValidString(filename) { if !utf8.ValidString(filename) {
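
When unicode_normalization is set, cleanRemote converts file names read from the local filesystem to NFC before encoding them. A small example of what norm.NFC.String does to a decomposed name of the kind macOS typically returns:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfd := "cafe\u0301" // "café" with a combining acute accent (NFD form)
	nfc := norm.NFC.String(nfd)
	fmt.Println(len(nfd), len(nfc))                       // 6 5 - the composed form is one byte shorter
	fmt.Println(nfd == "caf\u00e9", nfc == "caf\u00e9")   // false true
}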
@@ -1127,10 +1148,16 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err return err
} }
} }
// Pre-allocate the file for performance reasons if !o.fs.opt.NoPreAllocate {
err = file.PreAllocate(src.Size(), f) // Pre-allocate the file for performance reasons
if err != nil { err = file.PreAllocate(src.Size(), f)
fs.Debugf(o, "Failed to pre-allocate: %v", err) if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
if err == file.ErrDiskFull {
_ = f.Close()
return err
}
}
} }
out = f out = f
} else { } else {
@@ -1217,9 +1244,11 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, err return nil, err
} }
// Pre-allocate the file for performance reasons // Pre-allocate the file for performance reasons
err = file.PreAllocate(size, out) if !f.opt.NoPreAllocate {
if err != nil { err = file.PreAllocate(size, out)
fs.Debugf(o, "Failed to pre-allocate: %v", err) if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
}
} }
if !f.opt.NoSparse && file.SetSparseImplemented { if !f.opt.NoSparse && file.SetSparseImplemented {
sparseWarning.Do(func() { sparseWarning.Do(func() {
@@ -1246,9 +1275,13 @@ func (o *Object) setMetadata(info os.FileInfo) {
o.modTime = info.ModTime() o.modTime = info.ModTime()
o.mode = info.Mode() o.mode = info.Mode()
o.fs.objectMetaMu.Unlock() o.fs.objectMetaMu.Unlock()
// On Windows links read as 0 size so set the correct size here // Read the size of the link.
// Optionally, users can turn this feature on with the zero_size_links flag //
if (runtime.GOOS == "windows" || o.fs.opt.ZeroSizeLinks) && o.translatedLink { // The value in info.Size() is not always correct
// - Windows links read as 0 size
// - Some virtual filesystems (such as LucidLink) links read as 0 size
// - Android - some versions the links are larger than readlink suggests
if o.translatedLink {
linkdst, err := os.Readlink(o.path) linkdst, err := os.Readlink(o.path)
if err != nil { if err != nil {
fs.Errorf(o, "Failed to read link size: %v", err) fs.Errorf(o, "Failed to read link size: %v", err)
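
As the new comment says, the size reported by Stat/Lstat for a symlink is not reliable everywhere, so the size of a translated link is now taken from the link target itself. A standalone illustration of that measurement (the temporary paths are invented for the example):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

func main() {
	dir, err := ioutil.TempDir("", "linksize")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	link := filepath.Join(dir, "link")
	if err := os.Symlink("some/target/path", link); err != nil {
		panic(err)
	}

	fi, _ := os.Lstat(link)
	target, _ := os.Readlink(link)

	// Lstat's Size() for a symlink is 0 on some platforms and virtual
	// filesystems; len(target) is what the new code uses instead.
	fmt.Println(fi.Size(), len(target))
}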

View File

@@ -6,8 +6,8 @@ import (
"bufio" "bufio"
"bytes" "bytes"
"encoding/binary" "encoding/binary"
"fmt"
"io" "io"
"log"
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
@@ -48,7 +48,7 @@ func (w *BinWriter) Reader() io.Reader {
// WritePu16 writes a short as unsigned varint // WritePu16 writes a short as unsigned varint
func (w *BinWriter) WritePu16(val int) { func (w *BinWriter) WritePu16(val int) {
if val < 0 || val > 65535 { if val < 0 || val > 65535 {
log.Fatalf("Invalid UInt16 %v", val) panic(fmt.Sprintf("Invalid UInt16 %v", val))
} }
w.WritePu64(int64(val)) w.WritePu64(int64(val))
} }
@@ -56,7 +56,7 @@ func (w *BinWriter) WritePu16(val int) {
// WritePu32 writes a signed long as unsigned varint // WritePu32 writes a signed long as unsigned varint
func (w *BinWriter) WritePu32(val int64) { func (w *BinWriter) WritePu32(val int64) {
if val < 0 || val > 4294967295 { if val < 0 || val > 4294967295 {
log.Fatalf("Invalid UInt32 %v", val) panic(fmt.Sprintf("Invalid UInt32 %v", val))
} }
w.WritePu64(val) w.WritePu64(val)
} }
@@ -64,7 +64,7 @@ func (w *BinWriter) WritePu32(val int64) {
// WritePu64 writes an unsigned (actually, signed) long as unsigned varint // WritePu64 writes an unsigned (actually, signed) long as unsigned varint
func (w *BinWriter) WritePu64(val int64) { func (w *BinWriter) WritePu64(val int64) {
if val < 0 { if val < 0 {
log.Fatalf("Invalid UInt64 %v", val) panic(fmt.Sprintf("Invalid UInt64 %v", val))
} }
w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))]) w.b.Write(w.a[:binary.PutUvarint(w.a, uint64(val))])
} }
@@ -123,7 +123,7 @@ func (r *BinReader) check(err error) bool {
r.err = err r.err = err
} }
if err != io.EOF { if err != io.EOF {
log.Fatalf("Error parsing response: %v", err) panic(fmt.Sprintf("Error parsing response: %v", err))
} }
return false return false
} }
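
For context on the hunks above: WritePu16/WritePu32/WritePu64 encode values with binary.PutUvarint, and invalid inputs now panic (which a caller can recover from) instead of calling log.Fatalf (which exits the whole process). A small demonstration of the varint encoding involved:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, 300) // encode 300 as an unsigned varint
	fmt.Println(buf[:n])             // [172 2] - two bytes

	v, _ := binary.Uvarint(buf[:n]) // decode it back
	fmt.Println(v)                  // 300
}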

View File

@@ -80,7 +80,7 @@ var oauthConfig = &oauth2.Config{
// Register with Fs // Register with Fs
func init() { func init() {
MrHashType = hash.RegisterHash("MailruHash", 40, mrhash.New) MrHashType = hash.RegisterHash("mailru", "MailruHash", 40, mrhash.New)
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
Name: "mailru", Name: "mailru",
Description: "Mail.ru Cloud", Description: "Mail.ru Cloud",
@@ -234,7 +234,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this response and err // shouldRetry returns a boolean as to whether this response and err
// deserve to be retried. It returns the err as a convenience. // deserve to be retried. It returns the err as a convenience.
// Retries password authorization (once) in a special case of access denied. // Retries password authorization (once) in a special case of access denied.
func shouldRetry(res *http.Response, err error, f *Fs, opts *rest.Opts) (bool, error) { func shouldRetry(ctx context.Context, res *http.Response, err error, f *Fs, opts *rest.Opts) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if res != nil && res.StatusCode == 403 && f.opt.Password != "" && !f.passFailed { if res != nil && res.StatusCode == 403 && f.opt.Password != "" && !f.passFailed {
reAuthErr := f.reAuthorize(opts, err) reAuthErr := f.reAuthorize(opts, err)
return reAuthErr == nil, err // return an original error return reAuthErr == nil, err // return an original error
@@ -600,7 +603,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
var info api.ItemInfoResponse var info api.ItemInfoResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &info) res, err := f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
@@ -736,7 +739,7 @@ func (f *Fs) listM1(ctx context.Context, dirPath string, offset int, limit int)
) )
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.CallJSON(ctx, &opts, nil, &info) res, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
@@ -800,7 +803,7 @@ func (f *Fs) listBin(ctx context.Context, dirPath string, depth int) (entries fs
var res *http.Response var res *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts) res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
closeBody(res) closeBody(res)
@@ -1073,7 +1076,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) error {
var res *http.Response var res *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts) res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
closeBody(res) closeBody(res)
@@ -1216,7 +1219,7 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) error {
var response api.GenericResponse var response api.GenericResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response) res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
switch { switch {
@@ -1288,7 +1291,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var response api.GenericBodyResponse var response api.GenericBodyResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response) res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
@@ -1392,7 +1395,7 @@ func (f *Fs) moveItemBin(ctx context.Context, srcPath, dstPath, opName string) e
var res *http.Response var res *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts) res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
closeBody(res) closeBody(res)
@@ -1483,7 +1486,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var response api.GenericBodyResponse var response api.GenericBodyResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response) res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err == nil && response.Body != "" { if err == nil && response.Body != "" {
@@ -1524,7 +1527,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
var response api.CleanupResponse var response api.CleanupResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response) res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
return err return err
@@ -1557,7 +1560,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var info api.UserInfoResponse var info api.UserInfoResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &info) res, err := f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts) return shouldRetry(ctx, res, err, f, &opts)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -2076,7 +2079,7 @@ func (o *Object) addFileMetaData(ctx context.Context, overwrite bool) error {
var res *http.Response var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts) res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(res, err, o.fs, &opts) return shouldRetry(ctx, res, err, o.fs, &opts)
}) })
if err != nil { if err != nil {
closeBody(res) closeBody(res)
@@ -2172,7 +2175,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
opts.RootURL = server opts.RootURL = server
res, err = o.fs.srv.Call(ctx, &opts) res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(res, err, o.fs, &opts) return shouldRetry(ctx, res, err, o.fs, &opts)
}) })
if err != nil { if err != nil {
if res != nil && res.Body != nil { if res != nil && res.Body != nil {

View File

@@ -30,6 +30,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder" "github.com/rclone/rclone/lib/encoder"
@@ -158,7 +159,10 @@ func parsePath(path string) (root string) {
// shouldRetry returns a boolean as to whether this err deserves to be // shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience // retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) { func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Let the mega library handle the low level retries // Let the mega library handle the low level retries
return false, err return false, err
/* /*
@@ -171,8 +175,8 @@ func shouldRetry(err error) (bool, error) {
} }
// readMetaDataForPath reads the metadata from the path // readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(remote string) (info *mega.Node, err error) { func (f *Fs) readMetaDataForPath(ctx context.Context, remote string) (info *mega.Node, err error) {
rootNode, err := f.findRoot(false) rootNode, err := f.findRoot(ctx, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -237,7 +241,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}).Fill(ctx, f) }).Fill(ctx, f)
// Find the root node and check if it is a file or not // Find the root node and check if it is a file or not
_, err = f.findRoot(false) _, err = f.findRoot(ctx, false)
switch err { switch err {
case nil: case nil:
// root node found and is a directory // root node found and is a directory
@@ -307,8 +311,8 @@ func (f *Fs) findObject(rootNode *mega.Node, file string) (node *mega.Node, err
// lookupDir looks up the node for the directory of the name given // lookupDir looks up the node for the directory of the name given
// //
// if create is true it tries to create the root directory if not found // if create is true it tries to create the root directory if not found
func (f *Fs) lookupDir(dir string) (*mega.Node, error) { func (f *Fs) lookupDir(ctx context.Context, dir string) (*mega.Node, error) {
rootNode, err := f.findRoot(false) rootNode, err := f.findRoot(ctx, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -316,15 +320,15 @@ func (f *Fs) lookupDir(dir string) (*mega.Node, error) {
} }
// lookupParentDir finds the parent node for the remote passed in // lookupParentDir finds the parent node for the remote passed in
func (f *Fs) lookupParentDir(remote string) (dirNode *mega.Node, leaf string, err error) { func (f *Fs) lookupParentDir(ctx context.Context, remote string) (dirNode *mega.Node, leaf string, err error) {
parent, leaf := path.Split(remote) parent, leaf := path.Split(remote)
dirNode, err = f.lookupDir(parent) dirNode, err = f.lookupDir(ctx, parent)
return dirNode, leaf, err return dirNode, leaf, err
} }
// mkdir makes the directory and any parent directories for the // mkdir makes the directory and any parent directories for the
// directory of the name given // directory of the name given
func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error) { func (f *Fs) mkdir(ctx context.Context, rootNode *mega.Node, dir string) (node *mega.Node, err error) {
f.mkdirMu.Lock() f.mkdirMu.Lock()
defer f.mkdirMu.Unlock() defer f.mkdirMu.Unlock()
@@ -358,7 +362,7 @@ func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error)
// create directory called name in node // create directory called name in node
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
node, err = f.srv.CreateDir(name, node) node, err = f.srv.CreateDir(name, node)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "mkdir create node failed") return nil, errors.Wrap(err, "mkdir create node failed")
@@ -368,20 +372,20 @@ func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error)
}
// mkdirParent creates the parent directory of remote
-func (f *Fs) mkdirParent(remote string) (dirNode *mega.Node, leaf string, err error) {
+func (f *Fs) mkdirParent(ctx context.Context, remote string) (dirNode *mega.Node, leaf string, err error) {
-rootNode, err := f.findRoot(true)
+rootNode, err := f.findRoot(ctx, true)
if err != nil {
return nil, "", err
}
parent, leaf := path.Split(remote)
-dirNode, err = f.mkdir(rootNode, parent)
+dirNode, err = f.mkdir(ctx, rootNode, parent)
return dirNode, leaf, err
}
// findRoot looks up the root directory node and returns it.
//
// if create is true it tries to create the root directory if not found
-func (f *Fs) findRoot(create bool) (*mega.Node, error) {
+func (f *Fs) findRoot(ctx context.Context, create bool) (*mega.Node, error) {
f.rootNodeMu.Lock()
defer f.rootNodeMu.Unlock()
@@ -403,7 +407,7 @@ func (f *Fs) findRoot(create bool) (*mega.Node, error) {
}
//..not found so create the root directory
-f._rootNode, err = f.mkdir(absRoot, f.root)
+f._rootNode, err = f.mkdir(ctx, absRoot, f.root)
return f._rootNode, err
}
@@ -433,7 +437,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
fs.Debugf(f, "Deleting trash %q", f.opt.Enc.ToStandardName(item.GetName()))
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if deleteErr != nil {
err = deleteErr
@@ -447,7 +451,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *mega.Node) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -457,7 +461,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error
// Set info
err = o.setMetaData(info)
} else {
-err = o.readMetaData() // reads info and meta, returning an error
+err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -468,7 +472,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-return f.newObjectWithInfo(remote, nil)
+return f.newObjectWithInfo(ctx, remote, nil)
}
// list the objects into the function supplied
@@ -506,7 +510,7 @@ func (f *Fs) list(ctx context.Context, dir *mega.Node, fn listFn) (found bool, e
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-dirNode, err := f.lookupDir(dir)
+dirNode, err := f.lookupDir(ctx, dir)
if err != nil {
return nil, err
}
@@ -518,7 +522,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
d := fs.NewDir(remote, info.GetTimeStamp()).SetID(info.GetHash())
entries = append(entries, d)
case mega.FILE:
-o, err := f.newObjectWithInfo(remote, info)
+o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
@@ -542,8 +546,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Returns the dirNode, object, leaf and error
//
// Used to create new objects
-func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
+func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
-dirNode, leaf, err = f.mkdirParent(remote)
+dirNode, leaf, err = f.mkdirParent(ctx, remote)
if err != nil {
return nil, nil, leaf, err
}
@@ -565,7 +569,7 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
-existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
+existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
@@ -591,7 +595,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
size := src.Size()
modTime := src.ModTime(ctx)
-o, _, _, err := f.createObject(remote, modTime, size)
+o, _, _, err := f.createObject(ctx, remote, modTime, size)
if err != nil {
return nil, err
}
@@ -600,30 +604,30 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
-rootNode, err := f.findRoot(true)
+rootNode, err := f.findRoot(ctx, true)
if err != nil {
return err
}
-_, err = f.mkdir(rootNode, dir)
+_, err = f.mkdir(ctx, rootNode, dir)
return errors.Wrap(err, "Mkdir failed")
}
// deleteNode removes a file or directory, observing useTrash
-func (f *Fs) deleteNode(node *mega.Node) (err error) {
+func (f *Fs) deleteNode(ctx context.Context, node *mega.Node) (err error) {
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Delete(node, f.opt.HardDelete)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
return err
}
// purgeCheck removes the directory dir, if check is set then it
// refuses to do so if it has anything in
-func (f *Fs) purgeCheck(dir string, check bool) error {
+func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
f.mkdirMu.Lock()
defer f.mkdirMu.Unlock()
-rootNode, err := f.findRoot(false)
+rootNode, err := f.findRoot(ctx, false)
if err != nil {
return err
}
@@ -644,7 +648,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
waitEvent := f.srv.WaitEventsStart()
-err = f.deleteNode(dirNode)
+err = f.deleteNode(ctx, dirNode)
if err != nil {
return errors.Wrap(err, "delete directory node failed")
}
@@ -662,7 +666,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
-return f.purgeCheck(dir, true)
+return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
@@ -676,13 +680,13 @@ func (f *Fs) Precision() time.Duration {
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
-return f.purgeCheck(dir, false)
+return f.purgeCheck(ctx, dir, false)
}
// move a file or folder (srcFs, srcRemote, info) to (f, dstRemote)
//
// info will be updates
-func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node) (err error) {
+func (f *Fs) move(ctx context.Context, dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node) (err error) {
var (
dstFs = f
srcDirNode, dstDirNode *mega.Node
@@ -692,12 +696,12 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if dstRemote != "" {
// lookup or create the destination parent directory
-dstDirNode, dstLeaf, err = dstFs.mkdirParent(dstRemote)
+dstDirNode, dstLeaf, err = dstFs.mkdirParent(ctx, dstRemote)
} else {
// find or create the parent of the root directory
absRoot := dstFs.srv.FS.GetRoot()
dstParent, dstLeaf = path.Split(dstFs.root)
-dstDirNode, err = dstFs.mkdir(absRoot, dstParent)
+dstDirNode, err = dstFs.mkdir(ctx, absRoot, dstParent)
}
if err != nil {
return errors.Wrap(err, "server-side move failed to make dst parent dir")
@@ -705,7 +709,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if srcRemote != "" {
// lookup the existing parent directory
-srcDirNode, srcLeaf, err = srcFs.lookupParentDir(srcRemote)
+srcDirNode, srcLeaf, err = srcFs.lookupParentDir(ctx, srcRemote)
} else {
// lookup the existing root parent
absRoot := srcFs.srv.FS.GetRoot()
@@ -721,7 +725,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
//log.Printf("move src %p %q dst %p %q", srcDirNode, srcDirNode.GetName(), dstDirNode, dstDirNode.GetName())
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "server-side move failed")
@@ -735,7 +739,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
//log.Printf("rename %q to %q", srcLeaf, dstLeaf)
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Rename(info, f.opt.Enc.FromStandardName(dstLeaf))
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "server-side rename failed")
@@ -767,7 +771,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Do the move
-err := f.move(remote, srcObj.fs, srcObj.remote, srcObj.info)
+err := f.move(ctx, remote, srcObj.fs, srcObj.remote, srcObj.info)
if err != nil {
return nil, err
}
@@ -798,13 +802,13 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// find the source
-info, err := srcFs.lookupDir(srcRemote)
+info, err := srcFs.lookupDir(ctx, srcRemote)
if err != nil {
return err
}
// check the destination doesn't exist
-_, err = dstFs.lookupDir(dstRemote)
+_, err = dstFs.lookupDir(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
} else if err != fs.ErrorDirNotFound {
@@ -812,7 +816,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// Do the move
-err = f.move(dstRemote, srcFs, srcRemote, info)
+err = f.move(ctx, dstRemote, srcFs, srcRemote, info)
if err != nil {
return err
}
@@ -838,7 +842,7 @@ func (f *Fs) Hashes() hash.Set {
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) {
-root, err := f.findRoot(false)
+root, err := f.findRoot(ctx, false)
if err != nil {
return "", errors.Wrap(err, "PublicLink failed to find root node")
}
@@ -886,7 +890,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
fs.Infof(srcDir, "merging %q", f.opt.Enc.ToStandardName(info.GetName()))
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", f.opt.Enc.ToStandardName(info.GetName()), srcDir)
@@ -894,7 +898,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
// rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory")
-err = f.deleteNode(srcDirNode)
+err = f.deleteNode(ctx, srcDirNode)
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
}
@@ -908,7 +912,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
q, err = f.srv.GetQuota()
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get Mega Quota")
@@ -963,11 +967,11 @@ func (o *Object) setMetaData(info *mega.Node) (err error) {
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
-func (o *Object) readMetaData() (err error) {
+func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.info != nil {
return nil
}
-info, err := o.fs.readMetaDataForPath(o.remote)
+info, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if err == fs.ErrorDirNotFound {
err = fs.ErrorObjectNotFound
@@ -998,6 +1002,7 @@ func (o *Object) Storable() bool {
// openObject represents a download in progress
type openObject struct {
+ctx context.Context
mu sync.Mutex
o *Object
d *mega.Download
@@ -1008,14 +1013,14 @@ type openObject struct {
}
// get the next chunk
-func (oo *openObject) getChunk() (err error) {
+func (oo *openObject) getChunk(ctx context.Context) (err error) {
if oo.id >= oo.d.Chunks() {
return io.EOF
}
var chunk []byte
err = oo.o.fs.pacer.Call(func() (bool, error) {
chunk, err = oo.d.DownloadChunk(oo.id)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1045,7 +1050,7 @@ func (oo *openObject) Read(p []byte) (n int, err error) {
oo.skip -= int64(size)
}
if len(oo.chunk) == 0 {
-err = oo.getChunk()
+err = oo.getChunk(oo.ctx)
if err != nil {
return 0, err
}
@@ -1068,7 +1073,7 @@ func (oo *openObject) Close() (err error) {
}
err = oo.o.fs.pacer.Call(func() (bool, error) {
err = oo.d.Finish()
-return shouldRetry(err)
+return shouldRetry(oo.ctx, err)
})
if err != nil {
return errors.Wrap(err, "failed to finish download")
@@ -1096,13 +1101,14 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var d *mega.Download
err = o.fs.pacer.Call(func() (bool, error) {
d, err = o.fs.srv.NewDownload(o.info)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "open download file failed")
}
oo := &openObject{
+ctx: ctx,
o: o,
d: d,
skip: offset,
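One detail in the Open hunk deserves a note: the context is copied into the openObject because Read and Close implement io.Reader and io.Closer, whose fixed signatures cannot accept a context. An editor's stripped-down sketch of the pattern, with invented names:

package sketch // editor's illustrative sketch, not part of the diff

import (
	"context"
	"io"
)

// download stands in for openObject: it keeps the context captured in Open so
// that later Read/Close calls can still honour cancellation.
type download struct {
	ctx context.Context
}

func (d *download) Read(p []byte) (int, error) {
	// Check the saved context before doing more work on the caller's behalf.
	if err := d.ctx.Err(); err != nil {
		return 0, err
	}
	// ... fetch the next chunk here, retrying via the pacer with d.ctx ...
	return 0, io.EOF
}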
@@ -1125,7 +1131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
remote := o.Remote()
// Create the parent directory
-dirNode, leaf, err := o.fs.mkdirParent(remote)
+dirNode, leaf, err := o.fs.mkdirParent(ctx, remote)
if err != nil {
return errors.Wrap(err, "update make parent dir failed")
}
@@ -1133,7 +1139,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var u *mega.Upload
err = o.fs.pacer.Call(func() (bool, error) {
u, err = o.fs.srv.NewUpload(dirNode, o.fs.opt.Enc.FromStandardName(leaf), size)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "upload file failed to create session")
@@ -1154,7 +1160,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.Call(func() (bool, error) {
err = u.UploadChunk(id, chunk)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "upload file failed to upload chunk")
@@ -1165,7 +1171,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var info *mega.Node
err = o.fs.pacer.Call(func() (bool, error) {
info, err = u.Finish()
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "failed to finish upload")
@@ -1173,7 +1179,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// If the upload succeeded and the original object existed, then delete it
if o.info != nil {
-err = o.fs.deleteNode(o.info)
+err = o.fs.deleteNode(ctx, o.info)
if err != nil {
return errors.Wrap(err, "upload failed to remove old version")
}
@@ -1185,7 +1191,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
-err := o.fs.deleteNode(o.info)
+err := o.fs.deleteNode(ctx, o.info)
if err != nil {
return errors.Wrap(err, "Remove object failed")
}

[next file in this compare view: the OneDrive backend]

@@ -9,7 +9,6 @@ import (
"encoding/json"
"fmt"
"io"
-"log"
"net/http"
"net/url"
"path"
@@ -52,8 +51,8 @@ const (
driveTypePersonal = "personal"
driveTypeBusiness = "business"
driveTypeSharepoint = "documentLibrary"
-defaultChunkSize = 10 * fs.MebiByte
+defaultChunkSize = 10 * fs.Mebi
-chunkSizeMultiple = 320 * fs.KibiByte
+chunkSizeMultiple = 320 * fs.Kibi
regionGlobal = "global"
regionUS = "us"
@@ -94,216 +93,12 @@ var (
// Register with Fs
func init() {
-QuickXorHashType = hash.RegisterHash("QuickXorHash", 40, quickxorhash.New)
+QuickXorHashType = hash.RegisterHash("quickxor", "QuickXorHash", 40, quickxorhash.New)
fs.Register(&fs.RegInfo{
Name: "onedrive",
Description: "Microsoft OneDrive",
NewFs: NewFs,
-Config: func(ctx context.Context, name string, m configmap.Mapper) {
+Config: Config,
region, _ := m.Get("region")
graphURL := graphAPIEndpoint[region] + "/v1.0"
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[region] + authPath,
TokenURL: authEndpoint[region] + tokenPath,
}
ci := fs.GetConfig(ctx)
err := oauthutil.Config(ctx, "onedrive", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
return
}
// Stop if we are running non-interactive config
if ci.AutoConfirm {
return
}
type driveResource struct {
DriveID string `json:"id"`
DriveName string `json:"name"`
DriveType string `json:"driveType"`
}
type drivesResponse struct {
Drives []driveResource `json:"value"`
}
type siteResource struct {
SiteID string `json:"id"`
SiteName string `json:"displayName"`
SiteURL string `json:"webUrl"`
}
type siteResponse struct {
Sites []siteResource `json:"value"`
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to configure OneDrive: %v", err)
}
srv := rest.NewClient(oAuthClient)
var opts rest.Opts
var finalDriveID string
var siteID string
var relativePath string
switch config.Choose("Your choice",
[]string{"onedrive", "sharepoint", "url", "search", "driveid", "siteid", "path"},
[]string{
"OneDrive Personal or Business",
"Root Sharepoint site",
"Sharepoint site name or URL (e.g. mysite or https://contoso.sharepoint.com/sites/mysite)",
"Search for a Sharepoint site",
"Type in driveID (advanced)",
"Type in SiteID (advanced)",
"Sharepoint server-relative path (advanced, e.g. /teams/hr)",
},
false) {
case "onedrive":
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/me/drives",
}
case "sharepoint":
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/root/drives",
}
case "driveid":
fmt.Printf("Paste your Drive ID here> ")
finalDriveID = config.ReadLine()
case "siteid":
fmt.Printf("Paste your Site ID here> ")
siteID = config.ReadLine()
case "url":
fmt.Println("Example: \"https://contoso.sharepoint.com/sites/mysite\" or \"mysite\"")
fmt.Printf("Paste your Site URL here> ")
siteURL := config.ReadLine()
re := regexp.MustCompile(`https://.*\.sharepoint.com/sites/(.*)`)
match := re.FindStringSubmatch(siteURL)
if len(match) == 2 {
relativePath = "/sites/" + match[1]
} else {
relativePath = "/sites/" + siteURL
}
case "path":
fmt.Printf("Enter server-relative URL here> ")
relativePath = config.ReadLine()
case "search":
fmt.Printf("What to search for> ")
searchTerm := config.ReadLine()
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites?search=" + searchTerm,
}
sites := siteResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &sites)
if err != nil {
log.Fatalf("Failed to query available sites: %v", err)
}
if len(sites.Sites) == 0 {
log.Fatalf("Search for '%s' returned no results", searchTerm)
} else {
fmt.Printf("Found %d sites, please select the one you want to use:\n", len(sites.Sites))
for index, site := range sites.Sites {
fmt.Printf("%d: %s (%s) id=%s\n", index, site.SiteName, site.SiteURL, site.SiteID)
}
siteID = sites.Sites[config.ChooseNumber("Chose drive to use:", 0, len(sites.Sites)-1)].SiteID
}
}
// if we use server-relative URL for finding the drive
if relativePath != "" {
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/root:" + relativePath,
}
site := siteResource{}
_, err := srv.CallJSON(ctx, &opts, nil, &site)
if err != nil {
log.Fatalf("Failed to query available site by relative path: %v", err)
}
siteID = site.SiteID
}
// if we have a siteID we need to ask for the drives
if siteID != "" {
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/" + siteID + "/drives",
}
}
// We don't have the final ID yet?
// query Microsoft Graph
if finalDriveID == "" {
drives := drivesResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &drives)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
}
// Also call /me/drive as sometimes /me/drives doesn't return it #4068
if opts.Path == "/me/drives" {
opts.Path = "/me/drive"
meDrive := driveResource{}
_, err := srv.CallJSON(ctx, &opts, nil, &meDrive)
if err != nil {
log.Fatalf("Failed to query available drives: %v", err)
}
found := false
for _, drive := range drives.Drives {
if drive.DriveID == meDrive.DriveID {
found = true
break
}
}
// add the me drive if not found already
if !found {
fs.Debugf(nil, "Adding %v to drives list from /me/drive", meDrive)
drives.Drives = append(drives.Drives, meDrive)
}
}
if len(drives.Drives) == 0 {
log.Fatalf("No drives found")
} else {
fmt.Printf("Found %d drives, please select the one you want to use:\n", len(drives.Drives))
for index, drive := range drives.Drives {
fmt.Printf("%d: %s (%s) id=%s\n", index, drive.DriveName, drive.DriveType, drive.DriveID)
}
finalDriveID = drives.Drives[config.ChooseNumber("Chose drive to use:", 0, len(drives.Drives)-1)].DriveID
}
}
// Test the driveID and get drive type
opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
log.Fatalf("Failed to query root for drive %s: %v", finalDriveID, err)
}
fmt.Printf("Found drive '%s' of type '%s', URL: %s\nIs that okay?\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL)
// This does not work, YET :)
if !config.ConfirmWithConfig(ctx, m, "config_drive_ok", true) {
log.Fatalf("Cancelled by user")
}
m.Set(configDriveID, finalDriveID)
m.Set(configDriveType, rootItem.ParentReference.DriveType)
config.SaveConfig()
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
Help: "Choose national cloud region for OneDrive.",
@@ -362,6 +157,11 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
Advanced: true,
+}, {
+Name: "list_chunk",
+Help: "Size of listing chunk.",
+Default: 1000,
+Advanced: true,
}, {
Name: "no_versions",
Default: false,
@@ -461,6 +261,266 @@ At the time of writing this only works with OneDrive personal paid accounts.
})
}
type driveResource struct {
DriveID string `json:"id"`
DriveName string `json:"name"`
DriveType string `json:"driveType"`
}
type drivesResponse struct {
Drives []driveResource `json:"value"`
}
type siteResource struct {
SiteID string `json:"id"`
SiteName string `json:"displayName"`
SiteURL string `json:"webUrl"`
}
type siteResponse struct {
Sites []siteResource `json:"value"`
}
// Get the region and graphURL from the config
func getRegionURL(m configmap.Mapper) (region, graphURL string) {
region, _ = m.Get("region")
graphURL = graphAPIEndpoint[region] + "/v1.0"
return region, graphURL
}
// Config for chooseDrive
type chooseDriveOpt struct {
opts rest.Opts
finalDriveID string
siteID string
relativePath string
}
// chooseDrive returns a query to choose which drive the user is interested in
func chooseDrive(ctx context.Context, name string, m configmap.Mapper, srv *rest.Client, opt chooseDriveOpt) (*fs.ConfigOut, error) {
_, graphURL := getRegionURL(m)
// if we use server-relative URL for finding the drive
if opt.relativePath != "" {
opt.opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/root:" + opt.relativePath,
}
site := siteResource{}
_, err := srv.CallJSON(ctx, &opt.opts, nil, &site)
if err != nil {
return fs.ConfigError("choose_type", fmt.Sprintf("Failed to query available site by relative path: %v", err))
}
opt.siteID = site.SiteID
}
// if we have a siteID we need to ask for the drives
if opt.siteID != "" {
opt.opts = rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/" + opt.siteID + "/drives",
}
}
drives := drivesResponse{}
// We don't have the final ID yet?
// query Microsoft Graph
if opt.finalDriveID == "" {
_, err := srv.CallJSON(ctx, &opt.opts, nil, &drives)
if err != nil {
return fs.ConfigError("choose_type", fmt.Sprintf("Failed to query available drives: %v", err))
}
// Also call /me/drive as sometimes /me/drives doesn't return it #4068
if opt.opts.Path == "/me/drives" {
opt.opts.Path = "/me/drive"
meDrive := driveResource{}
_, err := srv.CallJSON(ctx, &opt.opts, nil, &meDrive)
if err != nil {
return fs.ConfigError("choose_type", fmt.Sprintf("Failed to query available drives: %v", err))
}
found := false
for _, drive := range drives.Drives {
if drive.DriveID == meDrive.DriveID {
found = true
break
}
}
// add the me drive if not found already
if !found {
fs.Debugf(nil, "Adding %v to drives list from /me/drive", meDrive)
drives.Drives = append(drives.Drives, meDrive)
}
}
} else {
drives.Drives = append(drives.Drives, driveResource{
DriveID: opt.finalDriveID,
DriveName: "Chosen Drive ID",
DriveType: "drive",
})
}
if len(drives.Drives) == 0 {
return fs.ConfigError("choose_type", "No drives found")
}
return fs.ConfigChoose("driveid_final", "config_driveid", "Select drive you want to use", len(drives.Drives), func(i int) (string, string) {
drive := drives.Drives[i]
return drive.DriveID, fmt.Sprintf("%s (%s)", drive.DriveName, drive.DriveType)
})
}
// Config the backend
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
region, graphURL := getRegionURL(m)
if config.State == "" {
oauthConfig.Endpoint = oauth2.Endpoint{
AuthURL: authEndpoint[region] + authPath,
TokenURL: authEndpoint[region] + tokenPath,
}
return oauthutil.ConfigOut("choose_type", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
return nil, errors.Wrap(err, "failed to configure OneDrive")
}
srv := rest.NewClient(oAuthClient)
switch config.State {
case "choose_type":
return fs.ConfigChooseFixed("choose_type_done", "config_type", "Type of connection", []fs.OptionExample{{
Value: "onedrive",
Help: "OneDrive Personal or Business",
}, {
Value: "sharepoint",
Help: "Root Sharepoint site",
}, {
Value: "url",
Help: "Sharepoint site name or URL (e.g. mysite or https://contoso.sharepoint.com/sites/mysite)",
}, {
Value: "search",
Help: "Search for a Sharepoint site",
}, {
Value: "driveid",
Help: "Type in driveID (advanced)",
}, {
Value: "siteid",
Help: "Type in SiteID (advanced)",
}, {
Value: "path",
Help: "Sharepoint server-relative path (advanced, e.g. /teams/hr)",
}})
case "choose_type_done":
// Jump to next state according to config chosen
return fs.ConfigGoto(config.Result)
case "onedrive":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
opts: rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/me/drives",
},
})
case "sharepoint":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
opts: rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites/root/drives",
},
})
case "driveid":
return fs.ConfigInput("driveid_end", "config_driveid_fixed", "Drive ID")
case "driveid_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
finalDriveID: config.Result,
})
case "siteid":
return fs.ConfigInput("siteid_end", "config_siteid", "Site ID")
case "siteid_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
siteID: config.Result,
})
case "url":
return fs.ConfigInput("url_end", "config_site_url", `Site URL
Example: "https://contoso.sharepoint.com/sites/mysite" or "mysite"
`)
case "url_end":
siteURL := config.Result
re := regexp.MustCompile(`https://.*\.sharepoint.com/sites/(.*)`)
match := re.FindStringSubmatch(siteURL)
if len(match) == 2 {
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
relativePath: "/sites/" + match[1],
})
}
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
relativePath: "/sites/" + siteURL,
})
case "path":
return fs.ConfigInput("path_end", "config_sharepoint_url", `Server-relative URL`)
case "path_end":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
relativePath: config.Result,
})
case "search":
return fs.ConfigInput("search_end", "config_search_term", `Search term`)
case "search_end":
searchTerm := config.Result
opts := rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/sites?search=" + searchTerm,
}
sites := siteResponse{}
_, err := srv.CallJSON(ctx, &opts, nil, &sites)
if err != nil {
return fs.ConfigError("choose_type", fmt.Sprintf("Failed to query available sites: %v", err))
}
if len(sites.Sites) == 0 {
return fs.ConfigError("choose_type", fmt.Sprintf("search for %q returned no results", searchTerm))
}
return fs.ConfigChoose("search_sites", "config_site", `Select the Site you want to use`, len(sites.Sites), func(i int) (string, string) {
site := sites.Sites[i]
return site.SiteID, fmt.Sprintf("%s (%s)", site.SiteName, site.SiteURL)
})
case "search_sites":
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
siteID: config.Result,
})
case "driveid_final":
finalDriveID := config.Result
// Test the driveID and get drive type
opts := rest.Opts{
Method: "GET",
RootURL: graphURL,
Path: "/drives/" + finalDriveID + "/root"}
var rootItem api.Item
_, err = srv.CallJSON(ctx, &opts, nil, &rootItem)
if err != nil {
return fs.ConfigError("choose_type", fmt.Sprintf("Failed to query root for drive %q: %v", finalDriveID, err))
}
m.Set(configDriveID, finalDriveID)
m.Set(configDriveType, rootItem.ParentReference.DriveType)
return fs.ConfigConfirm("driveid_final_end", true, "config_drive_ok", fmt.Sprintf("Drive OK?\n\nFound drive %q of type %q\nURL: %s\n", rootItem.Name, rootItem.ParentReference.DriveType, rootItem.WebURL))
case "driveid_final_end":
if config.Result == "true" {
return nil, nil
}
return fs.ConfigGoto("choose_type")
}
return nil, fmt.Errorf("unknown state %q", config.State)
}
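The new Config function above replaces the old imperative, log.Fatalf-driven dialogue with a state machine: rclone calls Config repeatedly, each call returns an fs.ConfigOut that names the next state, the user's previous answer arrives in config.Result, and returning (nil, nil) ends the conversation. An editor's toy sketch of the same shape, using only the helpers that appear above (state and option names here are invented):

package sketch // editor's illustrative sketch, not part of the diff

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

func exampleConfig(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
	switch config.State {
	case "":
		// First call: ask a fixed-choice question, then continue in "mode_done".
		return fs.ConfigChooseFixed("mode_done", "config_mode", "Setup mode", []fs.OptionExample{
			{Value: "simple", Help: "Use the defaults"},
			{Value: "manual", Help: "Type an ID in by hand"},
		})
	case "mode_done":
		// Jump straight to the state named by the user's answer.
		return fs.ConfigGoto(config.Result)
	case "simple":
		return nil, nil // nothing more to ask
	case "manual":
		return fs.ConfigInput("manual_end", "config_id", "ID to use")
	case "manual_end":
		m.Set("id", config.Result)
		return nil, nil
	}
	return nil, fmt.Errorf("unknown state %q", config.State)
}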
// Options defines the configuration for this backend
type Options struct {
Region string `config:"region"`
@@ -469,6 +529,7 @@ type Options struct {
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
+ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
@@ -550,7 +611,10 @@ var errAsyncJobAccessDenied = errors.New("async job failed - access denied")
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
-func shouldRetry(resp *http.Response, err error) (bool, error) {
+func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+if fserrors.ContextError(ctx, &err) {
+return false, err
+}
retry := false
if resp != nil {
switch resp.StatusCode {
@@ -558,6 +622,9 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
if len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
retry = true
fs.Debugf(nil, "Should retry: %v", err)
+} else if err != nil && strings.Contains(err.Error(), "Unable to initialize RPS") {
+retry = true
+fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
}
case 429: // Too Many Requests.
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
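The practical effect of threading the context through shouldRetry, for both the 401 and 429 handling here, is that a cancellation or deadline set by the caller now ends the retry loop at once rather than being retried like a transient HTTP error. A small editor's usage sketch of the caller-side half of that contract (the 30-second timeout is an arbitrary example):

package sketch // editor's illustrative sketch, not part of the diff

import (
	"context"
	"time"
)

// runWithDeadline: once the deadline passes, shouldRetry sees the context
// error via fserrors.ContextError and reports "do not retry", so op returns
// promptly instead of looping until the pacer gives up.
func runWithDeadline(parent context.Context, op func(ctx context.Context) error) error {
	ctx, cancel := context.WithTimeout(parent, 30*time.Second)
	defer cancel()
	return op(ctx)
}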
@@ -597,7 +664,7 @@ func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID s
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return info, resp, err return info, resp, err
@@ -613,7 +680,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
opts.Path = strings.TrimSuffix(opts.Path, ":") opts.Path = strings.TrimSuffix(opts.Path, ":")
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return info, resp, err return info, resp, err
} }
@@ -685,7 +752,7 @@ func errorHandler(resp *http.Response) error {
} }
func checkUploadChunkSize(cs fs.SizeSuffix) error { func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte const minChunkSize = fs.SizeSuffixBase
if cs%chunkSizeMultiple != 0 { if cs%chunkSizeMultiple != 0 {
return errors.Errorf("%s is not a multiple of %s", cs, chunkSizeMultiple) return errors.Errorf("%s is not a multiple of %s", cs, chunkSizeMultiple)
} }
@@ -869,7 +936,7 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -894,14 +961,14 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
-opts := f.newOptsCall(dirID, "GET", "/children?$top=1000")
+opts := f.newOptsCall(dirID, "GET", fmt.Sprintf("/children?$top=%d", f.opt.ListChunk))
OUTER:
for {
var result api.ListChildrenResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
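For reference, the only functional change in this hunk besides the retry plumbing is where the page size comes from: the OData $top parameter is now built from the new list_chunk option (default 1000, so behaviour is unchanged unless it is overridden). A trivial editor's sketch:

package sketch // editor's illustrative sketch, not part of the diff

import "fmt"

// childrenPath builds the listing path the way the hunk above now does.
// childrenPath(1000) == "/children?$top=1000", matching the old hard-coded value.
func childrenPath(listChunk int64) string {
	return fmt.Sprintf("/children?$top=%d", listChunk)
}

Following rclone's usual option naming the setting should also be reachable as a --onedrive-list-chunk flag, though that spelling is inferred rather than shown in this excerpt.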
@@ -1038,7 +1105,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
@@ -1088,7 +1155,7 @@ func (f *Fs) Precision() time.Duration {
// waitForJob waits for the job with status in url to complete // waitForJob waits for the job with status in url to complete
func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error { func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error {
deadline := time.Now().Add(f.ci.Timeout) deadline := time.Now().Add(f.ci.TimeoutOrInfinite())
for time.Now().Before(deadline) { for time.Now().Before(deadline) {
var resp *http.Response var resp *http.Response
var err error var err error
@@ -1126,7 +1193,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string, o *Object) error {
time.Sleep(1 * time.Second) time.Sleep(1 * time.Second)
} }
return errors.Errorf("async operation didn't complete after %v", f.ci.Timeout) return errors.Errorf("async operation didn't complete after %v", f.ci.TimeoutOrInfinite())
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server-side copy operations.
@@ -1194,7 +1261,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyReq, nil) resp, err = f.srv.CallJSON(ctx, &opts, &copyReq, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1287,7 +1354,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info api.Item var info api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1354,7 +1421,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var info api.Item var info api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1380,7 +1447,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive) resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "about failed") return nil, errors.Wrap(err, "about failed")
@@ -1421,7 +1488,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
Password: f.opt.LinkPassword, Password: f.opt.LinkPassword,
} }
if expire < fs.Duration(time.Hour*24*365*100) { if expire < fs.DurationOff {
expiry := time.Now().Add(time.Duration(expire)) expiry := time.Now().Add(time.Duration(expire))
share.Expiry = &expiry share.Expiry = &expiry
} }
@@ -1430,7 +1497,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var result api.CreateShareLinkResponse var result api.CreateShareLinkResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &share, &result) resp, err = f.srv.CallJSON(ctx, &opts, &share, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
fmt.Println(err) fmt.Println(err)
@@ -1475,7 +1542,7 @@ func (o *Object) deleteVersions(ctx context.Context) error {
var versions api.VersionsResponse var versions api.VersionsResponse
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &versions) resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &versions)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1502,7 +1569,7 @@ func (o *Object) deleteVersion(ctx context.Context, ID string) error {
opts.NoResponse = true opts.NoResponse = true
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts) resp, err := o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
@@ -1653,7 +1720,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
var info *api.Item var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info) resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
// Remove versions if required // Remove versions if required
if o.fs.opt.NoVersions { if o.fs.opt.NoVersions {
@@ -1695,7 +1762,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1723,7 +1790,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
err = errors.New(err.Error() + " (is it a OneNote file?)") err = errors.New(err.Error() + " (is it a OneNote file?)")
} }
} }
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return response, err return response, err
} }
@@ -1738,7 +1805,7 @@ func (o *Object) getPosition(ctx context.Context, url string) (pos int64, err er
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return 0, err return 0, err
@@ -1798,11 +1865,11 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
return true, errors.Wrapf(err, "retry this chunk skipping %d bytes", skip) return true, errors.Wrapf(err, "retry this chunk skipping %d bytes", skip)
} }
if err != nil { if err != nil {
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
} }
body, err = rest.ReadBody(resp) body, err = rest.ReadBody(resp)
if err != nil { if err != nil {
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
} }
if resp.StatusCode == 200 || resp.StatusCode == 201 { if resp.StatusCode == 200 || resp.StatusCode == 201 {
// we are done :) // we are done :)
@@ -1825,7 +1892,7 @@ func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return return
} }
@@ -1849,7 +1916,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
fs.Debugf(o, "Cancelling multipart upload: %v", err)
cancelErr := o.cancelUploadSession(ctx, uploadURL)
if cancelErr != nil {
-fs.Logf(o, "Failed to cancel multipart upload: %v", cancelErr)
+fs.Logf(o, "Failed to cancel multipart upload: %v (upload failed due to: %v)", cancelErr, err)
}
})()
@@ -1874,11 +1941,11 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, size int64,
return info, nil
}
-// Update the content of a remote file within 4MB size in one single request
+// Update the content of a remote file within 4 MiB size in one single request
// This function will set modtime after uploading, which will create a new version for the remote file
func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64, modTime time.Time, options ...fs.OpenOption) (info *api.Item, err error) {
if size < 0 || size > int64(fs.SizeSuffix(4*1024*1024)) {
-return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4MiB")
+return nil, errors.New("size passed into uploadSinglepart must be >= 0 and <= 4 MiB")
}
fs.Debugf(o, "Starting singlepart upload")
@@ -1896,7 +1963,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
err = errors.New(err.Error() + " (is it a OneNote file?)") err = errors.New(err.Error() + " (is it a OneNote file?)")
} }
} }
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err

[next file in this compare view: apparently the OpenDrive backend]

@@ -88,7 +88,7 @@ func init() {
Note that these chunks are buffered in memory so increasing them will
increase memory use.`,
-Default: 10 * fs.MebiByte,
+Default: 10 * fs.Mebi,
Advanced: true,
}},
})
@@ -119,6 +119,7 @@ type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // ID of the file
+parent string // ID of the parent directory
modTime time.Time // The modified time of the object if known
md5 string // MD5 hash if known
size int64 // Size of the object
@@ -206,7 +207,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
Path: "/session/login.json", Path: "/session/login.json",
} }
resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session) resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to create session") return nil, errors.Wrap(err, "failed to create session")
@@ -233,7 +234,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// No root so return old f // No root so return old f
return f, nil return f, nil
} }
_, err := tempF.newObjectWithInfo(ctx, remote, nil) _, err := tempF.newObjectWithInfo(ctx, remote, nil, "")
if err != nil { if err != nil {
if err == fs.ErrorObjectNotFound { if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f // File doesn't exist so return old f
@@ -293,7 +294,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
Path: "/folder/remove.json", Path: "/folder/remove.json",
} }
resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil) resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
} }
@@ -388,7 +389,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Path: "/file/move_copy.json", Path: "/file/move_copy.json",
} }
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response) resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -445,7 +446,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Path: "/file/move_copy.json", Path: "/file/move_copy.json",
} }
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response) resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -494,7 +495,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Path: "/folder/move_copy.json", Path: "/folder/move_copy.json",
} }
resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response) resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
fs.Debugf(src, "DirMove error %v", err) fs.Debugf(src, "DirMove error %v", err)
@@ -517,7 +518,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File, parent string) (fs.Object, error) {
// fs.Debugf(nil, "newObjectWithInfo(%s, %v)", remote, file)
var o *Object
@@ -526,6 +527,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (
fs: f,
remote: remote,
id: file.FileID,
+parent: parent,
modTime: time.Unix(file.DateModified, 0),
size: file.Size,
md5: file.FileHash,
@@ -548,7 +550,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// fs.Debugf(nil, "NewObject(\"%s\")", remote)
-return f.newObjectWithInfo(ctx, remote, nil)
+return f.newObjectWithInfo(ctx, remote, nil, "")
}
// Creates from the parameters passed in a half finished Object which // Creates from the parameters passed in a half finished Object which
@@ -581,7 +583,7 @@ func (f *Fs) readMetaDataForFolderID(ctx context.Context, id string) (info *Fold
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -631,7 +633,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
Path: "/upload/create_file.json", Path: "/upload/create_file.json",
} }
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response) resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to create file") return nil, errors.Wrap(err, "failed to create file")
@@ -657,7 +659,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -683,7 +688,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/folder.json", Path: "/folder.json",
} }
resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response) resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -711,7 +716,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
Path: "/folder/list.json/" + f.session.SessionID + "/" + pathID, Path: "/folder/list.json/" + f.session.SessionID + "/" + pathID,
} }
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList) resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", false, errors.Wrap(err, "failed to get folder list") return "", false, errors.Wrap(err, "failed to get folder list")
@@ -754,7 +759,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
folderList := FolderList{} folderList := FolderList{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList) resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get folder list") return nil, errors.Wrap(err, "failed to get folder list")
@@ -768,6 +773,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
f.dirCache.Put(remote, folder.FolderID) f.dirCache.Put(remote, folder.FolderID)
d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID) d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID)
d.SetItems(int64(folder.ChildFolders)) d.SetItems(int64(folder.ChildFolders))
d.SetParentID(directoryID)
entries = append(entries, d) entries = append(entries, d)
} }
@@ -775,7 +781,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
file.Name = f.opt.Enc.ToStandardName(file.Name) file.Name = f.opt.Enc.ToStandardName(file.Name)
// fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID) // fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID)
remote := path.Join(dir, file.Name) remote := path.Join(dir, file.Name)
o, err := f.newObjectWithInfo(ctx, remote, &file) o, err := f.newObjectWithInfo(ctx, remote, &file, directoryID)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -842,7 +848,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
} }
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil) resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
o.modTime = modTime o.modTime = modTime
@@ -862,7 +868,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to open file)") return nil, errors.Wrap(err, "failed to open file)")
@@ -881,7 +887,7 @@ func (o *Object) Remove(ctx context.Context) error {
Path: "/file.json/" + o.fs.session.SessionID + "/" + o.id, Path: "/file.json/" + o.fs.session.SessionID + "/" + o.id,
} }
resp, err := o.fs.srv.Call(ctx, &opts) resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
} }
@@ -910,7 +916,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/upload/open_file_upload.json", Path: "/upload/open_file_upload.json",
} }
resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse) resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to create file") return errors.Wrap(err, "failed to create file")
@@ -954,7 +960,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to create file") return errors.Wrap(err, "failed to create file")
@@ -977,7 +983,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/upload/close_file_upload.json", Path: "/upload/close_file_upload.json",
} }
resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse) resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to create file") return errors.Wrap(err, "failed to create file")
@@ -1003,7 +1009,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/file/access.json", Path: "/file/access.json",
} }
resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1029,7 +1035,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
o.fs.session.SessionID, directoryID, url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf))), o.fs.session.SessionID, directoryID, url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf))),
} }
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to get folder list") return errors.Wrap(err, "failed to get folder list")
@@ -1053,6 +1059,11 @@ func (o *Object) ID() string {
return o.id return o.id
} }
// ParentID returns the ID of the Object parent directory if known, or "" if not
func (o *Object) ParentID() string {
return o.parent
}
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)
@@ -1063,4 +1074,5 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil) _ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Object = (*Object)(nil) _ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil) _ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
) )
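
Note on the hunks above: besides threading a context.Context into shouldRetry, this backend now records the parent folder ID of every object and directory and asserts the optional fs.ParentIDer interface. The sketch below shows how a caller can use that interface generically; parentOf is a hypothetical helper written for illustration, not part of the change.

    // parentOf returns the parent directory ID of an object when the backend
    // implements the optional fs.ParentIDer interface (as the Object above now
    // does), or "" when it does not. Illustrative helper only.
    func parentOf(o fs.Object) string {
        if ider, ok := o.(fs.ParentIDer); ok {
            return ider.ParentID()
        }
        return ""
    }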

@@ -12,7 +12,6 @@ import (
"context"
"fmt"
"io"
-"log"
"net/http"
"net/url"
"path"
@@ -72,7 +71,7 @@ func init() {
Name: "pcloud",
Description: "Pcloud",
NewFs: NewFs,
-Config: func(ctx context.Context, name string, m configmap.Mapper) {
+Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
optc := new(Options)
err := configstruct.Set(m, optc)
if err != nil {
@@ -94,14 +93,11 @@ func init() {
fs.Debugf(nil, "pcloud: got hostname %q", hostname)
return nil
}
-opt := oauthutil.Options{
+return oauthutil.ConfigOut("", &oauthutil.Options{
+OAuth2Config: oauthConfig,
CheckAuth: checkAuth,
StateBlankOK: true, // pCloud seems to drop the state parameter now - see #4210
-}
+})
-err = oauthutil.Config(ctx, "pcloud", name, m, oauthConfig, &opt)
-if err != nil {
-log.Fatalf("Failed to configure token: %v", err)
-}
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding,
@@ -213,7 +209,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
-func shouldRetry(resp *http.Response, err error) (bool, error) {
+func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+if fserrors.ContextError(ctx, &err) {
+return false, err
+}
doRetry := false
// Check if it is an api.Error
@@ -405,7 +404,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -460,7 +459,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -597,7 +596,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -662,7 +661,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -700,7 +699,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
}
@@ -740,7 +739,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -787,7 +786,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -814,7 +813,7 @@ func (f *Fs) linkDir(ctx context.Context, dirID string, expire fs.Duration) (str
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -838,7 +837,7 @@ func (f *Fs) linkFile(ctx context.Context, path string, expire fs.Duration) (str
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -869,7 +868,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &q)
err = q.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -927,7 +926,7 @@ func (o *Object) getHashes(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1046,7 +1045,7 @@ func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -1072,7 +1071,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1134,7 +1133,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// opts.Body=0), so upload it as a multipart form POST with
// Content-Length set.
if size == 0 {
-formReader, contentType, overhead, err := rest.MultipartUpload(in, opts.Parameters, "content", leaf)
+formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf)
if err != nil {
return errors.Wrap(err, "failed to make multipart upload for 0 length file")
}
@@ -1151,7 +1150,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
// sometimes pcloud leaves a half complete file on
@@ -1181,7 +1180,7 @@ func (o *Object) Remove(ctx context.Context) error {
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
}
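
Note on the config hunks above: the backend moves from the old blocking Config callback, which aborted with log.Fatalf on error, to the state-machine style callback that returns (*fs.ConfigOut, error) and delegates the OAuth flow to oauthutil.ConfigOut. A minimal sketch of the new registration shape, using a hypothetical backend name "example" and assuming NewFs and oauthConfig are defined by that backend:

    func init() {
        fs.Register(&fs.RegInfo{
            Name:        "example", // hypothetical backend name
            Description: "Example backend",
            NewFs:       NewFs,
            Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
                // Errors now propagate to the caller instead of aborting the process.
                return oauthutil.ConfigOut("", &oauthutil.Options{
                    OAuth2Config: oauthConfig,
                })
            },
        })
    }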

@@ -20,7 +20,6 @@ import (
"encoding/json"
"fmt"
"io"
-"log"
"net"
"net/http"
"net/url"
@@ -78,11 +77,10 @@ func init() {
Name: "premiumizeme",
Description: "premiumize.me",
NewFs: NewFs,
-Config: func(ctx context.Context, name string, m configmap.Mapper) {
+Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
-err := oauthutil.Config(ctx, "premiumizeme", name, m, oauthConfig, nil)
-if err != nil {
-log.Fatalf("Failed to configure token: %v", err)
-}
+return oauthutil.ConfigOut("", &oauthutil.Options{
+OAuth2Config: oauthConfig,
+})
},
Options: []fs.Option{{
Name: "api_key",
@@ -176,7 +174,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
-func shouldRetry(resp *http.Response, err error) (bool, error) {
+func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+if fserrors.ContextError(ctx, &err) {
+return false, err
+}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -370,7 +371,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -407,7 +408,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -581,7 +582,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -660,7 +661,7 @@ func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDir
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "Move http")
@@ -769,7 +770,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "CreateDir http")
@@ -896,7 +897,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -934,7 +935,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
if err != nil {
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
}
// Just check the download URL resolves - sometimes
// the URLs returned by premiumize.me don't resolve so
@@ -993,7 +994,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var result api.Response
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "upload file http")
@@ -1035,7 +1036,7 @@ func (f *Fs) renameLeaf(ctx context.Context, isFile bool, id string, newLeaf str
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rename http")
@@ -1060,7 +1061,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
-return shouldRetry(resp, err)
+return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "remove http")

@@ -1,6 +1,7 @@
package putio
import (
+"context"
"fmt"
"net/http"
@@ -29,7 +30,10 @@ func (e *statusCodeError) Temporary() bool {
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
-func shouldRetry(err error) (bool, error) {
+func shouldRetry(ctx context.Context, err error) (bool, error) {
+if fserrors.ContextError(ctx, &err) {
+return false, err
+}
if err == nil {
return false, nil
}

@@ -147,7 +147,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, f.opt.Enc.FromStandardName(leaf), parentID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
return itoa(entry.ID), err
}
@@ -164,7 +164,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing file: %d", fileID)
children, _, err = f.client.Files.List(ctx, fileID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
@@ -205,7 +205,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files inside List: %d", parentID)
children, _, err = f.client.Files.List(ctx, parentID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -271,7 +271,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting file: %d", fileID)
entry, err = f.client.Files.Get(ctx, fileID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -295,7 +295,7 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
req.Header.Set("upload-metadata", fmt.Sprintf("name %s,no-torrent %s,parent_id %s,updated-at %s", b64name, b64true, b64parentID, b64modifiedAt))
fs.OpenOptionAddHTTPHeaders(req.Header, options)
resp, err := f.oAuthClient.Do(req)
-retry, err := shouldRetry(err)
+retry, err := shouldRetry(ctx, err)
if retry {
return true, err
}
@@ -320,7 +320,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Sending zero length chunk")
_, fileID, err = f.transferChunk(ctx, location, 0, bytes.NewReader([]byte{}), 0)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
return
}
@@ -344,13 +344,13 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
// Get file offset and seek to the position
offset, err := f.getServerOffset(ctx, location)
if err != nil {
-return shouldRetry(err)
+return shouldRetry(ctx, err)
}
sentBytes := offset - chunkStart
fs.Debugf(f, "sentBytes: %d", sentBytes)
_, err = chunk.Seek(sentBytes, io.SeekStart)
if err != nil {
-return shouldRetry(err)
+return shouldRetry(ctx, err)
}
transferOffset = offset
reqSize = chunkSize - sentBytes
@@ -367,7 +367,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
offsetMismatch = true
return true, errors.New("connection broken")
}
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -479,7 +479,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files: %d", dirID)
children, _, err = f.client.Files.List(ctx, dirID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
@@ -493,7 +493,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "deleting file: %d", dirID)
err = f.client.Files.Delete(ctx, dirID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
f.dirCache.FlushDir(dir)
return err
@@ -552,7 +552,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -591,7 +591,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -631,7 +631,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%s) to parent_id: %s", srcID, dstDirectoryID)
_, err = f.client.Do(req, nil)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
srcFs.dirCache.FlushDir(srcRemote)
return err
@@ -644,7 +644,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting account info")
ai, err = f.client.Account.Info(ctx)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -678,6 +678,6 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
}
// fs.Debugf(f, "emptying trash")
_, err = f.client.Do(req, nil)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
}

@@ -145,7 +145,7 @@ func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
return false, fs.ErrorObjectNotFound
}
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -220,7 +220,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var storageURL string
err = o.fs.pacer.Call(func() (bool, error) {
storageURL, err = o.fs.client.Files.URL(ctx, o.file.ID, true)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -231,7 +231,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, storageURL, nil)
if err != nil {
-return shouldRetry(err)
+return shouldRetry(ctx, err)
}
req.Header.Set("User-Agent", o.fs.client.UserAgent)
@@ -241,7 +241,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = o.fs.httpClient.Do(req)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()
@@ -283,6 +283,6 @@ func (o *Object) Remove(ctx context.Context) (err error) {
return o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "removing file: id=%d", o.file.ID)
err = o.fs.client.Files.Delete(ctx, o.file.ID)
-return shouldRetry(err)
+return shouldRetry(ctx, err)
})
}

@@ -2,7 +2,6 @@ package putio
import (
"context"
-"log"
"regexp"
"time"
@@ -35,7 +34,7 @@ const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
-defaultChunkSize = 48 * fs.MebiByte
+defaultChunkSize = 48 * fs.Mebi
)
var (
@@ -60,14 +59,11 @@ func init() {
Name: "putio",
Description: "Put.io",
NewFs: NewFs,
-Config: func(ctx context.Context, name string, m configmap.Mapper) {
+Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
-opt := oauthutil.Options{
-NoOffline: true,
-}
-err := oauthutil.Config(ctx, "putio", name, m, putioConfig, &opt)
-if err != nil {
-log.Fatalf("Failed to configure token: %v", err)
-}
+return oauthutil.ConfigOut("", &oauthutil.Options{
+OAuth2Config: putioConfig,
+NoOffline: true,
+})
},
Options: []fs.Option{{
Name: config.ConfigEncoding,

@@ -80,7 +80,7 @@ func init() {
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.`,
+The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {

@@ -26,7 +26,6 @@ import (
"github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
-"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/endpoints"
@@ -59,7 +58,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
-Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, and Tencent COS",
+Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
@@ -92,6 +91,9 @@ func init() {
}, {
Value: "Scaleway",
Help: "Scaleway Object Storage",
+}, {
+Value: "SeaweedFS",
+Help: "SeaweedFS S3",
}, {
Value: "StackPath",
Help: "StackPath Object Storage",
@@ -593,6 +595,10 @@ func init() {
Value: "sgp1.digitaloceanspaces.com",
Help: "Digital Ocean Spaces Singapore 1",
Provider: "DigitalOcean",
+}, {
+Value: "localhost:8333",
+Help: "SeaweedFS S3 localhost",
+Provider: "SeaweedFS",
}, {
Value: "s3.wasabisys.com",
Help: "Wasabi US East endpoint",
@@ -1017,7 +1023,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: `Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5GB.`,
+The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, {
@@ -1039,9 +1045,9 @@ Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
-chunk_size. Since the default chunk size is 5MB and there can be at
+chunk_size. Since the default chunk size is 5 MiB and there can be at
most 10,000 chunks, this means that by default the maximum size of
-a file you can stream upload is 48GB. If you wish to stream upload
+a file you can stream upload is 48 GiB. If you wish to stream upload
larger files then you will need to increase chunk_size.`,
Default: minChunkSize,
Advanced: true,
@@ -1067,7 +1073,7 @@ large file of a known size to stay below this number of chunks limit.
Any files larger than this that need to be server-side copied will be
copied in chunks of this size.
-The minimum is 0 and the maximum is 5GB.`,
+The minimum is 0 and the maximum is 5 GiB.`,
Default: fs.SizeSuffix(maxSizeForCopy),
Advanced: true,
}, {
@@ -1221,6 +1227,11 @@ very small even with this flag.
`,
Default: false,
Advanced: true,
+}, {
+Name: "no_head_object",
+Help: `If set, don't HEAD objects`,
+Default: false,
+Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -1271,7 +1282,7 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
const (
metaMtime = "Mtime" // the meta key to store mtime in - e.g. X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
-// The maximum size of object we can COPY - this should be 5GiB but is < 5GB for b2 compatibility
+// The maximum size of object we can COPY - this should be 5 GiB but is < 5 GB for b2 compatibility
// See https://forum.rclone.org/t/copying-files-within-a-b2-bucket/16680/76
maxSizeForCopy = 4768 * 1024 * 1024
maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload
@@ -1319,6 +1330,7 @@ type Options struct {
ListChunk int64 `config:"list_chunk"`
NoCheckBucket bool `config:"no_check_bucket"`
NoHead bool `config:"no_head"`
+NoHeadObject bool `config:"no_head_object"`
Enc encoder.MultiEncoder `config:"encoding"`
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
@@ -1399,7 +1411,10 @@ var retryErrorCodes = []int{
//S3 is pretty resilient, and the built in retry handling is probably sufficient
// as it should notice closed connections and timeouts which are the most likely
// sort of failure modes
-func (f *Fs) shouldRetry(err error) (bool, error) {
+func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
+if fserrors.ContextError(ctx, &err) {
+return false, err
+}
// If this is an awserr object, try and extract more useful information to determine if we should retry
if awsError, ok := err.(awserr.Error); ok {
// Simple case, check the original embedded error in case it's generically retryable
@@ -1411,7 +1426,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
// 301 if wrong region for bucket - can only update if running from a bucket
if f.rootBucket != "" {
if reqErr.StatusCode() == http.StatusMovedPermanently {
-urfbErr := f.updateRegionForBucket(f.rootBucket)
+urfbErr := f.updateRegionForBucket(ctx, f.rootBucket)
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
@@ -1462,7 +1477,7 @@ func getClient(ctx context.Context, opt *Options) *http.Client {
}
// s3Connection makes a connection to s3
-func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session, error) {
+func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S3, *session.Session, error) {
// Make the auth
v := credentials.Value{
AccessKeyID: opt.AccessKeyID,
@@ -1508,11 +1523,6 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
}),
ExpiryWindow: 3 * time.Minute,
},
-// Pick up IAM role if we are in EKS
-&stscreds.WebIdentityRoleProvider{
-ExpiryWindow: 3 * time.Minute,
-},
}
cred := credentials.NewChainCredentials(providers)
@@ -1540,7 +1550,7 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
awsConfig := aws.NewConfig().
WithMaxRetries(0). // Rely on rclone's retry logic
WithCredentials(cred).
-WithHTTPClient(getClient(ctx, opt)).
+WithHTTPClient(client).
WithS3ForcePathStyle(opt.ForcePathStyle).
WithS3UseAccelerate(opt.UseAccelerateEndpoint).
WithS3UsEast1RegionalEndpoint(endpoints.RegionalS3UsEast1Endpoint)
@@ -1559,9 +1569,8 @@ func s3Connection(ctx context.Context, opt *Options) (*s3.S3, *session.Session,
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
-// The session constructor (aws/session/mergeConfigSrcs) will only use the user's preferred credential source
-// (from the shared config file) if the passed-in Options.Config.Credentials is nil.
-awsSessionOpts.Config.Credentials = nil
+// Set the name of the profile if supplied
+awsSessionOpts.Profile = opt.Profile
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
@@ -1647,7 +1656,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
md5sumBinary := md5.Sum([]byte(opt.SSECustomerKey))
opt.SSECustomerKeyMD5 = base64.StdEncoding.EncodeToString(md5sumBinary[:])
}
-c, ses, err := s3Connection(ctx, opt)
+srv := getClient(ctx, opt)
+c, ses, err := s3Connection(ctx, opt, srv)
if err != nil {
return nil, err
}
@@ -1662,7 +1672,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ses: ses,
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(),
-srv: getClient(ctx, opt),
+srv: srv,
pool: pool.New(
time.Duration(opt.MemoryPoolFlushTime),
int(opt.ChunkSize),
@@ -1690,19 +1700,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
GetTier: true,
SlowModTime: true,
}).Fill(ctx, f)
-if f.rootBucket != "" && f.rootDirectory != "" {
+if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject && !strings.HasSuffix(root, "/") {
// Check to see if the (bucket,directory) is actually an existing file
oldRoot := f.root
newRoot, leaf := path.Split(oldRoot)
f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf)
if err != nil {
-if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile {
-// File doesn't exist or is a directory so return old f
-f.setRoot(oldRoot)
-return f, nil
-}
-return nil, err
+// File doesn't exist or is a directory so return old f
+f.setRoot(oldRoot)
+return f, nil
}
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
@@ -1730,7 +1737,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *s3.Obje
o.setMD5FromEtag(aws.StringValue(info.ETag))
o.bytes = aws.Int64Value(info.Size)
o.storageClass = aws.StringValue(info.StorageClass)
-} else {
+} else if !o.fs.opt.NoHeadObject {
err := o.readMetaData(ctx) // reads info and meta, returning an error
if err != nil {
return nil, err
@@ -1746,7 +1753,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
}
// Gets the bucket location
-func (f *Fs) getBucketLocation(bucket string) (string, error) {
+func (f *Fs) getBucketLocation(ctx context.Context, bucket string) (string, error) {
req := s3.GetBucketLocationInput{
Bucket: &bucket,
}
@@ -1754,7 +1761,7 @@ func (f *Fs) getBucketLocation(bucket string) (string, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.GetBucketLocation(&req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err != nil {
return "", err
@@ -1764,8 +1771,8 @@ func (f *Fs) getBucketLocation(bucket string) (string, error) {
// Updates the region for the bucket by reading the region from the
// bucket then updating the session.
-func (f *Fs) updateRegionForBucket(bucket string) error {
+func (f *Fs) updateRegionForBucket(ctx context.Context, bucket string) error {
-region, err := f.getBucketLocation(bucket)
+region, err := f.getBucketLocation(ctx, bucket)
if err != nil {
return errors.Wrap(err, "reading bucket location failed")
}
@@ -1779,7 +1786,7 @@ func (f *Fs) updateRegionForBucket(bucket string) error {
// Make a new session with the new region
oldRegion := f.opt.Region
f.opt.Region = region
-c, ses, err := s3Connection(f.ctx, &f.opt)
+c, ses, err := s3Connection(f.ctx, &f.opt, f.srv)
if err != nil {
return errors.Wrap(err, "creating new session failed")
}
@@ -1859,7 +1866,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
}
}
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -2006,7 +2013,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListBucketsWithContext(ctx, &req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -2121,7 +2128,7 @@ func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucketWithContext(ctx, &req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err == nil {
return true, nil
@@ -2157,7 +2164,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Bucket %q created with ACL %q", bucket, f.opt.BucketACL)
@@ -2187,7 +2194,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucketWithContext(ctx, &req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Bucket %q deleted", bucket)
@@ -2247,7 +2254,7 @@ func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPa
}
return f.pacer.Call(func() (bool, error) {
_, err := f.c.CopyObjectWithContext(ctx, req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
}
@@ -2291,7 +2298,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
if err := f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
}); err != nil {
return err
}
@@ -2307,7 +2314,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
UploadId: uid,
RequestPayer: req.RequestPayer,
})
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
})()
@@ -2330,7 +2337,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
uploadPartReq.CopySourceRange = aws.String(calculateRange(partSize, partNum-1, numParts, srcSize))
uout, err := f.c.UploadPartCopyWithContext(ctx, uploadPartReq)
if err != nil {
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
}
parts = append(parts, &s3.CompletedPart{
PartNumber: &partNum,
@@ -2352,7 +2359,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
RequestPayer: req.RequestPayer,
UploadId: uid,
})
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
}
@@ -2583,7 +2590,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
reqCopy.Key = &bucketPath
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.RestoreObject(&reqCopy)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err != nil {
st.Status = err.Error()
@@ -2631,7 +2638,7 @@ func (f *Fs) listMultipartUploads(ctx context.Context, bucket, key string) (uplo
var resp *s3.ListMultipartUploadsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListMultipartUploads(&req)
-return f.shouldRetry(err)
+return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrapf(err, "list multipart uploads bucket %q key %q", bucket, key)
@@ -2806,7 +2813,7 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObjectWithContext(ctx, &req)
-return o.fs.shouldRetry(err)
+return o.fs.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -2831,15 +2838,23 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
if err != nil {
return err
}
+if resp.LastModified == nil {
+fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
+}
+o.setMetaData(resp.ETag, resp.ContentLength, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
+return nil
+}
+func (o *Object) setMetaData(etag *string, contentLength *int64, lastModified *time.Time, meta map[string]*string, mimeType *string, storageClass *string) {
var size int64
// Ignore missing Content-Length assuming it is 0
// Some versions of ceph do this due their apache proxies
if resp.ContentLength != nil { if contentLength != nil {
size = *resp.ContentLength size = *contentLength
} }
o.setMD5FromEtag(aws.StringValue(resp.ETag)) o.setMD5FromEtag(aws.StringValue(etag))
o.bytes = size o.bytes = size
o.meta = resp.Metadata o.meta = meta
if o.meta == nil { if o.meta == nil {
o.meta = map[string]*string{} o.meta = map[string]*string{}
} }
@@ -2854,15 +2869,13 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
o.md5 = hex.EncodeToString(md5sumBytes) o.md5 = hex.EncodeToString(md5sumBytes)
} }
} }
o.storageClass = aws.StringValue(resp.StorageClass) o.storageClass = aws.StringValue(storageClass)
if resp.LastModified == nil { if lastModified == nil {
fs.Logf(o, "Failed to read last modified from HEAD: %v", err)
o.lastModified = time.Now() o.lastModified = time.Now()
} else { } else {
o.lastModified = *resp.LastModified o.lastModified = *lastModified
} }
o.mimeType = aws.StringValue(resp.ContentType) o.mimeType = aws.StringValue(mimeType)
return nil
} }
// ModTime returns the modification time of the object // ModTime returns the modification time of the object
@@ -2962,7 +2975,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var err error var err error
httpReq.HTTPRequest = httpReq.HTTPRequest.WithContext(ctx) httpReq.HTTPRequest = httpReq.HTTPRequest.WithContext(ctx)
err = httpReq.Send() err = httpReq.Send()
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
if err, ok := err.(awserr.RequestFailure); ok { if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" { if err.Code() == "InvalidObjectState" {
@@ -2972,6 +2985,26 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
if err != nil { if err != nil {
return nil, err return nil, err
} }
if resp.LastModified == nil {
fs.Logf(o, "Failed to read last modified: %v", err)
}
// read size from ContentLength or ContentRange
size := resp.ContentLength
if resp.ContentRange != nil {
var contentRange = *resp.ContentRange
slash := strings.IndexRune(contentRange, '/')
if slash >= 0 {
i, err := strconv.ParseInt(contentRange[slash+1:], 10, 64)
if err == nil {
size = &i
} else {
fs.Debugf(o, "Failed to find parse integer from in %q: %v", contentRange, err)
}
} else {
fs.Debugf(o, "Failed to find length in %q", contentRange)
}
}
o.setMetaData(resp.ETag, size, resp.LastModified, resp.Metadata, resp.ContentType, resp.StorageClass)
return resp.Body, nil return resp.Body, nil
} }
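The block above recovers the full object size from a ranged GET by reading the total after the slash in the Content-Range header (format "bytes start-end/total"). A standalone sketch of that parse, with a hypothetical helper name chosen only for illustration:

package example

import (
	"fmt"
	"strconv"
	"strings"
)

// totalFromContentRange returns the total size encoded in a
// Content-Range value such as "bytes 0-1023/4096".
func totalFromContentRange(v string) (int64, error) {
	slash := strings.IndexRune(v, '/')
	if slash < 0 {
		return 0, fmt.Errorf("no length in %q", v)
	}
	return strconv.ParseInt(v[slash+1:], 10, 64)
}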
@@ -2997,9 +3030,9 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
// calculate size of parts // calculate size of parts
partSize := int(f.opt.ChunkSize) partSize := int(f.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize // size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of // buffers here (default 5 MiB). With a maximum number of parts (10,000) this will be a file of
// 48GB which seems like a not too unreasonable limit. // 48 GiB which seems like a not too unreasonable limit.
if size == -1 { if size == -1 {
warnStreamUpload.Do(func() { warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v", fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
@@ -3008,7 +3041,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
} else { } else {
// Adjust partSize until the number of parts is small enough. // Adjust partSize until the number of parts is small enough.
if size/int64(partSize) >= uploadParts { if size/int64(partSize) >= uploadParts {
// Calculate partition size rounded up to the nearest MB // Calculate partition size rounded up to the nearest MiB
partSize = int((((size / uploadParts) >> 20) + 1) << 20) partSize = int((((size / uploadParts) >> 20) + 1) << 20)
} }
} }
@@ -3021,7 +3054,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
var err error var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, &mReq) cout, err = f.c.CreateMultipartUploadWithContext(ctx, &mReq)
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "multipart upload failed to initialise") return errors.Wrap(err, "multipart upload failed to initialise")
@@ -3040,7 +3073,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
UploadId: uid, UploadId: uid,
RequestPayer: req.RequestPayer, RequestPayer: req.RequestPayer,
}) })
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if errCancel != nil { if errCancel != nil {
fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel) fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel)
@@ -3116,7 +3149,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq) uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq)
if err != nil { if err != nil {
if partNum <= int64(concurrency) { if partNum <= int64(concurrency) {
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
} }
// retry all chunks once have done the first batch // retry all chunks once have done the first batch
return true, err return true, err
@@ -3156,7 +3189,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
RequestPayer: req.RequestPayer, RequestPayer: req.RequestPayer,
UploadId: uid, UploadId: uid,
}) })
return f.shouldRetry(err) return f.shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "multipart upload failed to finalise") return errors.Wrap(err, "multipart upload failed to finalise")
@@ -3311,11 +3344,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var err error var err error
resp, err = o.fs.srv.Do(httpReq) resp, err = o.fs.srv.Do(httpReq)
if err != nil { if err != nil {
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
} }
body, err := rest.ReadBody(resp) body, err := rest.ReadBody(resp)
if err != nil { if err != nil {
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
} }
if resp.StatusCode >= 200 && resp.StatusCode < 299 { if resp.StatusCode >= 200 && resp.StatusCode < 299 {
return false, nil return false, nil
@@ -3366,7 +3399,7 @@ func (o *Object) Remove(ctx context.Context) error {
} }
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObjectWithContext(ctx, &req) _, err := o.fs.c.DeleteObjectWithContext(ctx, &req)
return o.fs.shouldRetry(err) return o.fs.shouldRetry(ctx, err)
}) })
return err return err
} }
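A pattern repeated throughout this changeset: every shouldRetry helper now takes the request context and gives up as soon as that context is cancelled or times out, instead of letting the pacer keep retrying. A minimal sketch using the fserrors helpers the hunks above call (a generic illustration, not any backend's exact method):

package example

import (
	"context"

	"github.com/rclone/rclone/fs/fserrors"
)

// shouldRetry reports whether err is worth retrying, refusing to retry
// once the context is done because further attempts cannot succeed.
func shouldRetry(ctx context.Context, err error) (bool, error) {
	// ContextError swaps in the context's error when ctx is cancelled
	// or has exceeded its deadline.
	if fserrors.ContextError(ctx, &err) {
		return false, err
	}
	return fserrors.ShouldRetry(err), err
}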


@@ -296,87 +296,86 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
// Config callback for 2FA // Config callback for 2FA
func Config(ctx context.Context, name string, m configmap.Mapper) { func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
ci := fs.GetConfig(ctx)
serverURL, ok := m.Get(configURL) serverURL, ok := m.Get(configURL)
if !ok || serverURL == "" { if !ok || serverURL == "" {
// If there's no server URL, it means we're trying an operation at the backend level, like a "rclone authorize seafile" // If there's no server URL, it means we're trying an operation at the backend level, like a "rclone authorize seafile"
fmt.Print("\nOperation not supported on this remote.\nIf you need a 2FA code on your account, use the command:\n\nrclone config reconnect <remote name>:\n\n") return nil, errors.New("operation not supported on this remote. If you need a 2FA code on your account, use the command: rclone config reconnect <remote name>: ")
return
}
// Stop if we are running non-interactive config
if ci.AutoConfirm {
return
} }
u, err := url.Parse(serverURL) u, err := url.Parse(serverURL)
if err != nil { if err != nil {
fs.Errorf(nil, "Invalid server URL %s", serverURL) return nil, errors.Errorf("invalid server URL %s", serverURL)
return
} }
is2faEnabled, _ := m.Get(config2FA) is2faEnabled, _ := m.Get(config2FA)
if is2faEnabled != "true" { if is2faEnabled != "true" {
fmt.Println("Two-factor authentication is not enabled on this account.") return nil, errors.New("two-factor authentication is not enabled on this account")
return
} }
username, _ := m.Get(configUser) username, _ := m.Get(configUser)
if username == "" { if username == "" {
fs.Errorf(nil, "A username is required") return nil, errors.New("a username is required")
return
} }
password, _ := m.Get(configPassword) password, _ := m.Get(configPassword)
if password != "" { if password != "" {
password, _ = obscure.Reveal(password) password, _ = obscure.Reveal(password)
} }
// Just make sure we do have a password
for password == "" {
fmt.Print("Two-factor authentication: please enter your password (it won't be saved in the configuration)\npassword> ")
password = config.ReadPassword()
}
// Create rest client for getAuthorizationToken switch config.State {
url := u.String() case "":
if !strings.HasPrefix(url, "/") { // Just make sure we do have a password
url += "/" if password == "" {
} return fs.ConfigPassword("", "config_password", "Two-factor authentication: please enter your password (it won't be saved in the configuration)")
srv := rest.NewClient(fshttp.NewClient(ctx)).SetRoot(url) }
return fs.ConfigGoto("password")
// We loop asking for a 2FA code case "password":
for { password = config.Result
code := "" if password == "" {
for code == "" { return fs.ConfigError("password", "Password can't be blank")
fmt.Print("Two-factor authentication: please enter your 2FA code\n2fa code> ") }
code = config.ReadLine() m.Set(configPassword, obscure.MustObscure(config.Result))
return fs.ConfigGoto("2fa")
case "2fa":
return fs.ConfigInput("2fa_do", "config_2fa", "Two-factor authentication: please enter your 2FA code")
case "2fa_do":
code := config.Result
if code == "" {
return fs.ConfigError("2fa", "2FA codes can't be blank")
} }
// Create rest client for getAuthorizationToken
url := u.String()
if !strings.HasPrefix(url, "/") {
url += "/"
}
srv := rest.NewClient(fshttp.NewClient(ctx)).SetRoot(url)
// We loop asking for a 2FA code
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel() defer cancel()
fmt.Println("Authenticating...")
token, err := getAuthorizationToken(ctx, srv, username, password, code) token, err := getAuthorizationToken(ctx, srv, username, password, code)
if err != nil { if err != nil {
fmt.Printf("Authentication failed: %v\n", err) return fs.ConfigConfirm("2fa_error", true, "config_retry", fmt.Sprintf("Authentication failed: %v\n\nTry Again?", err))
tryAgain := strings.ToLower(config.ReadNonEmptyLine("Do you want to try again (y/n)?"))
if tryAgain != "y" && tryAgain != "yes" {
// The user is giving up, we're done here
break
}
} }
if token != "" { if token == "" {
fmt.Println("Success!") return fs.ConfigConfirm("2fa_error", true, "config_retry", "Authentication failed - no token returned.\n\nTry Again?")
// Let's save the token into the configuration
m.Set(configAuthToken, token)
// And delete any previous entry for password
m.Set(configPassword, "")
config.SaveConfig()
// And we're done here
break
} }
// Let's save the token into the configuration
m.Set(configAuthToken, token)
// And delete any previous entry for password
m.Set(configPassword, "")
// And we're done here
return nil, nil
case "2fa_error":
if config.Result == "true" {
return fs.ConfigGoto("2fa")
}
return nil, errors.New("2fa authentication failed")
} }
return nil, fmt.Errorf("unknown state %q", config.State)
} }
// sets the AuthorizationToken up // sets the AuthorizationToken up
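The rewrite above moves seafile's 2FA setup from an interactive prompt loop to rclone's state-machine Config callback: each invocation inspects config.State, reads the previous answer from config.Result, and returns the next question (or fs.ConfigGoto to chain states). A trimmed sketch of that shape, with illustrative state names rather than the full seafile flow:

package example

import (
	"context"
	"fmt"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/obscure"
)

// Config is called once per state; the answer to the previous question
// arrives in config.Result.
func Config(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
	switch config.State {
	case "":
		return fs.ConfigPassword("password", "config_password", "Enter your password")
	case "password":
		if config.Result == "" {
			return fs.ConfigError("", "Password can't be blank")
		}
		m.Set("password", obscure.MustObscure(config.Result))
		return fs.ConfigGoto("2fa")
	case "2fa":
		return fs.ConfigInput("done", "config_2fa", "Enter your 2FA code")
	case "done":
		// exchange config.Result for a token, save it, and finish
		m.Set("token", config.Result)
		return nil, nil
	}
	return nil, fmt.Errorf("unknown state %q", config.State)
}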
@@ -408,7 +407,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// For 429 errors look at the Retry-After: header and // For 429 errors look at the Retry-After: header and
// set the retry appropriately, starting with a minimum of 1 // set the retry appropriately, starting with a minimum of 1
// second if it isn't set. // second if it isn't set.


@@ -86,7 +86,7 @@ func (f *Fs) getServerInfo(ctx context.Context) (account *api.ServerInfo, err er
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -112,7 +112,7 @@ func (f *Fs) getUserAccountInfo(ctx context.Context) (account *api.AccountInfo,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -139,7 +139,7 @@ func (f *Fs) getLibraries(ctx context.Context) ([]api.Library, error) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -170,7 +170,7 @@ func (f *Fs) createLibrary(ctx context.Context, libraryName, password string) (l
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -197,7 +197,7 @@ func (f *Fs) deleteLibrary(ctx context.Context, libraryID string) error {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -228,7 +228,7 @@ func (f *Fs) decryptLibrary(ctx context.Context, libraryID, password string) err
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -271,7 +271,7 @@ func (f *Fs) getDirectoryEntriesAPIv21(ctx context.Context, libraryID, dirPath s
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -316,7 +316,7 @@ func (f *Fs) getDirectoryDetails(ctx context.Context, libraryID, dirPath string)
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -358,7 +358,7 @@ func (f *Fs) createDir(ctx context.Context, libraryID, dirPath string) error {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -398,7 +398,7 @@ func (f *Fs) renameDir(ctx context.Context, libraryID, dirPath, newName string)
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -438,7 +438,7 @@ func (f *Fs) moveDir(ctx context.Context, srcLibraryID, srcDir, srcName, dstLibr
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, nil) resp, err = f.srv.CallJSON(ctx, &opts, &request, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -474,7 +474,7 @@ func (f *Fs) deleteDir(ctx context.Context, libraryID, filePath string) error {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, nil) resp, err = f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -505,7 +505,7 @@ func (f *Fs) getFileDetails(ctx context.Context, libraryID, filePath string) (*a
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -539,7 +539,7 @@ func (f *Fs) deleteFile(ctx context.Context, libraryID, filePath string) error {
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, nil) resp, err := f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to delete file") return errors.Wrap(err, "failed to delete file")
@@ -565,7 +565,7 @@ func (f *Fs) getDownloadLink(ctx context.Context, libraryID, filePath string) (s
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -614,7 +614,7 @@ func (f *Fs) download(ctx context.Context, url string, size int64, options ...fs
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -659,7 +659,7 @@ func (f *Fs) getUploadLink(ctx context.Context, libraryID string) (string, error
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -682,7 +682,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, uploadLink, filePath stri
"need_idx_progress": {"true"}, "need_idx_progress": {"true"},
"replace": {"1"}, "replace": {"1"},
} }
formReader, contentType, _, err := rest.MultipartUpload(in, parameters, "file", f.opt.Enc.FromStandardName(filename)) formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename))
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to make multipart upload") return nil, errors.Wrap(err, "failed to make multipart upload")
} }
@@ -739,7 +739,7 @@ func (f *Fs) listShareLinks(ctx context.Context, libraryID, remote string) ([]ap
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -777,7 +777,7 @@ func (f *Fs) createShareLink(ctx context.Context, libraryID, remote string) (*ap
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -818,7 +818,7 @@ func (f *Fs) copyFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID,
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -860,7 +860,7 @@ func (f *Fs) moveFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID,
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -900,7 +900,7 @@ func (f *Fs) renameFile(ctx context.Context, libraryID, filePath, newname string
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result) resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -938,7 +938,7 @@ func (f *Fs) emptyLibraryTrash(ctx context.Context, libraryID string) error {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, nil) resp, err = f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -976,7 +976,7 @@ func (f *Fs) getDirectoryEntriesAPIv2(ctx context.Context, libraryID, dirPath st
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -1030,7 +1030,7 @@ func (f *Fs) copyFileAPIv2(ctx context.Context, srcLibraryID, srcPath, dstLibrar
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {
@@ -1075,7 +1075,7 @@ func (f *Fs) renameFileAPIv2(ctx context.Context, libraryID, filePath, newname s
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil { if resp != nil {


@@ -16,6 +16,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"sync/atomic"
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
@@ -204,6 +205,47 @@ Fstat instead of Stat which is called on an already open file handle.
It has been found that this helps with IBM Sterling SFTP servers which have It has been found that this helps with IBM Sterling SFTP servers which have
"extractability" level set to 1 which means only 1 file can be opened at "extractability" level set to 1 which means only 1 file can be opened at
any given time. any given time.
`,
Advanced: true,
}, {
Name: "disable_concurrent_reads",
Default: false,
Help: `If set don't use concurrent reads
Normally concurrent reads are safe to use and not using them will
degrade performance, so this option is disabled by default.
Some servers limit the number of times a file can be
downloaded. Using concurrent reads can trigger this limit, so if you
have a server which returns
Failed to copy: file does not exist
Then you may need to enable this flag.
If concurrent reads are disabled, the use_fstat option is ignored.
`,
Advanced: true,
}, {
Name: "disable_concurrent_writes",
Default: false,
Help: `If set don't use concurrent writes
Normally rclone uses concurrent writes to upload files. This improves
the performance greatly, especially for distant servers.
This option disables concurrent writes should that be necessary.
`,
Advanced: true,
}, {
Name: "idle_timeout",
Default: fs.Duration(60 * time.Second),
Help: `Max time before closing idle connections
If no connections have been returned to the connection pool in the time
given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
`, `,
Advanced: true, Advanced: true,
}}, }},
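The three options registered above are decoded into the backend's Options struct through their config tags (the struct extension is visible in the next hunk). A minimal sketch of that decode step, assuming only the configstruct helper rclone already uses elsewhere in this changeset:

package example

import (
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/rclone/rclone/fs/config/configstruct"
)

// Options mirrors the new sftp settings; the config tags must match the
// registered option names.
type Options struct {
	DisableConcurrentReads  bool        `config:"disable_concurrent_reads"`
	DisableConcurrentWrites bool        `config:"disable_concurrent_writes"`
	IdleTimeout             fs.Duration `config:"idle_timeout"`
}

// loadOptions fills Options from the remote's config map.
func loadOptions(m configmap.Mapper) (*Options, error) {
	opt := new(Options)
	if err := configstruct.Set(m, opt); err != nil {
		return nil, err
	}
	return opt, nil
}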
@@ -213,27 +255,30 @@ any given time.
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
Host string `config:"host"` Host string `config:"host"`
User string `config:"user"` User string `config:"user"`
Port string `config:"port"` Port string `config:"port"`
Pass string `config:"pass"` Pass string `config:"pass"`
KeyPem string `config:"key_pem"` KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"` KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"` KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"` PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"` KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"` KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"` UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"` DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"` AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"` PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"` SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"` Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"` Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"` SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"` Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"` ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"` UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
DisableConcurrentWrites bool `config:"disable_concurrent_writes"`
IdleTimeout fs.Duration `config:"idle_timeout"`
} }
// Fs stores the interface to the remote SFTP files // Fs stores the interface to the remote SFTP files
@@ -251,8 +296,10 @@ type Fs struct {
cachedHashes *hash.Set cachedHashes *hash.Set
poolMu sync.Mutex poolMu sync.Mutex
pool []*conn pool []*conn
pacer *fs.Pacer // pacer for operations drain *time.Timer // used to drain the pool when we stop using the connections
pacer *fs.Pacer // pacer for operations
savedpswd string savedpswd string
transfers int32 // count in use references
} }
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading) // Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -315,6 +362,23 @@ func (c *conn) closed() error {
return nil return nil
} }
// Show that we are doing an upload or download
//
// Call removeTransfer() when done
func (f *Fs) addTransfer() {
atomic.AddInt32(&f.transfers, 1)
}
// Show the upload or download done
func (f *Fs) removeTransfer() {
atomic.AddInt32(&f.transfers, -1)
}
// getTransfers shows whether there are any transfers in progress
func (f *Fs) getTransfers() int32 {
return atomic.LoadInt32(&f.transfers)
}
// Open a new connection to the SFTP server. // Open a new connection to the SFTP server.
func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) { func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
// Rate limit rate of new connections // Rate limit rate of new connections
@@ -360,7 +424,14 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
} }
} }
opts = opts[:len(opts):len(opts)] // make sure we don't overwrite the callers opts opts = opts[:len(opts):len(opts)] // make sure we don't overwrite the callers opts
opts = append(opts, sftp.UseFstat(f.opt.UseFstat)) opts = append(opts,
sftp.UseFstat(f.opt.UseFstat),
sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
sftp.UseConcurrentWrites(!f.opt.DisableConcurrentWrites),
)
if f.opt.DisableConcurrentReads { // FIXME
fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
}
return sftp.NewClientPipe(pr, pw, opts...) return sftp.NewClientPipe(pr, pw, opts...)
} }
@@ -428,6 +499,9 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
} }
f.poolMu.Lock() f.poolMu.Lock()
f.pool = append(f.pool, c) f.pool = append(f.pool, c)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
f.poolMu.Unlock() f.poolMu.Unlock()
} }
@@ -435,6 +509,19 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
func (f *Fs) drainPool(ctx context.Context) (err error) { func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock() f.poolMu.Lock()
defer f.poolMu.Unlock() defer f.poolMu.Unlock()
if transfers := f.getTransfers(); transfers != 0 {
fs.Debugf(f, "Not closing %d unused connections as %d transfers in progress", len(f.pool), transfers)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
return nil
}
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
if len(f.pool) != 0 {
fs.Debugf(f, "closing %d unused connections", len(f.pool))
}
for i, c := range f.pool { for i, c := range f.pool {
if cErr := c.closed(); cErr == nil { if cErr := c.closed(); cErr == nil {
cErr = c.close() cErr = c.close()
@@ -479,7 +566,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
if opt.KnownHostsFile != "" { if opt.KnownHostsFile != "" {
hostcallback, err := knownhosts.New(opt.KnownHostsFile) hostcallback, err := knownhosts.New(env.ShellExpand(opt.KnownHostsFile))
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't parse known_hosts_file") return nil, errors.Wrap(err, "couldn't parse known_hosts_file")
} }
@@ -667,6 +754,10 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.mkdirLock = newStringLock() f.mkdirLock = newStringLock()
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))) f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
f.savedpswd = "" f.savedpswd = ""
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
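The new transfers counter, the drain timer armed with time.AfterFunc above, and the Reset/Stop nudges when connections are returned combine into an idle-pool drain that never closes connections while a transfer is running. A compact, generic sketch of the pattern (names and the close step are simplified for illustration):

package example

import (
	"sync"
	"sync/atomic"
	"time"
)

type conn struct{} // stand-in for a pooled connection

type pool struct {
	mu        sync.Mutex
	conns     []*conn
	drain     *time.Timer
	transfers int32
	idle      time.Duration
}

func newPool(idle time.Duration) *pool {
	p := &pool{idle: idle}
	if idle > 0 {
		// arm the drain timer; it re-arms itself while transfers run
		p.drain = time.AfterFunc(idle, p.drainPool)
	}
	return p
}

func (p *pool) addTransfer()    { atomic.AddInt32(&p.transfers, 1) }
func (p *pool) removeTransfer() { atomic.AddInt32(&p.transfers, -1) }

// put returns a connection to the pool and pushes the drain deadline out.
func (p *pool) put(c *conn) {
	p.mu.Lock()
	p.conns = append(p.conns, c)
	if p.idle > 0 {
		p.drain.Reset(p.idle)
	}
	p.mu.Unlock()
}

// drainPool empties the pool unless a transfer is still in flight, in
// which case it simply re-arms the timer and tries again later.
func (p *pool) drainPool() {
	p.mu.Lock()
	defer p.mu.Unlock()
	if atomic.LoadInt32(&p.transfers) != 0 {
		if p.idle > 0 {
			p.drain.Reset(p.idle)
		}
		return
	}
	p.conns = nil // real code would close each connection here
}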
@@ -1305,18 +1396,19 @@ func (o *Object) stat(ctx context.Context) error {
// //
// it also updates the info field // it also updates the info field
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error { func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if o.fs.opt.SetModTime { if !o.fs.opt.SetModTime {
c, err := o.fs.getSftpConnection(ctx) return nil
if err != nil {
return errors.Wrap(err, "SetModTime")
}
err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
o.fs.putSftpConnection(&c, err)
if err != nil {
return errors.Wrap(err, "SetModTime failed")
}
} }
err := o.stat(ctx) c, err := o.fs.getSftpConnection(ctx)
if err != nil {
return errors.Wrap(err, "SetModTime")
}
err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
o.fs.putSftpConnection(&c, err)
if err != nil {
return errors.Wrap(err, "SetModTime failed")
}
err = o.stat(ctx)
if err != nil { if err != nil {
return errors.Wrap(err, "SetModTime stat failed") return errors.Wrap(err, "SetModTime stat failed")
} }
@@ -1330,18 +1422,22 @@ func (o *Object) Storable() bool {
// objectReader represents a file open for reading on the SFTP server // objectReader represents a file open for reading on the SFTP server
type objectReader struct { type objectReader struct {
f *Fs
sftpFile *sftp.File sftpFile *sftp.File
pipeReader *io.PipeReader pipeReader *io.PipeReader
done chan struct{} done chan struct{}
} }
func newObjectReader(sftpFile *sftp.File) *objectReader { func (f *Fs) newObjectReader(sftpFile *sftp.File) *objectReader {
pipeReader, pipeWriter := io.Pipe() pipeReader, pipeWriter := io.Pipe()
file := &objectReader{ file := &objectReader{
f: f,
sftpFile: sftpFile, sftpFile: sftpFile,
pipeReader: pipeReader, pipeReader: pipeReader,
done: make(chan struct{}), done: make(chan struct{}),
} }
// Show connection in use
f.addTransfer()
go func() { go func() {
// Use sftpFile.WriteTo to pump data so that it gets a // Use sftpFile.WriteTo to pump data so that it gets a
@@ -1371,6 +1467,8 @@ func (file *objectReader) Close() (err error) {
_ = file.pipeReader.Close() _ = file.pipeReader.Close()
// Wait for the background process to finish // Wait for the background process to finish
<-file.done <-file.done
// Show connection no longer in use
file.f.removeTransfer()
return err return err
} }
@@ -1404,12 +1502,27 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return nil, errors.Wrap(err, "Open Seek failed") return nil, errors.Wrap(err, "Open Seek failed")
} }
} }
in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit) in = readers.NewLimitedReadCloser(o.fs.newObjectReader(sftpFile), limit)
return in, nil return in, nil
} }
type sizeReader struct {
io.Reader
size int64
}
// Size returns the expected size of the stream
//
// It is used in sftpFile.ReadFrom as a hint to work out the
// concurrency needed
func (sr *sizeReader) Size() int64 {
return sr.size
}
// Update a remote sftp file using the data <in> and ModTime from <src> // Update a remote sftp file using the data <in> and ModTime from <src>
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
o.fs.addTransfer() // Show transfer in progress
defer o.fs.removeTransfer()
// Clear the hash cache since we are about to update the object // Clear the hash cache since we are about to update the object
o.md5sum = nil o.md5sum = nil
o.sha1sum = nil o.sha1sum = nil
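sizeReader above exists so that the sftp library's ReadFrom can discover the upload size from the wrapped reader and choose its write concurrency accordingly; the exact optional interface the library probes for is assumed here rather than quoted from it. A small sketch of that kind of size discovery:

package example

import "io"

// sizeOf reports a reader's size when the reader chooses to expose one,
// which is the hint a wrapper like sizeReader provides.
func sizeOf(r io.Reader) (int64, bool) {
	if s, ok := r.(interface{ Size() int64 }); ok {
		return s.Size(), true
	}
	return 0, false
}

In the next hunk the reader passed to file.ReadFrom is wrapped in exactly this way so the size hint is available.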
@@ -1437,7 +1550,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
fs.Debugf(src, "Removed after failed upload: %v", err) fs.Debugf(src, "Removed after failed upload: %v", err)
} }
} }
_, err = file.ReadFrom(in) _, err = file.ReadFrom(&sizeReader{Reader: in, size: src.Size()})
if err != nil { if err != nil {
remove() remove()
return errors.Wrap(err, "Update ReadFrom failed") return errors.Wrap(err, "Update ReadFrom failed")
@@ -1447,10 +1560,28 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
remove() remove()
return errors.Wrap(err, "Update Close failed") return errors.Wrap(err, "Update Close failed")
} }
// Set the mod time - this stats the object if o.fs.opt.SetModTime == true
err = o.SetModTime(ctx, src.ModTime(ctx)) err = o.SetModTime(ctx, src.ModTime(ctx))
if err != nil { if err != nil {
return errors.Wrap(err, "Update SetModTime failed") return errors.Wrap(err, "Update SetModTime failed")
} }
// Stat the file after the upload to read its stats back if o.fs.opt.SetModTime == false
if !o.fs.opt.SetModTime {
err = o.stat(ctx)
if err == fs.ErrorObjectNotFound {
// In the specific case of o.fs.opt.SetModTime == false
// if the object wasn't found then don't return an error
fs.Debugf(o, "Not found after upload with set_modtime=false so returning best guess")
o.modTime = src.ModTime(ctx)
o.size = src.Size()
o.mode = os.FileMode(0666) // regular file
} else if err != nil {
return errors.Wrap(err, "Update stat failed")
}
}
return nil return nil
} }


@@ -77,7 +77,6 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -110,10 +109,10 @@ const (
decayConstant = 2 // bigger for slower decay, exponential decayConstant = 2 // bigger for slower decay, exponential
apiPath = "/sf/v3" // add to endpoint to get API path apiPath = "/sf/v3" // add to endpoint to get API path
tokenPath = "/oauth/token" // add to endpoint to get Token path tokenPath = "/oauth/token" // add to endpoint to get Token path
minChunkSize = 256 * fs.KibiByte minChunkSize = 256 * fs.Kibi
maxChunkSize = 2 * fs.GibiByte maxChunkSize = 2 * fs.Gibi
defaultChunkSize = 64 * fs.MebiByte defaultChunkSize = 64 * fs.Mebi
defaultUploadCutoff = 128 * fs.MebiByte defaultUploadCutoff = 128 * fs.Mebi
) )
// Generate a new oauth2 config which we will update when we know the TokenURL // Generate a new oauth2 config which we will update when we know the TokenURL
@@ -136,7 +135,7 @@ func init() {
Name: "sharefile", Name: "sharefile",
Description: "Citrix Sharefile", Description: "Citrix Sharefile",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
oauthConfig := newOauthConfig("") oauthConfig := newOauthConfig("")
checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error { checkAuth := func(oauthConfig *oauth2.Config, auth *oauthutil.AuthResult) error {
if auth == nil || auth.Form == nil { if auth == nil || auth.Form == nil {
@@ -152,13 +151,10 @@ func init() {
oauthConfig.Endpoint.TokenURL = endpoint + tokenPath oauthConfig.Endpoint.TokenURL = endpoint + tokenPath
return nil return nil
} }
opt := oauthutil.Options{ return oauthutil.ConfigOut("", &oauthutil.Options{
CheckAuth: checkAuth, OAuth2Config: oauthConfig,
} CheckAuth: checkAuth,
err := oauthutil.Config(ctx, "sharefile", name, m, oauthConfig, &opt) })
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
}
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: "upload_cutoff", Name: "upload_cutoff",
@@ -299,7 +295,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -324,7 +323,7 @@ func (f *Fs) readMetaDataForIDPath(ctx context.Context, id, path string, directo
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &item) resp, err = f.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound { if resp != nil && resp.StatusCode == http.StatusNotFound {
@@ -631,7 +630,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &req, &info) resp, err = f.srv.CallJSON(ctx, &opts, &req, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", errors.Wrap(err, "CreateDir") return "", errors.Wrap(err, "CreateDir")
@@ -663,7 +662,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return found, errors.Wrap(err, "couldn't list files") return found, errors.Wrap(err, "couldn't list files")
@@ -912,7 +911,7 @@ func (f *Fs) updateItem(ctx context.Context, id, leaf, directoryID string, modTi
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &update, &info) resp, err = f.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1133,7 +1132,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
var info *api.Item var info *api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1294,7 +1293,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var dl api.DownloadSpecification var dl api.DownloadSpecification
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "open: fetch download specification") return nil, errors.Wrap(err, "open: fetch download specification")
@@ -1309,7 +1308,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "open") return nil, errors.Wrap(err, "open")
@@ -1365,7 +1364,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &req, &info) resp, err = o.fs.srv.CallJSON(ctx, &opts, &req, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "upload get specification") return errors.Wrap(err, "upload get specification")
@@ -1390,7 +1389,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var finish api.UploadFinishResponse var finish api.UploadFinishResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &finish) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &finish)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "upload file") return errors.Wrap(err, "upload file")
@@ -1426,7 +1425,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "remove") return errors.Wrap(err, "remove")


@@ -155,7 +155,7 @@ func (up *largeUpload) finish(ctx context.Context) error {
err := up.f.pacer.Call(func() (bool, error) { err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.Call(ctx, &opts) resp, err := up.f.srv.Call(ctx, &opts)
if err != nil { if err != nil {
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
} }
respBody, err = rest.ReadBody(resp) respBody, err = rest.ReadBody(resp)
// retry all errors now that the multipart upload has started // retry all errors now that the multipart upload has started


@@ -16,7 +16,6 @@ import (
"context" "context"
"fmt" "fmt"
"io" "io"
"log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -76,50 +75,63 @@ func init() {
Name: "sugarsync", Name: "sugarsync",
Description: "Sugarsync", Description: "Sugarsync",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
if err != nil { if err != nil {
log.Fatalf("Failed to read options: %v", err) return nil, errors.Wrap(err, "failed to read options")
} }
if opt.RefreshToken != "" { switch config.State {
fmt.Printf("Already have a token - refresh?\n") case "":
if !config.ConfirmWithConfig(ctx, m, "config_refresh_token", true) { if opt.RefreshToken == "" {
return return fs.ConfigGoto("username")
} }
} return fs.ConfigConfirm("refresh", true, "config_refresh", "Already have a token - refresh?")
fmt.Printf("Username (email address)> ") case "refresh":
username := config.ReadLine() if config.Result == "false" {
password := config.GetPassword("Your Sugarsync password is only required during setup and will not be stored.") return nil, nil
}
return fs.ConfigGoto("username")
case "username":
return fs.ConfigInput("password", "config_username", "username (email address)")
case "password":
m.Set("username", config.Result)
return fs.ConfigPassword("auth", "config_password", "Your Sugarsync password.\n\nOnly required during setup and will not be stored.")
case "auth":
username, _ := m.Get("username")
m.Set("username", "")
password := config.Result
authRequest := api.AppAuthorization{ authRequest := api.AppAuthorization{
Username: username, Username: username,
Password: password, Password: password,
Application: withDefault(opt.AppID, appID), Application: withDefault(opt.AppID, appID),
AccessKeyID: withDefault(opt.AccessKeyID, accessKeyID), AccessKeyID: withDefault(opt.AccessKeyID, accessKeyID),
PrivateAccessKey: withDefault(opt.PrivateAccessKey, obscure.MustReveal(encryptedPrivateAccessKey)), PrivateAccessKey: withDefault(opt.PrivateAccessKey, obscure.MustReveal(encryptedPrivateAccessKey)),
} }
var resp *http.Response var resp *http.Response
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
Path: "/app-authorization", Path: "/app-authorization",
} }
srv := rest.NewClient(fshttp.NewClient(ctx)).SetRoot(rootURL) // FIXME srv := rest.NewClient(fshttp.NewClient(ctx)).SetRoot(rootURL) // FIXME
// FIXME // FIXME
//err = f.pacer.Call(func() (bool, error) { //err = f.pacer.Call(func() (bool, error) {
resp, err = srv.CallXML(context.Background(), &opts, &authRequest, nil) resp, err = srv.CallXML(context.Background(), &opts, &authRequest, nil)
// return shouldRetry(resp, err) // return shouldRetry(ctx, resp, err)
//}) //})
if err != nil { if err != nil {
log.Fatalf("Failed to get token: %v", err) return nil, errors.Wrap(err, "failed to get token")
}
opt.RefreshToken = resp.Header.Get("Location")
m.Set("refresh_token", opt.RefreshToken)
return nil, nil
} }
opt.RefreshToken = resp.Header.Get("Location") return nil, fmt.Errorf("unknown state %q", config.State)
m.Set("refresh_token", opt.RefreshToken) }, Options: []fs.Option{{
},
Options: []fs.Option{{
Name: "app_id", Name: "app_id",
Help: "Sugarsync App ID.\n\nLeave blank to use rclone's.", Help: "Sugarsync App ID.\n\nLeave blank to use rclone's.",
}, { }, {
@@ -248,7 +260,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) { func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -288,7 +303,7 @@ func (f *Fs) readMetaDataForID(ctx context.Context, ID string) (info *api.File,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &info) resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound { if resp != nil && resp.StatusCode == http.StatusNotFound {
@@ -325,7 +340,7 @@ func (f *Fs) getAuthToken(ctx context.Context) error {
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &authRequest, &authResponse) resp, err = f.srv.CallXML(ctx, &opts, &authRequest, &authResponse)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to get authorization") return errors.Wrap(err, "failed to get authorization")
@@ -373,7 +388,7 @@ func (f *Fs) getUser(ctx context.Context) (user *api.User, err error) {
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &user) resp, err = f.srv.CallXML(ctx, &opts, nil, &user)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to get user") return nil, errors.Wrap(err, "failed to get user")
@@ -567,7 +582,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, mkdir, nil) resp, err = f.srv.CallXML(ctx, &opts, mkdir, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -618,7 +633,7 @@ OUTER:
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return found, errors.Wrap(err, "couldn't list files") return found, errors.Wrap(err, "couldn't list files")
@@ -774,7 +789,7 @@ func (f *Fs) delete(ctx context.Context, isFile bool, id string, remote string,
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
// Move file/dir to deleted files if not hard delete // Move file/dir to deleted files if not hard delete
@@ -880,7 +895,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &copyFile, nil) resp, err = f.srv.CallXML(ctx, &opts, &copyFile, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -934,7 +949,7 @@ func (f *Fs) moveFile(ctx context.Context, id, leaf, directoryID string) (info *
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &move, &info) resp, err = f.srv.CallXML(ctx, &opts, &move, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -964,7 +979,7 @@ func (f *Fs) moveDir(ctx context.Context, id, leaf, directoryID string) (err err
var resp *http.Response var resp *http.Response
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &move, nil) resp, err = f.srv.CallXML(ctx, &opts, &move, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
} }
@@ -1053,7 +1068,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var info *api.File var info *api.File
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &linkFile, &info) resp, err = f.srv.CallXML(ctx, &opts, &linkFile, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -1182,7 +1197,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1204,7 +1219,7 @@ func (f *Fs) createFile(ctx context.Context, pathID, leaf, mimeType string) (new
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &mkdir, nil) resp, err = f.srv.CallXML(ctx, &opts, &mkdir, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -1262,7 +1277,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "failed to upload file") return errors.Wrap(err, "failed to upload file")


@@ -13,6 +13,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/google/uuid"
"github.com/ncw/swift/v2" "github.com/ncw/swift/v2"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -35,7 +36,7 @@ import (
const ( const (
directoryMarkerContentType = "application/directory" // content type of directory marker objects directoryMarkerContentType = "application/directory" // content type of directory marker objects
listChunks = 1000 // chunk size to read directory listings listChunks = 1000 // chunk size to read directory listings
defaultChunkSize = 5 * fs.GibiByte defaultChunkSize = 5 * fs.Gibi
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep. minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
) )
@@ -45,7 +46,7 @@ var SharedOptions = []fs.Option{{
Help: `Above this size files will be chunked into a _segments container. Help: `Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The Above this size files will be chunked into a _segments container. The
default for this is 5GB which is its maximum value.`, default for this is 5 GiB which is its maximum value.`,
Default: defaultChunkSize, Default: defaultChunkSize,
Advanced: true, Advanced: true,
}, { }, {
@@ -55,7 +56,7 @@ default for this is 5GB which is its maximum value.`,
When doing streaming uploads (e.g. using rcat or mount) setting this When doing streaming uploads (e.g. using rcat or mount) setting this
flag will cause the swift backend to not upload chunked files. flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked This will limit the maximum upload size to 5 GiB. However non chunked
files are easier to deal with and have an MD5SUM. files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal Rclone will still chunk files bigger than chunk_size when doing normal
@@ -291,7 +292,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this err deserves to be // shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience // retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) { func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// If this is a swift.Error object extract the HTTP error code // If this is a swift.Error object extract the HTTP error code
if swiftError, ok := err.(*swift.Error); ok { if swiftError, ok := err.(*swift.Error); ok {
for _, e := range retryErrorCodes { for _, e := range retryErrorCodes {
@@ -307,7 +311,7 @@ func shouldRetry(err error) (bool, error) {
// shouldRetryHeaders returns a boolean as to whether this err // shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for // deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience // `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) { func shouldRetryHeaders(ctx context.Context, headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 { if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" { if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value) retryAfter, parseErr := strconv.Atoi(value)
@@ -326,7 +330,7 @@ func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
} }
} }
} }
return shouldRetry(err) return shouldRetry(ctx, err)
} }
// parsePath parses a remote 'url' // parsePath parses a remote 'url'
@@ -415,7 +419,7 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
} }
func checkUploadChunkSize(cs fs.SizeSuffix) error { func checkUploadChunkSize(cs fs.SizeSuffix) error {
const minChunkSize = fs.Byte const minChunkSize = fs.SizeSuffixBase
if cs < minChunkSize { if cs < minChunkSize {
return errors.Errorf("%s is less than %s", cs, minChunkSize) return errors.Errorf("%s is less than %s", cs, minChunkSize)
} }
@@ -468,7 +472,7 @@ func NewFsWithConnection(ctx context.Context, opt *Options, name, root string, c
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(ctx, f.rootContainer, encodedDirectory) info, rxHeaders, err = f.c.Object(ctx, f.rootContainer, encodedDirectory)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
if err == nil && info.ContentType != directoryMarkerContentType { if err == nil && info.ContentType != directoryMarkerContentType {
newRoot := path.Dir(f.root) newRoot := path.Dir(f.root)
@@ -576,7 +580,7 @@ func (f *Fs) listContainerRoot(ctx context.Context, container, directory, prefix
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
objects, err = f.c.Objects(ctx, container, opts) objects, err = f.c.Objects(ctx, container, opts)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
for i := range objects { for i := range objects {
@@ -661,7 +665,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
var containers []swift.Container var containers []swift.Container
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil) containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "container listing failed") return nil, errors.Wrap(err, "container listing failed")
@@ -753,7 +757,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil) containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "container listing failed") return nil, errors.Wrap(err, "container listing failed")
@@ -805,7 +809,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(ctx, container) _, rxHeaders, err = f.c.Container(ctx, container)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
} }
if err == swift.ContainerNotFound { if err == swift.ContainerNotFound {
@@ -815,7 +819,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(ctx, container, headers) err = f.c.ContainerCreate(ctx, container, headers)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
fs.Infof(f, "Container %q created", container) fs.Infof(f, "Container %q created", container)
@@ -836,7 +840,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err := f.cache.Remove(container, func() error { err := f.cache.Remove(container, func() error {
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
err := f.c.ContainerDelete(ctx, container) err := f.c.ContainerDelete(ctx, container)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err == nil { if err == nil {
fs.Infof(f, "Container %q removed", container) fs.Infof(f, "Container %q removed", container)
@@ -902,18 +906,125 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type") fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy return nil, fs.ErrorCantCopy
} }
srcContainer, srcPath := srcObj.split() isLargeObject, err := srcObj.isLargeObject(ctx)
err = f.pacer.Call(func() (bool, error) { if err != nil {
var rxHeaders swift.Headers return nil, err
rxHeaders, err = f.c.ObjectCopy(ctx, srcContainer, srcPath, dstContainer, dstPath, nil) }
return shouldRetryHeaders(rxHeaders, err) if isLargeObject {
}) /*handle large object*/
err = copyLargeObject(ctx, f, srcObj, dstContainer, dstPath)
} else {
srcContainer, srcPath := srcObj.split()
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(ctx, srcContainer, srcPath, dstContainer, dstPath, nil)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
}
if err != nil { if err != nil {
return nil, err return nil, err
} }
return f.NewObject(ctx, remote) return f.NewObject(ctx, remote)
} }
func copyLargeObject(ctx context.Context, f *Fs, src *Object, dstContainer string, dstPath string) error {
segmentsContainer := dstContainer + "_segments"
err := f.makeContainer(ctx, segmentsContainer)
if err != nil {
return err
}
segments, err := src.getSegmentsLargeObject(ctx)
if err != nil {
return err
}
if len(segments) == 0 {
return errors.New("could not copy object, list segments are empty")
}
nanoSeconds := time.Now().Nanosecond()
prefixSegment := fmt.Sprintf("%v/%v/%s", nanoSeconds, src.size, strings.ReplaceAll(uuid.New().String(), "-", ""))
copiedSegmentsLen := 10
for _, value := range segments {
if len(value) <= 0 {
continue
}
fragment := value[0]
if len(fragment) <= 0 {
continue
}
copiedSegmentsLen = len(value)
firstIndex := strings.Index(fragment, "/")
if firstIndex < 0 {
firstIndex = 0
} else {
firstIndex = firstIndex + 1
}
lastIndex := strings.LastIndex(fragment, "/")
if lastIndex < 0 {
lastIndex = len(fragment)
} else {
lastIndex = lastIndex - 1
}
prefixSegment = fragment[firstIndex:lastIndex]
break
}
copiedSegments := make([]string, copiedSegmentsLen)
defer handleCopyFail(ctx, f, segmentsContainer, copiedSegments, err)
for c, ss := range segments {
if len(ss) <= 0 {
continue
}
for _, s := range ss {
lastIndex := strings.LastIndex(s, "/")
if lastIndex <= 0 {
lastIndex = 0
} else {
lastIndex = lastIndex + 1
}
segmentName := dstPath + "/" + prefixSegment + "/" + s[lastIndex:]
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(ctx, c, s, segmentsContainer, segmentName, nil)
copiedSegments = append(copiedSegments, segmentName)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err != nil {
return err
}
}
}
m := swift.Metadata{}
headers := m.ObjectHeaders()
headers["X-Object-Manifest"] = urlEncode(fmt.Sprintf("%s/%s/%s", segmentsContainer, dstPath, prefixSegment))
headers["Content-Length"] = "0"
emptyReader := bytes.NewReader(nil)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectPut(ctx, dstContainer, dstPath, emptyReader, true, "", src.contentType, headers)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
return err
}
// remove copied segments when the copy process fails
func handleCopyFail(ctx context.Context, f *Fs, segmentsContainer string, segments []string, err error) {
fs.Debugf(f, "handle copy segment fail")
if err == nil {
return
}
if len(segmentsContainer) == 0 {
fs.Debugf(f, "invalid segments container")
return
}
if len(segments) == 0 {
fs.Debugf(f, "segments is empty")
return
}
fs.Debugf(f, "action delete segments what copied")
for _, v := range segments {
_ = f.c.ObjectDelete(ctx, segmentsContainer, v)
}
}
// Hashes returns the supported hash sets. // Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set { func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5) return hash.Set(hash.MD5)
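For context, the copy path above assembles a plain Swift dynamic large object: every segment is copied under a fresh prefix in the destination _segments container, then an empty manifest object whose X-Object-Manifest header names that prefix is written, so a GET on the destination streams the segments back in order. A compressed sketch of that final step, with the naming details simplified and the helper name assumed:

// Sketch only: write the DLO manifest once the segments are in place.
func writeManifest(ctx context.Context, c *swift.Connection, dstContainer, dstPath, prefix, contentType string) error {
	segmentsContainer := dstContainer + "_segments"
	headers := swift.Headers{
		"X-Object-Manifest": segmentsContainer + "/" + dstPath + "/" + prefix,
		"Content-Length":    "0",
	}
	_, err := c.ObjectPut(ctx, dstContainer, dstPath, bytes.NewReader(nil), true, "", contentType, headers)
	return err
}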
@@ -1041,7 +1152,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
container, containerPath := o.split() container, containerPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(ctx, container, containerPath) info, h, err = o.fs.c.Object(ctx, container, containerPath)
return shouldRetryHeaders(h, err) return shouldRetryHeaders(ctx, h, err)
}) })
if err != nil { if err != nil {
if err == swift.ObjectNotFound { if err == swift.ObjectNotFound {
@@ -1100,7 +1211,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
container, containerPath := o.split() container, containerPath := o.split()
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(ctx, container, containerPath, newHeaders) err = o.fs.c.ObjectUpdate(ctx, container, containerPath, newHeaders)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
} }
@@ -1121,7 +1232,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(ctx, container, containerPath, !isRanging, headers) in, rxHeaders, err = o.fs.c.ObjectOpen(ctx, container, containerPath, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
return return
} }
@@ -1211,7 +1322,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(ctx, segmentsContainer) _, rxHeaders, err = o.fs.c.Container(ctx, segmentsContainer)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
if err == swift.ContainerNotFound { if err == swift.ContainerNotFound {
headers := swift.Headers{} headers := swift.Headers{}
@@ -1220,7 +1331,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(ctx, segmentsContainer, headers) err = o.fs.c.ContainerCreate(ctx, segmentsContainer, headers)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
} }
if err != nil { if err != nil {
@@ -1241,7 +1352,8 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
if segmentInfos == nil || len(segmentInfos) == 0 { if segmentInfos == nil || len(segmentInfos) == 0 {
return return
} }
deleteChunks(ctx, o, segmentsContainer, segmentInfos) _ctx := context.Background()
deleteChunks(_ctx, o, segmentsContainer, segmentInfos)
})() })()
for { for {
// can we read at least one byte? // can we read at least one byte?
@@ -1267,7 +1379,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
if err == nil { if err == nil {
segmentInfos = append(segmentInfos, segmentPath) segmentInfos = append(segmentInfos, segmentPath)
} }
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
if err != nil { if err != nil {
return "", err return "", err
@@ -1281,7 +1393,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, emptyReader, true, "", contentType, headers) rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
if err == nil { if err == nil {
@@ -1356,7 +1468,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var rxHeaders swift.Headers var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, in, true, "", contentType, headers) rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, in, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err) return shouldRetryHeaders(ctx, rxHeaders, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1414,7 +1526,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
// Remove file/manifest first // Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(ctx, container, containerPath) err = o.fs.c.ObjectDelete(ctx, container, containerPath)
return shouldRetry(err) return shouldRetry(ctx, err)
}) })
if err != nil { if err != nil {
return err return err


@@ -1,6 +1,7 @@
package swift package swift
import ( import (
"context"
"testing" "testing"
"time" "time"
@@ -32,6 +33,7 @@ func TestInternalUrlEncode(t *testing.T) {
} }
func TestInternalShouldRetryHeaders(t *testing.T) { func TestInternalShouldRetryHeaders(t *testing.T) {
ctx := context.Background()
headers := swift.Headers{ headers := swift.Headers{
"Content-Length": "64", "Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8", "Content-Type": "text/html; charset=UTF-8",
@@ -45,7 +47,7 @@ func TestInternalShouldRetryHeaders(t *testing.T) {
// Short sleep should just do the sleep // Short sleep should just do the sleep
start := time.Now() start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err) retry, gotErr := shouldRetryHeaders(ctx, headers, err)
dt := time.Since(start) dt := time.Since(start)
assert.True(t, retry) assert.True(t, retry)
assert.Equal(t, err, gotErr) assert.Equal(t, err, gotErr)
@@ -54,7 +56,7 @@ func TestInternalShouldRetryHeaders(t *testing.T) {
// Long sleep should return RetryError // Long sleep should return RetryError
headers["Retry-After"] = "3600" headers["Retry-After"] = "3600"
start = time.Now() start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err) retry, gotErr = shouldRetryHeaders(ctx, headers, err)
dt = time.Since(start) dt = time.Since(start)
assert.True(t, dt < time.Second) assert.True(t, dt < time.Second)
assert.False(t, retry) assert.False(t, retry)


@@ -80,13 +80,14 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("NoChunk", f.testNoChunk) t.Run("NoChunk", f.testNoChunk)
t.Run("WithChunk", f.testWithChunk) t.Run("WithChunk", f.testWithChunk)
t.Run("WithChunkFail", f.testWithChunkFail) t.Run("WithChunkFail", f.testWithChunkFail)
t.Run("CopyLargeObject", f.testCopyLargeObject)
} }
func (f *Fs) testWithChunk(t *testing.T) { func (f *Fs) testWithChunk(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
defer func() { defer func() {
//restore old config after test //restore old config after test
f.opt.ChunkSize = preConfChunkSize f.opt.ChunkSize = preConfChunkSize
@@ -116,7 +117,7 @@ func (f *Fs) testWithChunkFail(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
segmentContainer := f.root + "_segments" segmentContainer := f.root + "_segments"
defer func() { defer func() {
//restore config //restore config
@@ -154,4 +155,39 @@ func (f *Fs) testWithChunkFail(t *testing.T) {
require.Empty(t, objs) require.Empty(t, objs)
} }
func (f *Fs) testCopyLargeObject(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.SizeSuffixBase
defer func() {
//restore old config after test
f.opt.ChunkSize = preConfChunkSize
f.opt.NoChunk = preConfChunk
}()
file := fstest.Item{
ModTime: fstest.Time("2020-12-31T04:05:06.499999999Z"),
Path: "large.txt",
Size: -1, // use unknown size during upload
}
const contentSize = 2048
contents := random.String(contentSize)
buf := bytes.NewBufferString(contents)
uploadHash := hash.NewMultiHasher()
in := io.TeeReader(buf, uploadHash)
file.Size = -1
obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil)
ctx := context.TODO()
obj, err := f.Features().PutStream(ctx, in, obji)
require.NoError(t, err)
require.NotEmpty(t, obj)
remoteTarget := "large.txt (copy)"
objTarget, err := f.Features().Copy(ctx, obj, remoteTarget)
require.NoError(t, err)
require.NotEmpty(t, objTarget)
require.Equal(t, obj.Size(), objTarget.Size())
}
var _ fstests.InternalTester = (*Fs)(nil) var _ fstests.InternalTester = (*Fs)(nil)


@@ -7,7 +7,6 @@ import (
"context" "context"
"fmt" "fmt"
"io" "io"
"log"
"path" "path"
"strings" "strings"
"time" "time"
@@ -42,19 +41,19 @@ func init() {
Name: "tardigrade", Name: "tardigrade",
Description: "Tardigrade Decentralized Cloud Storage", Description: "Tardigrade Decentralized Cloud Storage",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, configMapper configmap.Mapper) { Config: func(ctx context.Context, name string, m configmap.Mapper, configIn fs.ConfigIn) (*fs.ConfigOut, error) {
provider, _ := configMapper.Get(fs.ConfigProvider) provider, _ := m.Get(fs.ConfigProvider)
config.FileDeleteKey(name, fs.ConfigProvider) config.FileDeleteKey(name, fs.ConfigProvider)
if provider == newProvider { if provider == newProvider {
satelliteString, _ := configMapper.Get("satellite_address") satelliteString, _ := m.Get("satellite_address")
apiKey, _ := configMapper.Get("api_key") apiKey, _ := m.Get("api_key")
passphrase, _ := configMapper.Get("passphrase") passphrase, _ := m.Get("passphrase")
// satelliteString contains always default and passphrase can be empty // satelliteString contains always default and passphrase can be empty
if apiKey == "" { if apiKey == "" {
return return nil, nil
} }
satellite, found := satMap[satelliteString] satellite, found := satMap[satelliteString]
@@ -64,22 +63,23 @@ func init() {
access, err := uplink.RequestAccessWithPassphrase(context.TODO(), satellite, apiKey, passphrase) access, err := uplink.RequestAccessWithPassphrase(context.TODO(), satellite, apiKey, passphrase)
if err != nil { if err != nil {
log.Fatalf("Couldn't create access grant: %v", err) return nil, errors.Wrap(err, "couldn't create access grant")
} }
serializedAccess, err := access.Serialize() serializedAccess, err := access.Serialize()
if err != nil { if err != nil {
log.Fatalf("Couldn't serialize access grant: %v", err) return nil, errors.Wrap(err, "couldn't serialize access grant")
} }
configMapper.Set("satellite_address", satellite) m.Set("satellite_address", satellite)
configMapper.Set("access_grant", serializedAccess) m.Set("access_grant", serializedAccess)
} else if provider == existingProvider { } else if provider == existingProvider {
config.FileDeleteKey(name, "satellite_address") config.FileDeleteKey(name, "satellite_address")
config.FileDeleteKey(name, "api_key") config.FileDeleteKey(name, "api_key")
config.FileDeleteKey(name, "passphrase") config.FileDeleteKey(name, "passphrase")
} else { } else {
log.Fatalf("Invalid provider type: %s", provider) return nil, errors.Errorf("invalid provider type: %s", provider)
} }
return nil, nil
}, },
Options: []fs.Option{ Options: []fs.Option{
{ {


@@ -148,13 +148,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (_ io.ReadC
case s && !e: case s && !e:
offset = opt.Start offset = opt.Start
case !s && e: case !s && e:
object, err := o.fs.project.StatObject(ctx, bucketName, bucketPath) offset = -opt.End
if err != nil {
return nil, err
}
offset = object.System.ContentLength - opt.End
length = opt.End
} }
case *fs.SeekOption: case *fs.SeekOption:
offset = opt.Offset offset = opt.Offset


@@ -59,7 +59,17 @@ func (d *Directory) candidates() []upstream.Entry {
// return an error or update the object properly (rather than e.g. calling panic). // return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
entries, err := o.fs.actionEntries(o.candidates()...) entries, err := o.fs.actionEntries(o.candidates()...)
if err != nil { if err == fs.ErrorPermissionDenied {
// There are no candidates in this object which can be written to
// So attempt to create a new object instead
newO, err := o.fs.put(ctx, in, src, false, options...)
if err != nil {
return err
}
// Update current object
*o = *newO.(*Object)
return nil
} else if err != nil {
return err return err
} }
if len(entries) == 1 { if len(entries) == 1 {


@@ -17,7 +17,9 @@ func init() {
type EpFF struct{} type EpFF struct{}
func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) { func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
ch := make(chan *upstream.Fs) ch := make(chan *upstream.Fs, len(upstreams))
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for _, u := range upstreams { for _, u := range upstreams {
u := u // Closure u := u // Closure
go func() { go func() {
@@ -30,16 +32,10 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
}() }()
} }
var u *upstream.Fs var u *upstream.Fs
for i := 0; i < len(upstreams); i++ { for range upstreams {
u = <-ch u = <-ch
if u != nil { if u != nil {
// close remaining goroutines break
go func(num int) {
defer close(ch)
for i := 0; i < num; i++ {
<-ch
}
}(len(upstreams) - 1 - i)
} }
} }
if u == nil { if u == nil {
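The buffered channel is what makes the early break safe: every probe goroutine can deliver its result without blocking, so nothing leaks once the first hit is found and the cancelled context tells the rest to stop. A standalone sketch of the pattern (probe stands in for the exists check, firstFound is a hypothetical name):

// Sketch only: first-found fan-in over the upstreams without leaking goroutines.
func firstFound(ctx context.Context, upstreams []*upstream.Fs, probe func(context.Context, *upstream.Fs) bool) *upstream.Fs {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel() // signal the remaining probes to give up
	ch := make(chan *upstream.Fs, len(upstreams)) // buffered, so sends never block
	for _, u := range upstreams {
		u := u // closure
		go func() {
			if probe(ctx, u) {
				ch <- u
				return
			}
			ch <- nil
		}()
	}
	for range upstreams {
		if u := <-ch; u != nil {
			return u
		}
	}
	return nil
}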


@@ -0,0 +1,67 @@
package union
import (
"bytes"
"context"
"testing"
"time"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) TestInternalReadOnly(t *testing.T) {
if f.name != "TestUnionRO" {
t.Skip("Only on RO union")
}
dir := "TestInternalReadOnly"
ctx := context.Background()
rofs := f.upstreams[len(f.upstreams)-1]
assert.False(t, rofs.IsWritable())
// Put a file onto the read only fs
contents := random.String(50)
file1 := fstest.NewItem(dir+"/file.txt", contents, time.Now())
_, obj1 := fstests.PutTestContents(ctx, t, rofs, &file1, contents, true)
// Check read from readonly fs via union
o, err := f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(50), o.Size())
// Now call Update on the union Object with new data
contents2 := random.String(100)
file2 := fstest.NewItem(dir+"/file.txt", contents2, time.Now())
in := bytes.NewBufferString(contents2)
src := object.NewStaticObjectInfo(file2.Path, file2.ModTime, file2.Size, true, nil, nil)
err = o.Update(ctx, in, src)
require.NoError(t, err)
assert.Equal(t, int64(100), o.Size())
// Check we read the new object via the union
o, err = f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(100), o.Size())
// Remove the object
assert.NoError(t, o.Remove(ctx))
// Check we read the old object in the read only layer now
o, err = f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(50), o.Size())
// Remove file and dir from read only fs
assert.NoError(t, obj1.Remove(ctx))
assert.NoError(t, rofs.Rmdir(ctx, dir))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("ReadOnly", f.TestInternalReadOnly)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -2,13 +2,15 @@
package union_test package union_test
import ( import (
"fmt"
"io/ioutil"
"os" "os"
"path/filepath"
"testing" "testing"
_ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -24,17 +26,28 @@ func TestIntegration(t *testing.T) {
}) })
} }
func makeTestDirs(t *testing.T, n int) (dirs []string, clean func()) {
for i := 1; i <= n; i++ {
dir, err := ioutil.TempDir("", fmt.Sprintf("rclone-union-test-%d", n))
require.NoError(t, err)
dirs = append(dirs, dir)
}
clean = func() {
for _, dir := range dirs {
err := os.RemoveAll(dir)
assert.NoError(t, err)
}
}
return dirs, clean
}
func TestStandard(t *testing.T) { func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-standard1") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-standard2") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-standard3") upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnion" name := "TestUnion"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",
@@ -54,13 +67,9 @@ func TestRO(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-ro1") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-ro2") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-ro3") upstreams := dirs[0] + " " + dirs[1] + ":ro " + dirs[2] + ":ro"
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":ro " + tempdir3 + ":ro"
name := "TestUnionRO" name := "TestUnionRO"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",
@@ -80,13 +89,9 @@ func TestNC(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-nc1") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-nc2") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-nc3") upstreams := dirs[0] + " " + dirs[1] + ":nc " + dirs[2] + ":nc"
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":nc " + tempdir3 + ":nc"
name := "TestUnionNC" name := "TestUnionNC"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",
@@ -106,13 +111,9 @@ func TestPolicy1(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy11") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy12") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy13") upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnionPolicy1" name := "TestUnionPolicy1"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",
@@ -132,13 +133,9 @@ func TestPolicy2(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy21") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy22") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy23") upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnionPolicy2" name := "TestUnionPolicy2"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",
@@ -158,13 +155,9 @@ func TestPolicy3(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy31") dirs, clean := makeTestDirs(t, 3)
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy32") defer clean()
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy33") upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
name := "TestUnionPolicy3" name := "TestUnionPolicy3"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: name + ":", RemoteName: name + ":",


@@ -14,6 +14,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/fspath"
) )
var ( var (
@@ -62,12 +63,12 @@ type Entry interface {
// New creates a new Fs based on the // New creates a new Fs based on the
// string formatted `type:root_path(:ro/:nc)` // string formatted `type:root_path(:ro/:nc)`
func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs, error) { func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs, error) {
_, configName, fsPath, err := fs.ParseRemote(remote) configName, fsPath, err := fspath.SplitFs(remote)
if err != nil { if err != nil {
return nil, err return nil, err
} }
f := &Fs{ f := &Fs{
RootPath: root, RootPath: strings.TrimRight(root, "/"),
writable: true, writable: true,
creatable: true, creatable: true,
cacheExpiry: time.Now().Unix(), cacheExpiry: time.Now().Unix(),
@@ -83,15 +84,13 @@ func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs
f.creatable = false f.creatable = false
fsPath = fsPath[0 : len(fsPath)-3] fsPath = fsPath[0 : len(fsPath)-3]
} }
if configName != "local" { remote = configName + fsPath
fsPath = configName + ":" + fsPath rFs, err := cache.Get(ctx, remote)
}
rFs, err := cache.Get(ctx, fsPath)
if err != nil && err != fs.ErrorIsFile { if err != nil && err != fs.ErrorIsFile {
return nil, err return nil, err
} }
f.RootFs = rFs f.RootFs = rFs
rootString := path.Join(fsPath, filepath.ToSlash(root)) rootString := path.Join(remote, filepath.ToSlash(root))
myFs, err := cache.Get(ctx, rootString) myFs, err := cache.Get(ctx, rootString)
if err != nil && err != fs.ErrorIsFile { if err != nil && err != fs.ErrorIsFile {
return nil, err return nil, err

View File

@@ -0,0 +1,170 @@
package api
import "fmt"
// Error contains the error code and message returned by the API
type Error struct {
Success bool `json:"success,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Message string `json:"message,omitempty"`
Data string `json:"data,omitempty"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("api error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Data != "" {
out += ": " + e.Data
}
return out
}
// FolderEntry represents a Uptobox subfolder when listing folder contents
type FolderEntry struct {
FolderID uint64 `json:"fld_id"`
Description string `json:"fld_descr"`
Password string `json:"fld_password"`
FullPath string `json:"fullPath"`
Path string `json:"fld_name"`
Name string `json:"name"`
Hash string `json:"hash"`
}
// FolderInfo represents the current folder when listing folder contents
type FolderInfo struct {
FolderID uint64 `json:"fld_id"`
Hash string `json:"hash"`
FileCount uint64 `json:"fileCount"`
TotalFileSize int64 `json:"totalFileSize"`
}
// FileInfo represents a file when listing folder contents
type FileInfo struct {
Name string `json:"file_name"`
Description string `json:"file_descr"`
Created string `json:"file_created"`
Size int64 `json:"file_size"`
Downloads uint64 `json:"file_downloads"`
Code string `json:"file_code"`
Password string `json:"file_password"`
Public int `json:"file_public"`
LastDownload string `json:"file_last_download"`
ID uint64 `json:"id"`
}
// ReadMetadataResponse is the response when listing folder contents
type ReadMetadataResponse struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
CurrentFolder FolderInfo `json:"currentFolder"`
Folders []FolderEntry `json:"folders"`
Files []FileInfo `json:"files"`
PageCount int `json:"pageCount"`
TotalFileCount int `json:"totalFileCount"`
TotalFileSize int64 `json:"totalFileSize"`
} `json:"data"`
}
// UploadInfo is the response when initiating an upload
type UploadInfo struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
UploadLink string `json:"uploadLink"`
MaxUpload string `json:"maxUpload"`
} `json:"data"`
}
// UploadResponse is the response to a successful upload
type UploadResponse struct {
Files []struct {
Name string `json:"name"`
Size int64 `json:"size"`
URL string `json:"url"`
DeleteURL string `json:"deleteUrl"`
} `json:"files"`
}
// UpdateResponse is a generic response to various actions on files (rename/copy/move)
type UpdateResponse struct {
Message string `json:"message"`
StatusCode int `json:"statusCode"`
}
// Download is the response when requesting a download link
type Download struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
DownloadLink string `json:"dlLink"`
} `json:"data"`
}
// MetadataRequestOptions represents all the options when listing folder contents
type MetadataRequestOptions struct {
Limit uint64
Offset uint64
SearchField string
Search string
}
// CreateFolderRequest is used for creating a folder
type CreateFolderRequest struct {
Token string `json:"token"`
Path string `json:"path"`
Name string `json:"name"`
}
// DeleteFolderRequest is used for deleting a folder
type DeleteFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
}
// CopyMoveFileRequest is used for moving/copying a file
type CopyMoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// MoveFolderRequest is used for moving a folder
type MoveFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// RenameFolderRequest is used for renaming a folder
type RenameFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
NewName string `json:"new_name"`
}
// UpdateFileInformation is used for renaming a file
type UpdateFileInformation struct {
Token string `json:"token"`
FileCode string `json:"file_code"`
NewName string `json:"new_name,omitempty"`
Description string `json:"description,omitempty"`
Password string `json:"password,omitempty"`
Public string `json:"public,omitempty"`
}
// RemoveFileRequest is used for deleting a file
type RemoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
}
// Token represents the authentication token
type Token struct {
Token string `json:"token"`
}
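Since the diff of backend/uptobox/uptobox.go itself is suppressed below, here is a rough sketch of how these types are typically consumed with rclone's rest client. The endpoint path, query parameters and the f.srv/f.pacer/shouldRetry plumbing are assumptions for illustration, not taken from the suppressed file:

// Sketch only: list a folder and decode the JSON into the types above.
func listFolder(ctx context.Context, f *Fs, token, folderPath string) (*api.ReadMetadataResponse, error) {
	var result api.ReadMetadataResponse
	opts := rest.Opts{
		Method: "GET",
		Path:   "/user/files", // assumed endpoint, for illustration only
		Parameters: url.Values{
			"token": {token},
			"path":  {folderPath},
		},
	}
	err := f.pacer.Call(func() (bool, error) {
		resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
		return shouldRetry(ctx, resp, err)
	})
	if err != nil {
		return nil, err
	}
	return &result, nil
}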

backend/uptobox/uptobox.go (new file, 1053 lines): diff suppressed because it is too large


@@ -0,0 +1,21 @@
// Test Uptobox filesystem interface
package uptobox_test
import (
"testing"
"github.com/rclone/rclone/backend/uptobox"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestUptobox:"
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*uptobox.Object)(nil),
})
}


@@ -125,7 +125,7 @@ func (ca *CookieAuth) getSPCookie(conf *SharepointSuccessResponse) (*CookieRespo
return nil, errors.Wrap(err, "Error while constructing endpoint URL") return nil, errors.Wrap(err, "Error while constructing endpoint URL")
} }
u, err := url.Parse("https://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0") u, err := url.Parse(spRoot.Scheme + "://" + spRoot.Host + "/_forms/default.aspx?wa=wsignin1.0")
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Error while constructing login URL") return nil, errors.Wrap(err, "Error while constructing login URL")
} }


@@ -10,6 +10,7 @@ package webdav
import ( import (
"bytes" "bytes"
"context" "context"
"crypto/tls"
"encoding/xml" "encoding/xml"
"fmt" "fmt"
"io" "io"
@@ -19,20 +20,25 @@ import (
"path" "path"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/backend/webdav/api" "github.com/rclone/rclone/backend/webdav/api"
"github.com/rclone/rclone/backend/webdav/odrvcookie" "github.com/rclone/rclone/backend/webdav/odrvcookie"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
ntlmssp "github.com/Azure/go-ntlmssp"
) )
const ( const (
@@ -42,8 +48,22 @@ const (
defaultDepth = "1" // depth for PROPFIND defaultDepth = "1" // depth for PROPFIND
) )
const defaultEncodingSharepointNTLM = (encoder.EncodeWin |
encoder.EncodeHashPercent | // required by IIS/8.5 in contrast with onedrive which doesn't need it
(encoder.Display &^ encoder.EncodeDot) | // test with IIS/8.5 shows that EncodeDot is not needed
encoder.EncodeBackSlash |
encoder.EncodeLeftSpace |
encoder.EncodeLeftTilde |
encoder.EncodeRightPeriod |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8)
// Register with Fs // Register with Fs
func init() { func init() {
configEncodingHelp := fmt.Sprintf(
"%s\n\nDefault encoding is %s for sharepoint-ntlm or identity otherwise.",
config.ConfigEncodingHelp, defaultEncodingSharepointNTLM)
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
Name: "webdav", Name: "webdav",
Description: "Webdav", Description: "Webdav",
@@ -67,14 +87,17 @@ func init() {
Help: "Owncloud", Help: "Owncloud",
}, { }, {
Value: "sharepoint", Value: "sharepoint",
Help: "Sharepoint", Help: "Sharepoint Online, authenticated by Microsoft account.",
}, {
Value: "sharepoint-ntlm",
Help: "Sharepoint with NTLM authentication. Usually self-hosted or on-premises.",
}, { }, {
Value: "other", Value: "other",
Help: "Other site/service or software", Help: "Other site/service or software",
}}, }},
}, { }, {
Name: "user", Name: "user",
Help: "User name", Help: "User name. In case NTLM authentication is used, the username should be in the format 'Domain\\User'.",
}, { }, {
Name: "pass", Name: "pass",
Help: "Password.", Help: "Password.",
@@ -86,18 +109,39 @@ func init() {
Name: "bearer_token_command", Name: "bearer_token_command",
Help: "Command to run to get a bearer token", Help: "Command to run to get a bearer token",
Advanced: true, Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: configEncodingHelp,
Advanced: true,
}, {
Name: "headers",
Help: `Set HTTP headers for all transactions
Use this to set additional HTTP headers for all transactions.
The input format is a comma separated list of key,value pairs. Standard
[CSV encoding](https://godoc.org/encoding/csv) may be used.
For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
You can set multiple headers, e.g. '"Cookie","name=value","Authorization","xxx"'.
`,
Default: fs.CommaSepList{},
Advanced: true,
}}, }},
}) })
} }
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
URL string `config:"url"` URL string `config:"url"`
Vendor string `config:"vendor"` Vendor string `config:"vendor"`
User string `config:"user"` User string `config:"user"`
Pass string `config:"pass"` Pass string `config:"pass"`
BearerToken string `config:"bearer_token"` BearerToken string `config:"bearer_token"`
BearerTokenCommand string `config:"bearer_token_command"` BearerTokenCommand string `config:"bearer_token_command"`
Enc encoder.MultiEncoder `config:"encoding"`
Headers fs.CommaSepList `config:"headers"`
} }
// Fs represents a remote webdav // Fs represents a remote webdav
@@ -114,8 +158,10 @@ type Fs struct {
canStream bool // set if can stream canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default) retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
checkBeforePurge bool // enables extra check that directory to purge really exists
hasMD5 bool // set if can use owncloud style checksums for MD5 hasMD5 bool // set if can use owncloud style checksums for MD5
hasSHA1 bool // set if can use owncloud style checksums for SHA1 hasSHA1 bool // set if can use owncloud style checksums for SHA1
ntlmAuthMu sync.Mutex // mutex to serialize NTLM auth roundtrips
} }
// Object describes a webdav object // Object describes a webdav object
@@ -166,7 +212,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// If we have a bearer token command and it has expired then refresh it // If we have a bearer token command and it has expired then refresh it
if f.opt.BearerTokenCommand != "" && resp != nil && resp.StatusCode == 401 { if f.opt.BearerTokenCommand != "" && resp != nil && resp.StatusCode == 401 {
fs.Debugf(f, "Bearer token expired: %v", err) fs.Debugf(f, "Bearer token expired: %v", err)
@@ -179,6 +228,22 @@ func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
// safeRoundTripper is a wrapper for http.RoundTripper that serializes
// HTTP round trips. The NTLM authentication sequence can involve up to four
// rounds of negotiation and might fail due to concurrency.
// This wrapper allows ntlmssp.Negotiator to be used safely with goroutines.
type safeRoundTripper struct {
fs *Fs
rt http.RoundTripper
}
// RoundTrip guards wrapped RoundTripper by a mutex.
func (srt *safeRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
srt.fs.ntlmAuthMu.Lock()
defer srt.fs.ntlmAuthMu.Unlock()
return srt.rt.RoundTrip(req)
}
// itemIsDir returns true if the item is a directory // itemIsDir returns true if the item is a directory
// //
// When a client sees a resourcetype it doesn't recognize it should // When a client sees a resourcetype it doesn't recognize it should
@@ -224,7 +289,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
// does not exist // does not exist
@@ -285,7 +350,11 @@ func addSlash(s string) string {
// filePath returns a file path (f.root, file) // filePath returns a file path (f.root, file)
func (f *Fs) filePath(file string) string { func (f *Fs) filePath(file string) string {
return rest.URLPathEscape(path.Join(f.root, file)) subPath := path.Join(f.root, file)
if f.opt.Enc != encoder.EncodeZero {
subPath = f.opt.Enc.FromStandardPath(subPath)
}
return rest.URLPathEscape(subPath)
} }
// dirPath returns a directory path (f.root, dir) // dirPath returns a directory path (f.root, dir)
@@ -306,6 +375,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil { if err != nil {
return nil, err return nil, err
} }
if len(opt.Headers)%2 != 0 {
return nil, errors.New("odd number of headers supplied")
}
fs.Debugf(nil, "found headers: %v", opt.Headers)
rootIsDir := strings.HasSuffix(root, "/") rootIsDir := strings.HasSuffix(root, "/")
root = strings.Trim(root, "/") root = strings.Trim(root, "/")
@@ -324,6 +399,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
root = strings.Trim(root, "/") root = strings.Trim(root, "/")
if opt.Enc == encoder.EncodeZero && opt.Vendor == "sharepoint-ntlm" {
opt.Enc = defaultEncodingSharepointNTLM
}
// Parse the endpoint // Parse the endpoint
u, err := url.Parse(opt.URL) u, err := url.Parse(opt.URL)
if err != nil { if err != nil {
@@ -336,10 +415,28 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt: *opt, opt: *opt,
endpoint: u, endpoint: u,
endpointURL: u.String(), endpointURL: u.String(),
srv: rest.NewClient(fshttp.NewClient(ctx)).SetRoot(u.String()),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
precision: fs.ModTimeNotSupported, precision: fs.ModTimeNotSupported,
} }
client := fshttp.NewClient(ctx)
if opt.Vendor == "sharepoint-ntlm" {
// Disable transparent HTTP/2 support as per https://golang.org/pkg/net/http/ ,
// otherwise any connection to IIS 10.0 fails with 'stream error: stream ID 39; HTTP_1_1_REQUIRED'
// https://docs.microsoft.com/en-us/iis/get-started/whats-new-in-iis-10/http2-on-iis says:
// 'Windows authentication (NTLM/Kerberos/Negotiate) is not supported with HTTP/2.'
t := fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
t.TLSNextProto = map[string]func(string, *tls.Conn) http.RoundTripper{}
})
// Add NTLM layer
client.Transport = &safeRoundTripper{
fs: f,
rt: ntlmssp.Negotiator{RoundTripper: t},
}
}
f.srv = rest.NewClient(client).SetRoot(u.String())
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
}).Fill(ctx, f) }).Fill(ctx, f)
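The HTTP/2 opt-out used for the sharepoint-ntlm vendor above is standard net/http behaviour rather than anything rclone-specific: setting Transport.TLSNextProto to a non-nil, empty map stops the transport from negotiating h2, so requests fall back to HTTP/1.1. A minimal standalone sketch of the same idea (the URL is only a placeholder):

// Not rclone code: plain net/http with transparent HTTP/2 disabled.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	transport := &http.Transport{
		// A non-nil but empty map disables transparent HTTP/2 support.
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
	client := &http.Client{Transport: transport}

	resp, err := client.Get("https://example.com/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("protocol:", resp.Proto) // expected: HTTP/1.1
}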
@@ -353,6 +450,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err return nil, err
} }
} }
if opt.Headers != nil {
f.addHeaders(opt.Headers)
}
f.srv.SetErrorHandler(errorHandler) f.srv.SetErrorHandler(errorHandler)
err = f.setQuirks(ctx, opt.Vendor) err = f.setQuirks(ctx, opt.Vendor)
if err != nil { if err != nil {
@@ -412,6 +512,15 @@ func (f *Fs) fetchBearerToken(cmd string) (string, error) {
return stdoutString, nil return stdoutString, nil
} }
// Adds the configured headers to the request if any
func (f *Fs) addHeaders(headers fs.CommaSepList) {
for i := 0; i < len(headers); i += 2 {
key := f.opt.Headers[i]
value := f.opt.Headers[i+1]
f.srv.SetHeader(key, value)
}
}
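The new `headers` option is a flat comma-separated list read two entries at a time, which is why NewFs above rejects an odd number of entries before addHeaders runs. A standalone sketch of the same pairing convention, using plain net/http rather than rclone's rest client (the header names are the ones used by the test file later in this diff):

// Not rclone code: apply key,value pairs from a flat list to an http.Header.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func applyHeaderPairs(h http.Header, flat []string) error {
	if len(flat)%2 != 0 {
		return fmt.Errorf("odd number of headers supplied: %d", len(flat))
	}
	for i := 0; i < len(flat); i += 2 {
		h.Set(flat[i], flat[i+1])
	}
	return nil
}

func main() {
	h := make(http.Header)
	// Equivalent of setting the webdav "headers" option to "X-Potato,sausage,X-Rhubarb,cucumber".
	flat := strings.Split("X-Potato,sausage,X-Rhubarb,cucumber", ",")
	if err := applyHeaderPairs(h, flat); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h.Get("X-Potato"), h.Get("X-Rhubarb")) // sausage cucumber
}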
// fetch the bearer token and set it if successful // fetch the bearer token and set it if successful
func (f *Fs) fetchAndSetBearerToken() error { func (f *Fs) fetchAndSetBearerToken() error {
if f.opt.BearerTokenCommand == "" { if f.opt.BearerTokenCommand == "" {
@@ -465,6 +574,16 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
// to determine if we may have found a file, the request has to be resent // to determine if we may have found a file, the request has to be resent
// with the depth set to 0 // with the depth set to 0
f.retryWithZeroDepth = true f.retryWithZeroDepth = true
case "sharepoint-ntlm":
// Sharepoint with NTLM authentication
// See comment above
f.retryWithZeroDepth = true
// Sharepoint 2016 returns status 204 to the purge request
// even if the directory to purge does not really exist
// so we must perform an extra check to detect this
// condition and return a proper error code.
f.checkBeforePurge = true
case "other": case "other":
default: default:
fs.Debugf(f, "Unknown vendor %q", vendor) fs.Debugf(f, "Unknown vendor %q", vendor)
@@ -546,7 +665,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
@@ -583,7 +702,11 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
fs.Debugf(nil, "Item with unknown path received: %q, %q", u.Path, baseURL.Path) fs.Debugf(nil, "Item with unknown path received: %q, %q", u.Path, baseURL.Path)
continue continue
} }
remote := path.Join(dir, u.Path[len(baseURL.Path):]) subPath := u.Path[len(baseURL.Path):]
if f.opt.Enc != encoder.EncodeZero {
subPath = f.opt.Enc.ToStandardPath(subPath)
}
remote := path.Join(dir, subPath)
if strings.HasSuffix(remote, "/") { if strings.HasSuffix(remote, "/") {
remote = remote[:len(remote)-1] remote = remote[:len(remote)-1]
} }
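The encoding hook matters in both directions: names are mapped out of the standard form before they are sent (filePath above) and mapped back when they come out of a PROPFIND listing. A toy illustration of that round trip; the single-character substitution below is invented for the example and is not the actual sharepoint-ntlm encoding table:

// Not rclone's lib/encoder: a toy FromStandardPath/ToStandardPath pair.
package main

import (
	"fmt"
	"strings"
)

// fromStandard maps a character the server cannot store to a stand-in rune.
func fromStandard(p string) string { return strings.ReplaceAll(p, "#", "＃") }

// toStandard reverses the mapping for paths reported back by the server.
func toStandard(p string) string { return strings.ReplaceAll(p, "＃", "#") }

func main() {
	local := "reports/Q1 #draft.txt"
	remote := fromStandard(local)        // form sent to the server
	listed := toStandard(remote)         // form recovered from a listing
	fmt.Println(remote, listed == local) // round trip is lossless
}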
@@ -714,7 +837,7 @@ func (f *Fs) _dirExists(ctx context.Context, dirPath string) (exists bool) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result) resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
return err == nil return err == nil
} }
@@ -736,7 +859,7 @@ func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if apiErr, ok := err.(*api.Error); ok { if apiErr, ok := err.(*api.Error); ok {
// Check if it already exists. The response code for this isn't // Check if it already exists. The response code for this isn't
@@ -800,6 +923,21 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if notEmpty { if notEmpty {
return fs.ErrorDirectoryNotEmpty return fs.ErrorDirectoryNotEmpty
} }
} else if f.checkBeforePurge {
// We are doing purge as the `check` argument is unset.
// The quirk says that we are working with Sharepoint 2016.
// This provider returns status 204 even if the purged directory
// does not really exist so we perform an extra check here.
// Only the existence is checked, all other errors must be
// ignored here to make the rclone test suite pass.
depth := defaultDepth
if f.retryWithZeroDepth {
depth = "0"
}
_, err := f.readMetaDataForPath(ctx, dir, depth)
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
} }
opts := rest.Opts{ opts := rest.Opts{
Method: "DELETE", Method: "DELETE",
@@ -810,7 +948,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, nil) resp, err = f.srv.CallXML(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "rmdir failed") return errors.Wrap(err, "rmdir failed")
@@ -873,7 +1011,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Copy call failed") return nil, errors.Wrap(err, "Copy call failed")
@@ -969,7 +1107,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "DirMove MOVE call failed") return errors.Wrap(err, "DirMove MOVE call failed")
@@ -1011,7 +1149,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &q) resp, err = f.srv.CallXML(ctx, &opts, nil, &q)
return f.shouldRetry(resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "about call failed") return nil, errors.Wrap(err, "about call failed")
@@ -1139,7 +1277,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1190,7 +1328,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
// Give the WebDAV server a chance to get its internal state in order after the // Give the WebDAV server a chance to get its internal state in order after the
@@ -1217,7 +1355,7 @@ func (o *Object) Remove(ctx context.Context) error {
} }
return o.fs.pacer.Call(func() (bool, error) { return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts) resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
} }


@@ -0,0 +1,74 @@
package webdav_test
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/rclone/rclone/backend/webdav"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var (
remoteName = "TestWebDAV"
headers = []string{"X-Potato", "sausage", "X-Rhubarb", "cucumber"}
)
// prepareServer prepares the test server and returns a function to tidy it up afterwards.
// The headers option assertions are run against every request the server receives.
func prepareServer(t *testing.T) (configmap.Simple, func()) {
// file server
fileServer := http.FileServer(http.Dir(""))
// test the headers are there then pass on to fileServer
handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
fileServer.ServeHTTP(w, r)
})
// Make the test server
ts := httptest.NewServer(handler)
// Configure the remote
configfile.Install()
m := configmap.Simple{
"type": "webdav",
"url": ts.URL,
// add headers to test the headers option
"headers": strings.Join(headers, ","),
}
// return a function to tidy up
return m, ts.Close
}
// prepare the test server and return a function to tidy it up afterwards
func prepare(t *testing.T) (fs.Fs, func()) {
m, tidy := prepareServer(t)
// Instantiate the WebDAV server
f, err := webdav.NewFs(context.Background(), remoteName, "", m)
require.NoError(t, err)
return f, tidy
}
// TestHeaders exercises the headers option; any request against the remote will assert them
func TestHeaders(t *testing.T) {
f, tidy := prepare(t)
defer tidy()
// any request will do
_, err := f.Features().About(context.Background())
require.NoError(t, err)
}


@@ -38,3 +38,14 @@ func TestIntegration3(t *testing.T) {
NilObject: (*webdav.Object)(nil), NilObject: (*webdav.Object)(nil),
}) })
} }
// TestIntegration runs integration tests against the remote
func TestIntegration4(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestWebdavNTLM:",
NilObject: (*webdav.Object)(nil),
})
}


@@ -60,12 +60,10 @@ func init() {
Name: "yandex", Name: "yandex",
Description: "Yandex Disk", Description: "Yandex Disk",
NewFs: NewFs, NewFs: NewFs,
- Config: func(ctx context.Context, name string, m configmap.Mapper) {
- err := oauthutil.Config(ctx, "yandex", name, m, oauthConfig, nil)
- if err != nil {
- log.Fatalf("Failed to configure token: %v", err)
- return
- }
+ Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
+ return oauthutil.ConfigOut("", &oauthutil.Options{
+ OAuth2Config: oauthConfig,
+ })
},
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
@@ -153,7 +151,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
- func shouldRetry(resp *http.Response, err error) (bool, error) {
+ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+ if fserrors.ContextError(ctx, &err) {
+ return false, err
+ }
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
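The fserrors.ContextError check added throughout this change stops a cancelled or timed-out context from being treated as a retryable error. A rough sketch of the same idea using only the standard library:

// Not rclone code: a generic retry loop that gives up once the context is done.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func callWithRetries(ctx context.Context, attempts int, call func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = call()
		if err == nil {
			return nil
		}
		// Stop retrying as soon as the context is cancelled or times out.
		if ctxErr := ctx.Err(); ctxErr != nil {
			return ctxErr
		}
		time.Sleep(10 * time.Millisecond)
	}
	return err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Millisecond)
	defer cancel()
	err := callWithRetries(ctx, 10, func() error { return errors.New("transient failure") })
	fmt.Println(err) // the transient error or context.DeadlineExceeded
}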
@@ -226,7 +227,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -248,22 +249,22 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
token, err := oauthutil.GetToken(name, m)
if err != nil {
- log.Fatalf("Couldn't read OAuth token (this should never happen).")
+ return nil, errors.Wrap(err, "couldn't read OAuth token")
}
if token.RefreshToken == "" {
- log.Fatalf("Unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend.")
+ return nil, errors.New("unable to get RefreshToken. If you are upgrading from older versions of rclone, please run `rclone config` and re-configure this backend")
}
if token.TokenType != "OAuth" {
token.TokenType = "OAuth"
err = oauthutil.PutToken(name, m, token, false)
if err != nil {
- log.Fatalf("Couldn't save OAuth token (this should never happen).")
+ return nil, errors.Wrap(err, "couldn't save OAuth token")
}
log.Printf("Automatically upgraded OAuth config.")
}
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
- log.Fatalf("Failed to configure Yandex: %v", err)
+ return nil, errors.Wrap(err, "failed to configure Yandex")
}
ci := fs.GetConfig(ctx) ci := fs.GetConfig(ctx)
@@ -468,7 +469,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
// fmt.Printf("CreateDir %q Error: %s\n", path, err.Error()) // fmt.Printf("CreateDir %q Error: %s\n", path, err.Error())
@@ -537,12 +538,15 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
RootURL: location, RootURL: location,
Method: "GET", Method: "GET",
} }
- deadline := time.Now().Add(f.ci.Timeout)
+ deadline := time.Now().Add(f.ci.TimeoutOrInfinite())
for time.Now().Before(deadline) { for time.Now().Before(deadline) {
var resp *http.Response var resp *http.Response
var body []byte var body []byte
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil { if err != nil {
return fserrors.ShouldRetry(err), err return fserrors.ShouldRetry(err), err
} }
@@ -568,7 +572,7 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
time.Sleep(1 * time.Second) time.Sleep(1 * time.Second)
} }
return errors.Errorf("async operation didn't complete after %v", f.ci.Timeout) return errors.Errorf("async operation didn't complete after %v", f.ci.TimeoutOrInfinite())
} }
func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) { func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err error) {
@@ -585,6 +589,9 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err erro
var body []byte var body []byte
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil { if err != nil {
return fserrors.ShouldRetry(err), err return fserrors.ShouldRetry(err), err
} }
@@ -658,6 +665,9 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite
var body []byte var body []byte
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil { if err != nil {
return fserrors.ShouldRetry(err), err return fserrors.ShouldRetry(err), err
} }
@@ -810,7 +820,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if apiErr, ok := err.(*api.ErrorResponse); ok { if apiErr, ok := err.(*api.ErrorResponse); ok {
@@ -848,7 +858,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return err return err
} }
@@ -865,7 +875,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info) resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -999,7 +1009,7 @@ func (o *Object) setCustomProperty(ctx context.Context, property string, value s
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return err return err
} }
@@ -1032,7 +1042,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -1047,7 +1057,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1071,7 +1081,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
@@ -1089,7 +1099,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
return err return err


@@ -7,7 +7,6 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -36,8 +35,8 @@ import (
) )
const (
- rcloneClientID              = "1000.OZNFWW075EKDSIE1R42HI9I2SUPC9A"
- rcloneEncryptedClientSecret = "rn7myzbsYK3WlqO2EU6jU8wmj0ylsx7_1B5wvSaVncYbu1Wt0QxPW9FFbidjqAZtyxnBenYIWq1pcA"
+ rcloneClientID              = "1000.46MXF275FM2XV7QCHX5A7K3LGME66B"
+ rcloneEncryptedClientSecret = "U-2gxclZQBcOG9NPhjiXAhj-f0uQ137D0zar8YyNHXHkQZlTeSpIOQfmCb4oSpvosJp_SJLXmLLeUA"
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
@@ -73,36 +72,97 @@ func init() {
Name: "zoho", Name: "zoho",
Description: "Zoho", Description: "Zoho",
NewFs: NewFs, NewFs: NewFs,
- Config: func(ctx context.Context, name string, m configmap.Mapper) {
- // Need to setup region before configuring oauth
- setupRegion(m)
- opt := oauthutil.Options{
- // No refresh token unless ApprovalForce is set
- OAuth2Opts: []oauth2.AuthCodeOption{oauth2.ApprovalForce},
- }
- if err := oauthutil.Config(ctx, "zoho", name, m, oauthConfig, &opt); err != nil {
- log.Fatalf("Failed to configure token: %v", err)
- }
- // We need to rewrite the token type to "Zoho-oauthtoken" because Zoho wants
- // it's own custom type
- token, err := oauthutil.GetToken(name, m)
- if err != nil {
- log.Fatalf("Failed to read token: %v", err)
- }
- if token.TokenType != "Zoho-oauthtoken" {
- token.TokenType = "Zoho-oauthtoken"
- err = oauthutil.PutToken(name, m, token, false)
- if err != nil {
- log.Fatalf("Failed to configure token: %v", err)
- }
- }
- if err = setupRoot(ctx, name, m); err != nil {
- log.Fatalf("Failed to configure root directory: %v", err)
- }
+ Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
+ // Need to setup region before configuring oauth
+ err := setupRegion(m)
+ if err != nil {
+ return nil, err
+ }
+ getSrvs := func() (authSrv, apiSrv *rest.Client, err error) {
+ oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
+ if err != nil {
+ return nil, nil, errors.Wrap(err, "failed to load oAuthClient")
+ }
+ authSrv = rest.NewClient(oAuthClient).SetRoot(accountsURL)
+ apiSrv = rest.NewClient(oAuthClient).SetRoot(rootURL)
+ return authSrv, apiSrv, nil
+ }
+ switch config.State {
+ case "":
+ return oauthutil.ConfigOut("teams", &oauthutil.Options{
+ OAuth2Config: oauthConfig,
+ // No refresh token unless ApprovalForce is set
+ OAuth2Opts: []oauth2.AuthCodeOption{oauth2.ApprovalForce},
+ })
case "teams":
// We need to rewrite the token type to "Zoho-oauthtoken" because Zoho wants
// its own custom type
token, err := oauthutil.GetToken(name, m)
if err != nil {
return nil, errors.Wrap(err, "failed to read token")
}
if token.TokenType != "Zoho-oauthtoken" {
token.TokenType = "Zoho-oauthtoken"
err = oauthutil.PutToken(name, m, token, false)
if err != nil {
return nil, errors.Wrap(err, "failed to configure token")
}
}
authSrv, apiSrv, err := getSrvs()
if err != nil {
return nil, err
}
// Get the user Info
opts := rest.Opts{
Method: "GET",
Path: "/oauth/user/info",
}
var user api.User
_, err = authSrv.CallJSON(ctx, &opts, nil, &user)
if err != nil {
return nil, err
}
// Get the teams
teams, err := listTeams(ctx, user.ZUID, apiSrv)
if err != nil {
return nil, err
}
return fs.ConfigChoose("workspace", "config_team_drive_id", "Team Drive ID", len(teams), func(i int) (string, string) {
team := teams[i]
return team.ID, team.Attributes.Name
})
case "workspace":
_, apiSrv, err := getSrvs()
if err != nil {
return nil, err
}
teamID := config.Result
workspaces, err := listWorkspaces(ctx, teamID, apiSrv)
if err != nil {
return nil, err
}
return fs.ConfigChoose("workspace_end", "config_workspace", "Workspace ID", len(workspaces), func(i int) (string, string) {
workspace := workspaces[i]
return workspace.ID, workspace.Attributes.Name
})
case "workspace_end":
workspaceID := config.Result
m.Set(configRootID, workspaceID)
return nil, nil
} }
return nil, fmt.Errorf("unknown state %q", config.State)
}, },
- Options: []fs.Option{{
+ Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
- Help: "Zoho region to connect to. You'll have to use the region you organization is registered in.",
+ Help: `Zoho region to connect to.
+ You'll have to use the region your organization is registered in. If
+ not sure use the same top level domain as you connect to in your
+ browser.`,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "com", Value: "com",
Help: "United states / Global", Help: "United states / Global",
@@ -123,7 +183,7 @@ func init() {
encoder.EncodeCtl | encoder.EncodeCtl |
encoder.EncodeDel | encoder.EncodeDel |
encoder.EncodeInvalidUtf8), encoder.EncodeInvalidUtf8),
- }},
+ }}...),
}) })
} }
@@ -159,15 +219,16 @@ type Object struct {
// ------------------------------------------------------------ // ------------------------------------------------------------
- func setupRegion(m configmap.Mapper) {
+ func setupRegion(m configmap.Mapper) error {
region, ok := m.Get("region")
- if !ok {
- log.Fatalf("No region set\n")
+ if !ok || region == "" {
+ return errors.New("no region set")
}
rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)
accountsURL = fmt.Sprintf("https://accounts.zoho.%s", region)
oauthConfig.Endpoint.AuthURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/auth", region)
oauthConfig.Endpoint.TokenURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/token", region)
+ return nil
}
// ------------------------------------------------------------ // ------------------------------------------------------------
@@ -200,49 +261,6 @@ func listWorkspaces(ctx context.Context, teamID string, srv *rest.Client) ([]api
return workspaceList.TeamWorkspace, nil return workspaceList.TeamWorkspace, nil
} }
func setupRoot(ctx context.Context, name string, m configmap.Mapper) error {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err)
}
authSrv := rest.NewClient(oAuthClient).SetRoot(accountsURL)
opts := rest.Opts{
Method: "GET",
Path: "/oauth/user/info",
}
var user api.User
_, err = authSrv.CallJSON(ctx, &opts, nil, &user)
if err != nil {
return err
}
apiSrv := rest.NewClient(oAuthClient).SetRoot(rootURL)
teams, err := listTeams(ctx, user.ZUID, apiSrv)
if err != nil {
return err
}
var teamIDs, teamNames []string
for _, team := range teams {
teamIDs = append(teamIDs, team.ID)
teamNames = append(teamNames, team.Attributes.Name)
}
teamID := config.Choose("Enter a Team Drive ID", teamIDs, teamNames, true)
workspaces, err := listWorkspaces(ctx, teamID, apiSrv)
if err != nil {
return err
}
var workspaceIDs, workspaceNames []string
for _, workspace := range workspaces {
workspaceIDs = append(workspaceIDs, workspace.ID)
workspaceNames = append(workspaceNames, workspace.Attributes.Name)
}
worksspaceID := config.Choose("Enter a Workspace ID", workspaceIDs, workspaceNames, true)
m.Set(configRootID, worksspaceID)
return nil
}
// -------------------------------------------------------------- // --------------------------------------------------------------
// retryErrorCodes is a slice of error codes that we will retry // retryErrorCodes is a slice of error codes that we will retry
@@ -257,7 +275,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
- func shouldRetry(resp *http.Response, err error) (bool, error) {
+ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
+ if fserrors.ContextError(ctx, &err) {
+ return false, err
+ }
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
@@ -354,7 +375,7 @@ func (f *Fs) readMetaDataForID(ctx context.Context, id string) (*api.Item, error
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -369,6 +390,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err := configstruct.Set(m, opt); err != nil { if err := configstruct.Set(m, opt); err != nil {
return nil, err return nil, err
} }
err := setupRegion(m)
if err != nil {
return nil, err
}
root = parsePath(root) root = parsePath(root)
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig) oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
@@ -450,7 +475,7 @@ OUTER:
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return found, errors.Wrap(err, "couldn't list files") return found, errors.Wrap(err, "couldn't list files")
@@ -555,7 +580,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -643,7 +668,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
params.Set("filename", name) params.Set("filename", name)
params.Set("parent_id", parent) params.Set("parent_id", parent)
params.Set("override-name-exist", strconv.FormatBool(true)) params.Set("override-name-exist", strconv.FormatBool(true))
formReader, contentType, overhead, err := rest.MultipartUpload(in, nil, "content", name) formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "failed to make multipart upload") return nil, errors.Wrap(err, "failed to make multipart upload")
} }
@@ -664,7 +689,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
var uploadResponse *api.UploadResponse var uploadResponse *api.UploadResponse
err = f.pacer.CallNoRetry(func() (bool, error) { err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &uploadResponse) resp, err = f.srv.CallJSON(ctx, &opts, nil, &uploadResponse)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "upload error") return nil, errors.Wrap(err, "upload error")
@@ -746,7 +771,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) (err error) {
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &delete, nil) resp, err = f.srv.CallJSON(ctx, &opts, &delete, nil)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "delete object failed") return errors.Wrap(err, "delete object failed")
@@ -816,7 +841,7 @@ func (f *Fs) rename(ctx context.Context, id, name string) (item *api.Item, err e
var result *api.ItemInfo var result *api.ItemInfo
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &rename, &result) resp, err = f.srv.CallJSON(ctx, &opts, &rename, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "rename failed") return nil, errors.Wrap(err, "rename failed")
@@ -869,7 +894,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *api.ItemList var result *api.ItemList
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &result) resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "couldn't copy file") return nil, errors.Wrap(err, "couldn't copy file")
@@ -914,7 +939,7 @@ func (f *Fs) move(ctx context.Context, srcID, parentID string) (item *api.Item,
var result *api.ItemList var result *api.ItemList
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &moveFile, &result) resp, err = f.srv.CallJSON(ctx, &opts, &moveFile, &result)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, errors.Wrap(err, "move failed") return nil, errors.Wrap(err, "move failed")
@@ -1181,7 +1206,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err) return shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err

bin/config.py (new executable file, 203 lines)

@@ -0,0 +1,203 @@
#!/usr/bin/env python3
"""
Test program to demonstrate the remote config interfaces in
rclone.
This program can simulate
rclone config create
rclone config update
rclone config password - NOT implemented yet
rclone authorize - NOT implemented yet
Pass the desired action as the first argument then any parameters.
This assumes passwords will be passed in the clear.
"""
import argparse
import subprocess
import json
from pprint import pprint
sep = "-"*60
def rpc(args, command, params):
"""
Run the command. This could be either over the CLI or the API.
Here we run over the API either using `rclone rc --loopback` which
is useful for making sure state is saved properly or to an
existing rclone rcd if `--rc` is used on the command line.
"""
if args.rc:
import requests
kwargs = {
"json": params,
}
if args.user:
kwargs["auth"] = (args.user, args.password)
r = requests.post('http://localhost:5572/'+command, **kwargs)
if r.status_code != 200:
raise ValueError(f"RC command failed: Error {r.status_code}: {r.text}")
return r.json()
cmd = ["rclone", "-vv", "rc", "--loopback", command, "--json", json.dumps(params)]
result = subprocess.run(cmd, stdout=subprocess.PIPE, check=True)
return json.loads(result.stdout)
def parse_parameters(parameters):
"""
Parse the incoming key=value parameters into a dict
"""
d = {}
for param in parameters:
parts = param.split("=", 1)
if len(parts) != 2:
raise ValueError("bad format for parameter need name=value")
d[parts[0]] = parts[1]
return d
def ask(opt):
"""
Ask the user to enter the option
This is the user interface for asking a user a question.
If there are examples they should be presented.
"""
while True:
if opt["IsPassword"]:
print("*** Inputting a password")
print(opt['Help'])
examples = opt.get("Examples", ())
or_number = ""
if len(examples) > 0:
or_number = " or choice number"
for i, example in enumerate(examples):
print(f"{i:3} value: {example['Value']}")
print(f" help: {example['Help']}")
print(f"Enter a {opt['Type']} value{or_number}. Press Enter for the default ('{opt['DefaultStr']}')")
print(f"{opt['Name']}> ", end='')
s = input()
if s == "":
return opt["DefaultStr"]
try:
i = int(s)
if i >= 0 and i < len(examples):
return examples[i]["Value"]
except ValueError:
pass
if opt["Exclusive"]:
for example in examples:
if s == example["Value"]:
return s
# Exclusive is set but the value isn't one of the accepted
# ones so continue
print("Value isn't one of the acceptable values")
else:
return s
return s
def create_or_update(what, args):
"""
Run the equivalent of rclone config create
or rclone config update
what should be either "create" or "update"
"""
print(what, args)
params = parse_parameters(args.parameters)
inp = {
"name": args.name,
"parameters": params,
"opt": {
"nonInteractive": True,
"all": args.all,
"noObscure": args.obscured_passwords,
"obscure": not args.obscured_passwords,
},
}
if what == "create":
inp["type"] = args.type
while True:
print(sep)
print("Input to API")
pprint(inp)
print(sep)
out = rpc(args, "config/"+what, inp)
print(sep)
print("Output from API")
pprint(out)
print(sep)
if out["State"] == "":
return
if out["Error"]:
print("Error", out["Error"])
result = ask(out["Option"])
inp["opt"]["state"] = out["State"]
inp["opt"]["result"] = result
inp["opt"]["continue"] = True
def create(args):
"""Run the equivalent of rclone config create"""
create_or_update("create", args)
def update(args):
"""Run the equivalent of rclone config update"""
create_or_update("update", args)
def password(args):
"""Run the equivalent of rclone config password"""
print("password", args)
raise NotImplementedError()
def authorize(args):
"""Run the equivalent of rclone authorize"""
print("authorize", args)
raise NotImplementedError()
def main():
"""
Make the command line parser and dispatch
"""
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument("-a", "--all", action='store_true',
help="Ask all the config questions if set")
parser.add_argument("-o", "--obscured-passwords", action='store_true',
help="If set assume the passwords are obscured")
parser.add_argument("--rc", action='store_true',
help="If set use the rc (you'll need to start an rclone rcd)")
parser.add_argument("--user", type=str, default="",
help="Username for use with --rc")
parser.add_argument("--pass", type=str, default="", dest='password',
help="Password for use with --rc")
subparsers = parser.add_subparsers(dest='command', required=True)
subparser = subparsers.add_parser('create')
subparser.add_argument("name", type=str, help="Name of remote to create")
subparser.add_argument("type", type=str, help="Type of remote to create")
subparser.add_argument("parameters", type=str, nargs='*', help="Config parameters name=value name=value")
subparser.set_defaults(func=create)
subparser = subparsers.add_parser('update')
subparser.add_argument("name", type=str, help="Name of remote to update")
subparser.add_argument("parameters", type=str, nargs='*', help="Config parameters name=value name=value")
subparser.set_defaults(func=update)
subparser = subparsers.add_parser('password')
subparser.add_argument("name", type=str, help="Name of remote to update")
subparser.add_argument("parameters", type=str, nargs='*', help="Config parameters name=value name=value")
subparser.set_defaults(func=password)
subparser = subparsers.add_parser('authorize')
subparser.set_defaults(func=authorize)
args = parser.parse_args()
args.func(args)
if __name__ == "__main__":
main()
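Nothing about this flow is Python-specific; the script just POSTs JSON to the rc endpoints. A rough Go equivalent of a single round trip against a running `rclone rcd` (the remote name and parameters are made-up examples, and localhost:5572 is the default rc address the script itself uses):

// Not part of the script: one config/create call from Go over the rc API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]interface{}{
		"name":       "myremote",
		"type":       "webdav",
		"parameters": map[string]string{"url": "https://example.com/dav"},
		"opt": map[string]interface{}{
			"nonInteractive": true,
			"all":            false,
		},
	})
	resp, err := http.Post("http://localhost:5572/config/create", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("rc call failed:", err)
		return
	}
	defer resp.Body.Close()
	var out map[string]interface{}
	_ = json.NewDecoder(resp.Body).Decode(&out)
	// A non-empty "State" means another question must be answered via
	// opt.state / opt.result / opt.continue, exactly as the loop above does.
	fmt.Println("state:", out["State"], "error:", out["Error"])
}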


@@ -27,17 +27,22 @@ import (
var ( var (
// Flags // Flags
debug = flag.Bool("d", false, "Print commands instead of running them.") debug = flag.Bool("d", false, "Print commands instead of running them.")
parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.") parallel = flag.Int("parallel", runtime.NumCPU(), "Number of commands to run in parallel.")
copyAs = flag.String("release", "", "Make copies of the releases with this name") copyAs = flag.String("release", "", "Make copies of the releases with this name")
gitLog = flag.String("git-log", "", "git log to include as well") gitLog = flag.String("git-log", "", "git log to include as well")
include = flag.String("include", "^.*$", "os/arch regexp to include") include = flag.String("include", "^.*$", "os/arch regexp to include")
exclude = flag.String("exclude", "^$", "os/arch regexp to exclude") exclude = flag.String("exclude", "^$", "os/arch regexp to exclude")
cgo = flag.Bool("cgo", false, "Use cgo for the build") cgo = flag.Bool("cgo", false, "Use cgo for the build")
noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.") noClean = flag.Bool("no-clean", false, "Don't clean the build directory before running.")
tags = flag.String("tags", "", "Space separated list of build tags") tags = flag.String("tags", "", "Space separated list of build tags")
buildmode = flag.String("buildmode", "", "Passed to go build -buildmode flag") buildmode = flag.String("buildmode", "", "Passed to go build -buildmode flag")
compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.") compileOnly = flag.Bool("compile-only", false, "Just build the binary, not the zip.")
extraEnv = flag.String("env", "", "comma separated list of VAR=VALUE env vars to set")
macOSSDK = flag.String("macos-sdk", "", "macOS SDK to use")
macOSArch = flag.String("macos-arch", "", "macOS arch to use")
extraCgoCFlags = flag.String("cgo-cflags", "", "extra CGO_CFLAGS")
extraCgoLdFlags = flag.String("cgo-ldflags", "", "extra CGO_LDFLAGS")
) )
// GOOS/GOARCH pairs we build for // GOOS/GOARCH pairs we build for
@@ -47,6 +52,7 @@ var osarches = []string{
"windows/386", "windows/386",
"windows/amd64", "windows/amd64",
"darwin/amd64", "darwin/amd64",
"darwin/arm64",
"linux/386", "linux/386",
"linux/amd64", "linux/amd64",
"linux/arm", "linux/arm",
@@ -279,6 +285,15 @@ func stripVersion(goarch string) string {
return goarch[:i] return goarch[:i]
} }
// run the command returning trimmed output
func runOut(command ...string) string {
out, err := exec.Command(command[0], command[1:]...).Output()
if err != nil {
log.Fatalf("Failed to run %q: %v", command, err)
}
return strings.TrimSpace(string(out))
}
// build the binary in dir returning success or failure // build the binary in dir returning success or failure
func compileArch(version, goos, goarch, dir string) bool { func compileArch(version, goos, goarch, dir string) bool {
log.Printf("Compiling %s/%s into %s", goos, goarch, dir) log.Printf("Compiling %s/%s into %s", goos, goarch, dir)
@@ -314,6 +329,35 @@ func compileArch(version, goos, goarch, dir string) bool {
"GOOS=" + goos, "GOOS=" + goos,
"GOARCH=" + stripVersion(goarch), "GOARCH=" + stripVersion(goarch),
} }
if *extraEnv != "" {
env = append(env, strings.Split(*extraEnv, ",")...)
}
var (
cgoCFlags []string
cgoLdFlags []string
)
if *macOSSDK != "" {
flag := "-isysroot " + runOut("xcrun", "--sdk", *macOSSDK, "--show-sdk-path")
cgoCFlags = append(cgoCFlags, flag)
cgoLdFlags = append(cgoLdFlags, flag)
}
if *macOSArch != "" {
flag := "-arch " + *macOSArch
cgoCFlags = append(cgoCFlags, flag)
cgoLdFlags = append(cgoLdFlags, flag)
}
if *extraCgoCFlags != "" {
cgoCFlags = append(cgoCFlags, *extraCgoCFlags)
}
if *extraCgoLdFlags != "" {
cgoLdFlags = append(cgoLdFlags, *extraCgoLdFlags)
}
if len(cgoCFlags) > 0 {
env = append(env, "CGO_CFLAGS="+strings.Join(cgoCFlags, " "))
}
if len(cgoLdFlags) > 0 {
env = append(env, "CGO_LDFLAGS="+strings.Join(cgoLdFlags, " "))
}
if !*cgo { if !*cgo {
env = append(env, "CGO_ENABLED=0") env = append(env, "CGO_ENABLED=0")
} else { } else {
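For cgo macOS builds the new flags boil down to extra -isysroot and -arch entries in CGO_CFLAGS and CGO_LDFLAGS. A small sketch of that composition with hypothetical values (the SDK path below is invented; the real one comes from `xcrun --show-sdk-path`):

// Not the build script itself: compose the cgo env the same way, with stub inputs.
package main

import (
	"fmt"
	"strings"
)

func cgoEnv(sdkPath, arch, extra string) []string {
	var flags []string
	if sdkPath != "" {
		flags = append(flags, "-isysroot "+sdkPath)
	}
	if arch != "" {
		flags = append(flags, "-arch "+arch)
	}
	if extra != "" {
		flags = append(flags, extra)
	}
	if len(flags) == 0 {
		return nil
	}
	joined := strings.Join(flags, " ")
	return []string{"CGO_CFLAGS=" + joined, "CGO_LDFLAGS=" + joined}
}

func main() {
	// Hypothetical SDK path and arch for a darwin/arm64 build.
	for _, e := range cgoEnv("/Applications/Xcode.app/SDKs/MacOSX11.1.sdk", "arm64", "") {
		fmt.Println(e)
	}
}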


@@ -62,6 +62,7 @@ docs = [
"sftp.md", "sftp.md",
"sugarsync.md", "sugarsync.md",
"tardigrade.md", "tardigrade.md",
"uptobox.md",
"union.md", "union.md",
"webdav.md", "webdav.md",
"yandex.md", "yandex.md",


@@ -1,146 +0,0 @@
// +build ignore
// Build a directory structure with the required number of files in
//
// Run with go run make_test_files.go [flag] <directory>
package main
import (
cryptrand "crypto/rand"
"flag"
"io"
"log"
"math/rand"
"os"
"path/filepath"
)
var (
// Flags
numberOfFiles = flag.Int("n", 1000, "Number of files to create")
averageFilesPerDirectory = flag.Int("files-per-directory", 10, "Average number of files per directory")
maxDepth = flag.Int("max-depth", 10, "Maximum depth of directory hierarchy")
minFileSize = flag.Int64("min-size", 0, "Minimum size of file to create")
maxFileSize = flag.Int64("max-size", 100, "Maximum size of files to create")
minFileNameLength = flag.Int("min-name-length", 4, "Minimum size of file to create")
maxFileNameLength = flag.Int("max-name-length", 12, "Maximum size of files to create")
directoriesToCreate int
totalDirectories int
fileNames = map[string]struct{}{} // keep a note of which file name we've used already
)
// randomString create a random string for test purposes
func randomString(n int) string {
const (
vowel = "aeiou"
consonant = "bcdfghjklmnpqrstvwxyz"
digit = "0123456789"
)
pattern := []string{consonant, vowel, consonant, vowel, consonant, vowel, consonant, digit}
out := make([]byte, n)
p := 0
for i := range out {
source := pattern[p]
p = (p + 1) % len(pattern)
out[i] = source[rand.Intn(len(source))]
}
return string(out)
}
// fileName creates a unique random file or directory name
func fileName() (name string) {
for {
length := rand.Intn(*maxFileNameLength-*minFileNameLength) + *minFileNameLength
name = randomString(length)
if _, found := fileNames[name]; !found {
break
}
}
fileNames[name] = struct{}{}
return name
}
// dir is a directory in the directory hierarchy being built up
type dir struct {
name string
depth int
children []*dir
parent *dir
}
// Create a random directory hierarchy under d
func (d *dir) createDirectories() {
for totalDirectories < directoriesToCreate {
newDir := &dir{
name: fileName(),
depth: d.depth + 1,
parent: d,
}
d.children = append(d.children, newDir)
totalDirectories++
switch rand.Intn(4) {
case 0:
if d.depth < *maxDepth {
newDir.createDirectories()
}
case 1:
return
}
}
return
}
// list the directory hierarchy
func (d *dir) list(path string, output []string) []string {
dirPath := filepath.Join(path, d.name)
output = append(output, dirPath)
for _, subDir := range d.children {
output = subDir.list(dirPath, output)
}
return output
}
// writeFile writes a random file at dir/name
func writeFile(dir, name string) {
err := os.MkdirAll(dir, 0777)
if err != nil {
log.Fatalf("Failed to make directory %q: %v", dir, err)
}
path := filepath.Join(dir, name)
fd, err := os.Create(path)
if err != nil {
log.Fatalf("Failed to open file %q: %v", path, err)
}
size := rand.Int63n(*maxFileSize-*minFileSize) + *minFileSize
_, err = io.CopyN(fd, cryptrand.Reader, size)
if err != nil {
log.Fatalf("Failed to write %v bytes to file %q: %v", size, path, err)
}
err = fd.Close()
if err != nil {
log.Fatalf("Failed to close file %q: %v", path, err)
}
}
func main() {
flag.Parse()
args := flag.Args()
if len(args) != 1 {
log.Fatalf("Require 1 directory argument")
}
outputDirectory := args[0]
log.Printf("Output dir %q", outputDirectory)
directoriesToCreate = *numberOfFiles / *averageFilesPerDirectory
log.Printf("directoriesToCreate %v", directoriesToCreate)
root := &dir{name: outputDirectory, depth: 1}
for totalDirectories < directoriesToCreate {
root.createDirectories()
}
dirs := root.list("", []string{})
for i := 0; i < *numberOfFiles; i++ {
dir := dirs[rand.Intn(len(dirs))]
writeFile(dir, fileName())
}
}


@@ -44,10 +44,10 @@ var commandDefinition = &cobra.Command{
Use: "about remote:", Use: "about remote:",
Short: `Get quota information from the remote.`, Short: `Get quota information from the remote.`,
Long: ` Long: `
` + "`rclone about`" + `prints quota information about a remote to standard ` + "`rclone about`" + ` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents. output. The output is typically used, free, quota and trash contents.
E.g. Typical output from` + "`rclone about remote:`" + `is: E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17G Total: 17G
Used: 7.444G Used: 7.444G
@@ -75,7 +75,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Trashed: 104857602 Trashed: 104857602
Other: 8849156022 Other: 8849156022
A ` + "`--json`" + `flag generates conveniently computer readable output, e.g. A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
{ {
"total": 18253611008, "total": 18253611008,


@@ -18,14 +18,12 @@ import (
_ "github.com/rclone/rclone/cmd/copyurl" _ "github.com/rclone/rclone/cmd/copyurl"
_ "github.com/rclone/rclone/cmd/cryptcheck" _ "github.com/rclone/rclone/cmd/cryptcheck"
_ "github.com/rclone/rclone/cmd/cryptdecode" _ "github.com/rclone/rclone/cmd/cryptdecode"
_ "github.com/rclone/rclone/cmd/dbhashsum"
_ "github.com/rclone/rclone/cmd/dedupe" _ "github.com/rclone/rclone/cmd/dedupe"
_ "github.com/rclone/rclone/cmd/delete" _ "github.com/rclone/rclone/cmd/delete"
_ "github.com/rclone/rclone/cmd/deletefile" _ "github.com/rclone/rclone/cmd/deletefile"
_ "github.com/rclone/rclone/cmd/genautocomplete" _ "github.com/rclone/rclone/cmd/genautocomplete"
_ "github.com/rclone/rclone/cmd/gendocs" _ "github.com/rclone/rclone/cmd/gendocs"
_ "github.com/rclone/rclone/cmd/hashsum" _ "github.com/rclone/rclone/cmd/hashsum"
_ "github.com/rclone/rclone/cmd/info"
_ "github.com/rclone/rclone/cmd/link" _ "github.com/rclone/rclone/cmd/link"
_ "github.com/rclone/rclone/cmd/listremotes" _ "github.com/rclone/rclone/cmd/listremotes"
_ "github.com/rclone/rclone/cmd/ls" _ "github.com/rclone/rclone/cmd/ls"
@@ -34,7 +32,6 @@ import (
_ "github.com/rclone/rclone/cmd/lsjson" _ "github.com/rclone/rclone/cmd/lsjson"
_ "github.com/rclone/rclone/cmd/lsl" _ "github.com/rclone/rclone/cmd/lsl"
_ "github.com/rclone/rclone/cmd/md5sum" _ "github.com/rclone/rclone/cmd/md5sum"
_ "github.com/rclone/rclone/cmd/memtest"
_ "github.com/rclone/rclone/cmd/mkdir" _ "github.com/rclone/rclone/cmd/mkdir"
_ "github.com/rclone/rclone/cmd/mount" _ "github.com/rclone/rclone/cmd/mount"
_ "github.com/rclone/rclone/cmd/mount2" _ "github.com/rclone/rclone/cmd/mount2"
@@ -49,11 +46,18 @@ import (
_ "github.com/rclone/rclone/cmd/reveal" _ "github.com/rclone/rclone/cmd/reveal"
_ "github.com/rclone/rclone/cmd/rmdir" _ "github.com/rclone/rclone/cmd/rmdir"
_ "github.com/rclone/rclone/cmd/rmdirs" _ "github.com/rclone/rclone/cmd/rmdirs"
_ "github.com/rclone/rclone/cmd/selfupdate"
_ "github.com/rclone/rclone/cmd/serve" _ "github.com/rclone/rclone/cmd/serve"
_ "github.com/rclone/rclone/cmd/settier" _ "github.com/rclone/rclone/cmd/settier"
_ "github.com/rclone/rclone/cmd/sha1sum" _ "github.com/rclone/rclone/cmd/sha1sum"
_ "github.com/rclone/rclone/cmd/size" _ "github.com/rclone/rclone/cmd/size"
_ "github.com/rclone/rclone/cmd/sync" _ "github.com/rclone/rclone/cmd/sync"
_ "github.com/rclone/rclone/cmd/test"
_ "github.com/rclone/rclone/cmd/test/changenotify"
_ "github.com/rclone/rclone/cmd/test/histogram"
_ "github.com/rclone/rclone/cmd/test/info"
_ "github.com/rclone/rclone/cmd/test/makefiles"
_ "github.com/rclone/rclone/cmd/test/memory"
_ "github.com/rclone/rclone/cmd/touch" _ "github.com/rclone/rclone/cmd/touch"
_ "github.com/rclone/rclone/cmd/tree" _ "github.com/rclone/rclone/cmd/tree"
_ "github.com/rclone/rclone/cmd/version" _ "github.com/rclone/rclone/cmd/version"


@@ -29,8 +29,8 @@ rclone config.
Use the --auth-no-open-browser to prevent rclone to open auth
link in default browser automatically.`,
- Run: func(command *cobra.Command, args []string) {
+ RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 3, command, args)
- config.Authorize(context.Background(), args, noAutoBrowser)
+ return config.Authorize(context.Background(), args, noAutoBrowser)
},
}

Some files were not shown because too many files have changed in this diff.