mirror of https://github.com/rclone/rclone.git synced 2025-12-11 13:53:15 +00:00

Compare commits


135 Commits

Author SHA1 Message Date
Nick Craig-Wood
bede3a5d48 fs: dump config for an Fs before creation - FIXME TESTING DO NOT MERGE
See: https://forum.rclone.org/t/authenticating-to-box-using-connection-strings-and-config-file-fails/23333/7
2021-04-08 12:39:03 +01:00
Nick Craig-Wood
f52ae75a51 rclone authorize: Send and receive extra config options to fix oauth
Before this change any backends which required extra config in the
oauth phase (like the `region` for zoho) didn't work with `rclone
authorize`.

This change serializes the extra config and passes it to `rclone
authorize` and returns new config items to be set from rclone
authorize.

`rclone authorize` will still accept its previous configuration
parameters for use with old rclones.

Fixes #5178
2021-04-08 12:34:15 +01:00
Nick Craig-Wood
9d5c5bf7ab fs: add Options.NonDefault to read options which aren't at their default #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
53573b4a09 configmap: Add Encode and Decode methods to Simple for command line encoding #5178 2021-04-08 12:34:15 +01:00
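
As an editorial illustration of the command line encoding these commits add, here is a minimal sketch in the same spirit, assuming a JSON-then-base64 scheme (the real methods live on configmap.Simple in fs/config/configmap; the exact encoding here is an assumption):

```
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// Simple is a stand-in for configmap.Simple: a flat map of config options.
type Simple map[string]string

// Encode marshals the map to JSON and base64-encodes it so it can be
// passed safely as a single command line argument.
func (c Simple) Encode() (string, error) {
	buf, err := json.Marshal(c)
	if err != nil {
		return "", err
	}
	return base64.RawStdEncoding.EncodeToString(buf), nil
}

// Decode reverses Encode, filling the receiver with the decoded options.
func (c Simple) Decode(in string) error {
	buf, err := base64.RawStdEncoding.DecodeString(in)
	if err != nil {
		return err
	}
	return json.Unmarshal(buf, &c)
}

func main() {
	extra := Simple{"region": "eu"} // e.g. zoho's region gathered during oauth
	blob, _ := extra.Encode()
	fmt.Println(blob) // eyJyZWdpb24iOiJldSJ9

	got := Simple{}
	_ = got.Decode(blob)
	fmt.Println(got["region"]) // eu
}
```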
Nick Craig-Wood
3622e064f5 configmap: Add priorities to configmap Setters #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
6d28ea7ab5 fs: factor config override detection into its own function #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
b9fd02039b authorize: refactor to use new config interfaces #5178 2021-04-08 12:34:15 +01:00
Nick Craig-Wood
1a41c930f3 configmap: add ClearSetters to get rid of all setters #5178 2021-04-08 12:34:15 +01:00
albertony
ddb7eb6e0a docs: fixed some typos 2021-04-08 10:19:03 +02:00
buengese
c114695a66 zoho: do not ask for mountpoint twice when using headless setup 2021-04-08 00:23:27 +02:00
Nick Craig-Wood
fcba51557f dropbox: set visibility in link sharing when --expire is set
Note that due to a bug in the dropbox SDK you'll need to set --expire
to access this.

See: https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
See: https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211
2021-04-07 13:58:37 +01:00
Nick Craig-Wood
9393225a1d link: use "off" value for unset expiry 2021-04-07 13:58:37 +01:00
albertony
3d3ff61f74 docs: minor cleanup of space around code section 2021-04-07 08:47:29 +02:00
albertony
d98f192425 docs: WinFsp 2021 is out of beta 2021-04-07 08:13:40 +02:00
Nick Craig-Wood
54771e4402 sync: fix incorrect error reported by graceful cutoff - fixes #5203
Before this change, a sync which was finished with a graceful transfer
cutoff could return "context canceled" instead of the correct error.

This fixes the problem by ignoring "context canceled" errors if we
have done a graceful stop.
2021-04-06 13:08:42 +01:00
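
The gist of the fix, as a sketch (the gracefulStop flag is illustrative, not rclone's actual field name):

```
package main

import (
	"context"
	"errors"
	"fmt"
)

// filterGracefulCancel sketches the idea: once a graceful transfer
// cutoff has been requested, "context canceled" is expected and must
// not be reported as the sync's error.
func filterGracefulCancel(err error, gracefulStop bool) error {
	if gracefulStop && errors.Is(err, context.Canceled) {
		return nil // the cancellation was deliberate, not a failure
	}
	return err
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // simulate the graceful cutoff cancelling the context
	fmt.Println(filterGracefulCancel(ctx.Err(), true))  // <nil>
	fmt.Println(filterGracefulCancel(ctx.Err(), false)) // context canceled
}
```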
Nick Craig-Wood
dc286529bc drive: fix backend copyid of google doc to directory - fixes #5196
Before this change the google doc was being copied to the directory
without an extension.
2021-04-06 11:46:52 +01:00
Nick Craig-Wood
7dc7c021db sftp: fix Update ReadFrom failed: failed to send packet: EOF errors
In

a3fcadddc8 sftp: close idle connections after --sftp-idle-timeout (1m by default)

Idle SFTP connections were closed after 1 minute. However, due to the
way SFTP sessions are multiplexed over a single SSH connection, this
meant that if uploads or downloads went on for more than one minute
they failed with "EOF" errors as their underlying connection was
closed.

This fixes the problem by not clearing idle connections if there are
any transfers in progress.

Fixes #5197
2021-04-06 10:01:49 +01:00
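
A sketch of the guard, assuming an in-use counter on the connection pool (type and field names are illustrative, not rclone's):

```
package main

import (
	"fmt"
	"sync/atomic"
)

// pool is an illustrative stand-in for the sftp backend's connection
// pool; inUse counts transfers currently multiplexed over the SSH
// connection.
type pool struct {
	inUse int32
	conns []string
}

// closeIdle drops idle connections, but only when nothing is
// transferring, mirroring the fix described above.
func (p *pool) closeIdle() {
	if atomic.LoadInt32(&p.inUse) > 0 {
		return // an open transfer shares the connection; keep it alive
	}
	p.conns = nil
}

func main() {
	p := &pool{conns: []string{"c1"}}
	atomic.AddInt32(&p.inUse, 1) // a transfer starts
	p.closeIdle()
	fmt.Println(len(p.conns)) // 1: not closed mid-transfer
	atomic.AddInt32(&p.inUse, -1)
	p.closeIdle()
	fmt.Println(len(p.conns)) // 0: safe to close now
}
```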
Nick Craig-Wood
fe1aa13069 sftp: revert sftp library to v1.12.0 from v1.13.0 to fix performance regression #5197
This reverts the library update done in this commit.

713f8f357d sftp: fix "file not found" errors for read once servers

Reverting this commit triples the performance to a far away sftp server.

See: https://github.com/pkg/sftp/issues/426
2021-04-06 10:01:49 +01:00
Nick Craig-Wood
5fa8e7d957 Add Nick Gaya to contributors 2021-04-06 10:01:49 +01:00
Nick Gaya
9db7c51eaa sync: don't warn about --no-traverse when --files-from is set 2021-04-05 20:36:39 +01:00
Ivan Andreev
3859fe2f52 cmd/version: print os/version, kernel and bitness (#5204)
Related to #5121

Note: OpenBSD is a stub for now. This will be fixed once the upstream PR is resolved:
https://github.com/shirou/gopsutil/pull/993
2021-04-05 21:53:09 +03:00
buengese
0caf417779 zoho: fix error when region isn't set 2021-04-05 15:11:30 +02:00
Ivan Andreev
9eab258ffb build: add build tag noselfupdate
Allow downstream packaging to build rclone without the selfupdate command:
$ go build -tags noselfupdate

Fixes #5187
2021-04-04 11:22:09 +03:00
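
For reference, this is how such a build tag gates code out at compile time (Go 1.16-era `+build` syntax; the file shown is a sketch, not rclone's actual source):

```
// +build !noselfupdate

// Files carrying this constraint are dropped when building with
// `go build -tags noselfupdate`, so the resulting binary simply
// doesn't contain the selfupdate command.
package selfupdate
```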
Nick Gaya
7df57cd625 contributing.md: update setup instructions for go1.16 2021-04-04 09:10:43 +01:00
Nick Gaya
1fd9b483c8 onedrive: add list_chunk option
Add --onedrive-list-chunk option similar to existing options for azureblob, drive, and s3.

Suggested as a workaround for a OneDrive pagination bug

See: https://forum.rclone.org/t/unexpected-duplicates-on-onedrive-with-0s-in-filename/23164/8
2021-04-04 09:08:16 +01:00
Ivan Andreev
93353c431b selfupdate: don't detect FUSE if build is static
Before this patch selfupdate detected ANY build with the cmount tag as a build
having libFUSE capabilities. However, only dynamic builds really have them.
The official Linux builds are static and have the cmount tag as of the time
of this writing. This made it impossible to update the official Linux binaries.
This patch fixes that. The build can be fixed independently.
2021-04-03 21:54:15 +03:00
Nick Craig-Wood
886dfd23e2 fichier: check if more than one upload link is returned #5152 2021-04-03 15:00:50 +01:00
Nick Craig-Wood
116a8021bb drive: switch to the Drives API for looking up shared drives - fixes #3139
Before this change rclone used the deprecated teamdrives API. This
change uses the new drives API (which appears to be the teamdrives API
renamed).
2021-04-03 14:21:20 +01:00
Nick Craig-Wood
9e2fbe0f1a install.sh: fix macOS arm64 download - fixes #5183 2021-03-31 21:48:31 +01:00
Nick Craig-Wood
6d65d116df Start v1.56.0-DEV development 2021-03-31 19:51:43 +01:00
Ivan Andreev
edaeb51ea9 backlog: ticket templates should recommend updating rclone
Aligns the Bug and Feature GitHub templates with the rclone forum
and instructs the submitter to proactively update rclone.
2021-03-31 19:13:50 +01:00
Nick Craig-Wood
6e2e2d9eb2 Version v1.55.0 2021-03-31 19:12:08 +01:00
Nick Craig-Wood
20e15e52a9 vfs: fix Create causing windows explorer to truncate files on CTRL-C CTRL-V
Before this fix, doing CTRL-C and CTRL-V on a file in Windows explorer
caused the **source** and the destination to be truncated to 0.

This is because Windows opens the source file with Create with flags
`O_RDWR|O_CREATE|O_EXCL` but doesn't write to it - it only reads from
it. Rclone was taking the call to Create as a signal to always make a
new file, but this is incorrect.

This fix reads an existing file from the directory if it exists when
Create is called rather than always creating a new one. This fixes the
problem.

Fixes #5181
2021-03-31 14:48:02 +01:00
Nick Craig-Wood
d0f8b4f479 fs/cache: fix recreation of backends after they have expired
Before this change, on the first attempt to create a backend we used a
non-canonicalized string. When the backend expired the second attempt
to create it would use the canonicalized string (because it was in the
remap cache) which would fail because it was now `name{XXXX}:`

This change makes sure that whenever we create a backend we always use
the non-canonicalized string.

See: https://forum.rclone.org/t/connection-string-inconsistencies-on-beta/23171
2021-03-30 18:46:30 +01:00
Nick Craig-Wood
58d82a5c73 rc: allow fs= params to be a JSON blob 2021-03-30 17:07:27 +01:00
Nick Craig-Wood
c0c74003f2 fs/cache: add --fs-cache-expire-duration to control the fs cache
This commit makes the previously statically configured fs cache configurable.

It introduces two parameters `--fs-cache-expire-duration` and
`--fs-cache-expire-interval` to control the caching of the items.

It also adds new interfaces to lib/cache to set these.
2021-03-30 12:46:47 +01:00
Nick Craig-Wood
60bc7a079a rc: factor rc.Error out of rcserver for re-use in librclone #4891 2021-03-30 12:46:05 +01:00
Nick Craig-Wood
20c5ca08fb test_all: fix crash when using -clean 2021-03-29 23:12:53 +01:00
Nick Craig-Wood
fc57648b75 lib/rest: fix multipart uploads stopping on context cancel
Before this change, when the context was cancelled (due to
--max-duration for example) rclone could deadlock during multipart
uploads.

This change fixes the problem by introducing another go routine to
monitor the context and close the pipe with an error when the context
errors.
2021-03-29 19:09:47 +01:00
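
A standard-library sketch of that pattern - a watcher goroutine closes the pipe with the context's error so blocked readers wake up instead of deadlocking (not the lib/rest code itself):

```
package main

import (
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()

	pr, pw := io.Pipe()
	go func() {
		<-ctx.Done()
		_ = pw.CloseWithError(ctx.Err()) // unblock any pending Read
	}()

	_, err := ioutil.ReadAll(pr) // would block forever without the watcher
	fmt.Println(err)             // context deadline exceeded
}
```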
Nick Craig-Wood
8c5c91e68f lib/readers: add NewContextReader to error on context errors 2021-03-29 19:09:47 +01:00
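
The idea behind such a reader, sketched as a minimal version (the real implementation lives in lib/readers):

```
package readers

import (
	"context"
	"io"
)

// contextReader wraps an io.Reader and fails Reads once the context
// has an error, so long transfers notice cancellation promptly.
type contextReader struct {
	ctx context.Context
	r   io.Reader
}

// NewContextReader returns a reader that errors when ctx does.
func NewContextReader(ctx context.Context, r io.Reader) io.Reader {
	return &contextReader{ctx: ctx, r: r}
}

func (cr *contextReader) Read(p []byte) (int, error) {
	if err := cr.ctx.Err(); err != nil {
		return 0, err
	}
	return cr.r.Read(p)
}
```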
Nick Craig-Wood
9dd39e8524 Add x0b to contributors 2021-03-29 19:09:47 +01:00
albertony
9c9186183d docs: add short description of configuration file format (#5142)
Fixes #572
2021-03-27 17:26:01 +01:00
Nick Craig-Wood
2ccf416e83 build: add version to android builds and fix upload 2021-03-26 09:18:54 +00:00
x0b
5577c7b760 build: replace xgo with NDK for Android builds 2021-03-26 09:18:54 +00:00
Nick Craig-Wood
f6dbb98a1d cmount: update cgofuse to the latest version to bring in macfuse 4 fix 2021-03-25 09:02:17 +00:00
Nick Craig-Wood
d042f3194f b2: fix html files downloaded via cloudflare
When reading files from B2 via cloudflare using --b2-download-url
cloudflare strips the Content-Length headers (presumably so it can
inject stuff into the body).

This caused rclone to think the file was corrupted as the length
didn't match.

The patch uses the old length read from the listing if there is no
Content-Length.

See: https://forum.rclone.org/t/b2-cloudflare-error-directory-not-found/23026
2021-03-24 17:06:59 +00:00
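
A sketch of the fallback (names illustrative): `http.Response.ContentLength` is -1 when the header is absent, in which case the size recorded from the earlier bucket listing is used:

```
package main

import (
	"fmt"
	"net/http"
)

// effectiveSize returns the Content-Length from the response when
// present, otherwise the size recorded from the bucket listing.
func effectiveSize(resp *http.Response, listedSize int64) int64 {
	if resp.ContentLength >= 0 {
		return resp.ContentLength
	}
	return listedSize // header stripped, e.g. by Cloudflare
}

func main() {
	fmt.Println(effectiveSize(&http.Response{ContentLength: -1}, 1234)) // 1234
	fmt.Println(effectiveSize(&http.Response{ContentLength: 42}, 1234)) // 42
}
```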
Nick Craig-Wood
524cd327e6 build: update notes on how to build the release manually with docker 2021-03-24 14:22:27 +00:00
Nick Craig-Wood
b8c1cf7451 union: fix initialisation broken in refactor - fixes #5139
This commit broke the initialisation of the union backend

f17d7c0012 union: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996

This patch fixes it.
2021-03-24 09:47:38 +00:00
Nick Craig-Wood
0fa68bda02 fspath: fix path parsing on Windows - fixes #5143
In this commit

8a46dd1b57 fspath: Implement a connection string parser #4996

The parsing code was re-written. This didn't quite work as before,
failing to adjust local paths on Windows when it should.

This patch fixes the problem and implements tests for it.
2021-03-24 09:47:03 +00:00
Nick Craig-Wood
1378bfee63 box: fix transfers getting stuck on token expiry after API change
Box recently changed their API, changing the case of returned API items

> On May 10th, 2021, as part of our continued infrastructure upgrade,
> Box's API response headers will standardize to return in a case
> insensitive manner, in line with industry best practices and our API
> documentation. Applications that are using these headers, such as
> "location" and "retry-after", will need to verify that their
> applications are checking for these headers in a case-insensitive
> fashion.

Rclone was reading the raw headers from the `http.Header` and not
using the `Get` accessor method which meant that it was sensitive to
case changes.

This fixes the problem by using the `Get` accessor method.

See: https://forum.rclone.org/t/box-backend-incompatible-with-box-api-changes-being-deployed/22972
2021-03-24 09:45:17 +00:00
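
The underlying Go behaviour, as a standalone illustration (not the box backend's exact code): `http.Header` stores keys in canonical form and raw map indexing is case sensitive, while `Get` canonicalises whatever key it is given:

```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Set("Retry-After", "30") // stored under the canonical key

	fmt.Println(h["retry-after"])     // [] - raw map indexing is case sensitive
	fmt.Println(h.Get("retry-after")) // 30 - Get canonicalises the lookup key
}
```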
nguyenhuuluan434
d6870473a1 swift: implement copying large objects 2021-03-24 08:56:39 +00:00
albertony
12cd322643 crypt: log hash ok on upload 2021-03-23 18:36:51 +01:00
Ivan Andreev
1406b6c3c9 install.sh: fail on download errors
This patch makes install.sh always run curl with flag "-f"
so it fails on download errors.
2021-03-23 11:29:00 +03:00
Ivan Andreev
088a83872d install.sh: fix some shellcheck warnings 2021-03-23 11:29:00 +03:00
Nick Craig-Wood
cb46092883 lib/atexit: unregister interrupt handler once it has fired so users can interrupt again 2021-03-23 08:03:00 +00:00
Nick Craig-Wood
a2cd5d8fa3 lib/atexit: fix deadlock calling Finalise while Run is running 2021-03-23 08:03:00 +00:00
Ivan Andreev
1fe2460e38 selfupdate: abort if updating would discard fuse semantics 2021-03-22 22:55:24 +03:00
Ivan Andreev
ef5c212f9b version: show build tags and type of executable
This patch modifies the output of `rclone version`.
The `os/arch` line is split into `os/type` and `os/arch`.
The `go version` line is now tagged as `go/version` for consistency.

Additionally the `go/linking` line tells whether rclone
was linked as a static or dynamic executable.
The new `go/tags` line shows a space separated list of build tags.

The info about linking and build tags is also added to the output
of the `core/version` RC endpoint.
2021-03-22 22:55:24 +03:00
Nick Craig-Wood
268a7ff7b8 rc: add a full set of stats to core/stats
This patch adds the missing stats to the output of core/stats

- totalChecks
- totalTransfers
- totalBytes
- eta

This now includes enough information to rebuild the normal stats
output from rclone including percentage completions and ETAs.

Fixes #5116
2021-03-22 10:10:36 +00:00
Nick Craig-Wood
b47d6001a9 vfs: fix directory renaming by renaming dirs cached in memory
Before this change, if a directory was renamed and it, or any of its
children, had virtual entries, they weren't flushed.

The consequence of this was that the directory path got out of sync with
the actual position of the directory in the tree, leading to listings
of the old directory rather than the new one.

The fix renames any directories remaining after the ForgetAll to have
the correct path which fixes the problem.

See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
a4c4ddf052 vfs: rename files in cache and cancel uploads on directory rename
Before this change rclone did not cancel uploads or rename the
cached files in the directory cache when a directory was renamed.

This caused issues with uploads arriving in the wrong place on bucket
based file systems.

See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
4cc2a7f342 mount: fix caching of old directories after renaming them
It was discovered that `rclone mount` (but not `rclone cmount`) cached
directories after a rename, which it shouldn't have done.

This caused IO errors when trying to access files in renamed
directories on bucket based file systems.

This turned out to be the kernel caching the directories as bazil/fuse
sets their expiry time to 60s for some reason.

This fix invalidates the relevant kernel cache entries for the
directories, which fixes the problem.

Fixes: #4977
See: https://forum.rclone.org/t/after-a-directory-renmane-using-mv-files-are-not-visible-any-longer/22797
2021-03-22 09:07:01 +00:00
Nick Craig-Wood
c72d2c67ed vfs: add debug dump function to dump the state of the VFS cache 2021-03-22 09:06:44 +00:00
Nick Craig-Wood
9deab5a563 dropbox: raise priority of rate limited message to INFO to make it more noticeable
If you exceed rate limits, dropbox tells you to wait for 300 seconds -
this is rather a long time for the user to be waiting for rclone to
finish, so emit a NOTICE level log instead of a DEBUG.
2021-03-22 09:04:25 +00:00
buengese
da5b0cb611 zoho: add forgotten setupRegion() to NewFs
- this finally fixes regions other than eu
2021-03-21 02:15:22 +01:00
buengese
0187bc494a zoho: replace client id 2021-03-21 02:15:22 +01:00
Ivan Andreev
2bdbf00fa3 selfupdate: add instructions on reverting the update (#5141) 2021-03-18 23:11:16 +03:00
Nick Craig-Wood
9ee3ad70e9 sftp: fix SetModTime stat failed: object not found with --sftp-set-modtime=false
Some sftp servers don't allow the user to access the file after upload.

In this case the error message indicates that using
--sftp-set-modtime=false would fix the problem. However it doesn't
because SetModTime does a stat call which can't be disabled.

    Update SetModTime failed: SetModTime stat failed: object not found

After upload this patch checks for an `object not found` error if
set_modtime == false and ignores it, returning the expected size of
the object instead.

It also makes SetModTime do nothing if set_modtime == false

https://forum.rclone.org/t/sftp-update-setmodtime-failed/22873
2021-03-18 16:31:51 +00:00
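
A sketch of the post-upload check described (function and error types here are illustrative, not the sftp backend's code):

```
package main

import (
	"errors"
	"fmt"
	"os"
)

// statAfterUpload sketches the behaviour: some servers deny access to
// the file after upload, so when set_modtime is false a not-found
// error is swallowed and the expected size is reported instead.
func statAfterUpload(stat func() (int64, error), setModtime bool, expected int64) (int64, error) {
	size, err := stat()
	if err != nil && !setModtime && errors.Is(err, os.ErrNotExist) {
		return expected, nil // server hides the uploaded file; trust what we sent
	}
	return size, err
}

func main() {
	notFound := func() (int64, error) { return 0, os.ErrNotExist }
	fmt.Println(statAfterUpload(notFound, false, 100)) // 100 <nil>
}
```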
albertony
ce182adf46 copyurl: add option to print resulting auto-filename (#5095)
Fixes #5094
2021-03-18 10:04:59 +01:00
albertony
97fc3b9046 rc: avoid +Inf value for speed in core/stats (#5134)
Fixes #5132
2021-03-18 10:02:30 +01:00
Nick Craig-Wood
e59acd16c6 drive: remove duplicated Context(ctx) calls
These were added by accident in

d9959b0271 drive: pass context on to drive SDK - this will help with cancellation

Which added lots of new Context() calls but duplicated some existing
ones.
2021-03-17 16:46:58 +00:00
albertony
acfd7e2403 docs: add note about limitation with pattern-list to filtering docs (#5118)
Fixes #5112
2021-03-17 14:34:46 +01:00
Nick Craig-Wood
f47893873d fs: fix failed token refresh on mounts created via the rc
Users have noticed that backends created via the rc have been failing
to refresh their tokens with this error:

    Token refresh failed try 1/5: context canceled

This is because the rc server cancels the context used to make the
backend when the request has finished. This same context is used to
refresh the token and the oauth library checks to see if the context
has been cancelled.

This patch creates a new context for the cached backends and copies
the global and filter config into the new context.

See: https://forum.rclone.org/t/google-drive-token-refresh-failed/22283
2021-03-16 16:29:22 +00:00
Nick Craig-Wood
b9a015e5b9 s3: fix --s3-profile which wasn't working - fixes #4757 2021-03-16 16:25:07 +00:00
Nick Craig-Wood
d72d9e591a ftp: retry connections and logins on 421 errors #3984
Before this we just failed if the ftp connection or login failed.

This change adds a pacer just for the ftp connect and retries if the
connection fails to Dial or the login returns a 421 error.
2021-03-16 16:17:22 +00:00
Nick Craig-Wood
df451e1e70 ftp: add --ftp-close-timeout flag for use with awkward ftp servers #3984 2021-03-16 16:17:22 +00:00
Nick Craig-Wood
d9959b0271 drive: pass context on to drive SDK - this will help with cancellation 2021-03-16 16:17:22 +00:00
Nick Craig-Wood
f2c0f82fc6 backends: Add context checking to remaining backends #4504
This is a follow up to 4013bc4a4c which missed some backends.

It adds a ctx parameter to shouldRetry and checks it.
2021-03-16 16:17:22 +00:00
albertony
f76c6cc893 docs: describe how to bypass loading of config file
Fixes #5125
2021-03-16 14:25:00 +00:00
Nick Craig-Wood
5e95877840 vfs: fix modtime set if --vfs-cache-mode writes/full and no write
When using --vfs-cache-mode writes or full, if a file was opened with
write intent, the modtime was set, and the file was closed without
being modified, the modtime would never be written back to storage.

The sequence of events

- app opens file with write intent
- app does set modtime
- rclone sets the modtime on the cache file, but not the remote file
  because it is open for write and can't be set yet
- app closes the file without changing it
- rclone doesn't upload the file because the file wasn't changed so
  the modtime doesn't get updated

This fixes the problem by making sure any unapplied modtime changes
are applied even if the file is not modified when being closed.

Fixes #4795
2021-03-16 13:36:48 +00:00
Nick Craig-Wood
8b491f7f3d vfs: fix modtimes changing by fractional seconds after upload #4763
Before this change, rclone would return the modification time of the
cache file or the pending modtime, which could be more accurate than
the modtime the backend was capable of storing.

This meant that the modtime would change slightly when the item was
actually uploaded.

For example modification times on Google Drive would be rounded to the
nearest millisecond.

This fixes the VFS layer to always return modtimes directly from an
object stored on the remote, or rounded to the precision that the
remote is capable of.
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
aea8776a43 vfs: fix modtimes not updating when writing via cache - fixes #4763
This reads modtime from a dirty cache item if it exists. This mirrors
the way reading the size works.

This fixes the mod time not updating when the file is written, only
when the writeback completes.

See: https://forum.rclone.org/t/rclone-mount-and-changing-timestamps-after-writes/22629
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
c387eb8c09 vfs: don't set modification time if it was already correct
Before this change, rclone would set the modification time of an
object after it had been uploaded. However with --vfs-cache-mode
writes and above, the modification time of the object is already
correct as the cache backing file gets set with the correct
modification time before upload.

Setting the modification time causes another version to be created on
backends such as S3 so it should be avoided if possible.

This change checks to see if the modification time needs changing and
only sets it if necessary.

See: https://forum.rclone.org/t/produce-2-versions-when-overwrite-an-object-in-min-io/19634
2021-03-16 13:31:47 +00:00
Nick Craig-Wood
a12b2746b4 fs: make sure backends with additional config have a different name #4996
Backends for which additional config is detected (in the config string
or on the command line or as environment variables) will gain a suffix
`{XXXXX}` where `XXXX` is a base64 encoded md5hash of the config
string.

This fixes backend caching with config string remotes.

This much requested feature now works properly:

    rclone copy -vv drive,shared_with_me:file.txt drive:
2021-03-15 19:22:07 +00:00
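
A sketch of the suffix idea: hash the extra config string and append a short tag so `drive,shared_with_me:` caches separately from plain `drive:`. The hash length and base64 alphabet below are assumptions, not rclone's exact encoding:

```
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

// suffix derives a short tag from the extra config string so backends
// with different on-the-fly config get distinct cache names.
func suffix(configString string) string {
	sum := md5.Sum([]byte(configString))
	// length and alphabet are illustrative assumptions
	return "{" + base64.RawURLEncoding.EncodeToString(sum[:])[:10] + "}"
}

func main() {
	fmt.Println("drive" + suffix("shared_with_me=true") + ":")
}
```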
Nick Craig-Wood
3dbef2b2fd fs: make sure we don't save on the fly remote config to the config file #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f111e0eaf8 fs: enable configmap to be able to tell values set vs config file values #4996
This adds AddOverrideGetter and GetOverride methods to config map and
uses them in fs.ConfigMap.

This enables us to tell which values have been set and which are just
read from the config file or at their defaults.

This also deletes the unused AddGetters method in configmap.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
96207f342c configmap: add consistent String() method to configmap.Simple #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
e25ac4dcf0 fs: Use connection string config as highest priority config #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
28f6efe955 cmd: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f17d7c0012 union: refactor to use fspath.SplitFs instead of fs.ParseRemote #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
3761cf68b4 chunker: refactor to use fspath.SplitFs instead of fspath.Parse #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
71554c1371 fspath: factor Split into SplitFs and Split #4996 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
8a46dd1b57 fspath: Implement a connection string parser #4996
This is implemented as a state machine parser so it can emit sensible
error messages.

Connection strings are not yet used elsewhere in rclone - see
subsequent commits.

An optional fuzzer is implemented for the Parse function.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
3b21857097 config: clear fs cache of stale entries when altering config - fixes #4811
Before this change, if a config was altered via the rc and a new
backend was then created from that config, a backend already running
from the old config in the cache would be used instead of creating a
new backend with the new config, leading to confusion.

This change flushes the fs cache of any backends based off a config
when that config is changed over the rc.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
a10fbf16ea fs/cache: add ClearConfig method to clear all remotes based on Config #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
f4750928ee lib/cache: add Delete and DeletePrefix methods #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
657be2ace5 rc: add fscache/clear and fscache/entries to control the fs cache #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
feaaca4987 fs/cache: add Entries() for finding how many entries in the fscache #4811 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
ebd9462ea6 union: fix union attempting to update files on a read only file system
Before this change, when using an all create method with one of the
upstreams being read-only, if there was an existing file on the
read-only remote, it was impossible to update it.

This change detects that situation and creates the file on a
read/write upstream. This file will shadow the file on the read-only
upstream. If it is deleted the read-only upstream file will be visible
again.

Fixes #4929
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
6b9e4f939d union: fix crash when using epff policy - fixes #5000
Before this fix using the epff policy could double close a channel.

The fix refactors the code to make that impossible and cancels any
running queries when the first query is found.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
687a3b1832 vfs: fix data race discovered by the race detector
This fixes a place where we read from item.o without the item.mu held.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
25d5ed763c fs: make sync and operations tests use context instead of global variables 2021-03-15 19:22:07 +00:00
Nick Craig-Wood
5e038a5e1e lib/file: retry preallocate on EINTR
Before this change, sometimes preallocate failed with EINTR which
rclone ignored.

Retrying the syscall is the correct thing to do and seems to make
preallocate 100% reliable.
2021-03-15 19:22:07 +00:00
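
The classic shape of such a retry loop, as a Linux-only sketch using golang.org/x/sys/unix (names are illustrative, not lib/file's actual code):

```
package fileutil

import "golang.org/x/sys/unix"

// preAllocate reserves size bytes for fd, retrying when the syscall
// is interrupted by a signal (EINTR), as the commit above describes.
func preAllocate(fd int, size int64) error {
	for {
		err := unix.Fallocate(fd, 0, 0, size)
		if err != unix.EINTR {
			return err // nil on success, or a real failure
		}
		// interrupted by a signal: retry
	}
}
```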
Nick Craig-Wood
4b4e531846 build: add missing BUILD_FLAGS to compile_only to speed up other_os build
Before this we were building all architectures unnecessarily in the
compile_all step for the other_os build. These are built elsewhere so
we don't need to build them here too.

This fix adds the missing BUILD_FLAGS which excludes the other builds
and should speed up the workflow.
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
89e8fb4818 local: don't ignore preallocate disk full errors
See: https://forum.rclone.org/t/input-output-error-copying-to-cifs-mount-disk-space-filled/22163
2021-03-15 19:22:07 +00:00
Nick Craig-Wood
b9bf91c510 lib/file: don't run preallocate concurrently
This seems to cause file systems to get the amount of free space
wrong.
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
40b58d59ad lib/file: make pre-allocate detect disk full errors and return them 2021-03-15 19:22:06 +00:00
Nick Craig-Wood
4fbb50422c drive: don't stop server side copy if couldn't read description
Before this change we would error out of a server side copy if we
couldn't read the description for a google doc.

This patch just logs an ERROR message and carries on.

See: https://forum.rclone.org/t/rclone-google-drive-to-google-drive-migration-for-multiple-users/19024/3
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
8d847a4e94 lib/atexit: fix occasional failure to unmount with CTRL-C #4957
Before this change CTRL-C could come in to exit rclone, which would
start the atexit actions running. The FUSE unmount then signalled
rclone to exit, which didn't wait for the already running atexit
actions to complete.

This change makes sure that if the atexit actions are started they
are run to completion.
2021-03-15 19:22:06 +00:00
Nick Craig-Wood
e3e08a48cb Add Manish Kumar to contributors 2021-03-15 19:22:06 +00:00
Manish Kumar
ff6868900d azureblob: add container public access level support. Fixes #5045 2021-03-15 17:18:47 +00:00
albertony
aab076029f local: make nounc advanced option except on windows 2021-03-15 17:10:27 +00:00
Nick Craig-Wood
294f090361 fs: make sure --low-level-retries, --checkers, --transfers are > 0 2021-03-15 17:05:35 +00:00
Nick Craig-Wood
301e1ad982 fs: fix crash when --low-level-retries=0 - fixes #5024 2021-03-15 17:05:35 +00:00
Nick Craig-Wood
3cf6ea848b config: remove log.Fatal and replace with error passing where possible 2021-03-14 16:03:35 +00:00
Nick Craig-Wood
bb0b6432ae config: --config "" or "/notfound" for in memory config only #4996
If `--config` is set to empty string or the special value `/notfound`
then rclone will keep the config file in memory only.
2021-03-14 16:03:35 +00:00
Nick Craig-Wood
46078d391f config: make config file reads reload the config file if needed #4996
Before this change the config file needed to be explicitly reloaded.
This coupled the config file implementation with the backends
needlessly.

This change stats the config file on every config file operation to
see if it needs to be reloaded.

This allows us to remove calls to

- config.SaveConfig
- config.GetFresh

Which now makes the only needed interface to the config file be
that provided by configmap.Map when rclone is not being configured.

This also adds tests for configfile
2021-03-14 16:03:35 +00:00
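
A sketch of that reload check (field and method names are illustrative): before each operation, compare the config file's mtime with the one recorded at the last load and reload if it changed:

```
package configfile

import (
	"os"
	"time"
)

// fileStorage is an illustrative config backend that remembers when
// it last loaded the file.
type fileStorage struct {
	path     string
	loadedAt time.Time
}

// checkReload stats the config file and reloads it if it changed on
// disk since the last load.
func (s *fileStorage) checkReload() error {
	fi, err := os.Stat(s.path)
	if err != nil {
		return err
	}
	if !fi.ModTime().Equal(s.loadedAt) {
		s.loadedAt = fi.ModTime()
		return s.load()
	}
	return nil
}

func (s *fileStorage) load() error {
	// ... re-read and parse the config file ...
	return nil
}
```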
Nick Craig-Wood
849bf20598 build: disable IOS builds for the time being - see #5124 2021-03-13 22:07:46 +00:00
Ivan Andreev
e91f2e342a docs: mention rclone selfupdate in quickstart (#5122) 2021-03-13 23:02:40 +03:00
Nick Craig-Wood
713f8f357d sftp: fix "file not found" errors for read once servers - fixes #5077
It introduces a new flag --sftp-disable-concurrent-reads to stop the
problematic behaviour in the SFTP library for read-once servers.

This upgrades the sftp library to v1.13.0 which has the fix.
2021-03-13 15:38:38 +00:00
Evan Harris
83368998be docs: Updated sync and dedupe command docs #4429 2021-03-13 15:01:32 +00:00
Nick Craig-Wood
4013bc4a4c Fix excessive retries missing --max-duration timeout - fixes #4504
This change checks the context whenever rclone might retry, and
doesn't retry if the current context has an error.

This fixes the pathological behaviour of `--max-duration` refusing to
exit because all the context deadline exceeded errors were being
retried.

This unfortunately meant changing the shouldRetry logic in every
backend and doing a lot of context propagation.

See: https://forum.rclone.org/t/add-flag-to-exit-immediately-when-max-duration-reached/22723
2021-03-13 09:25:44 +00:00
Nick Craig-Wood
32925dae1f Add Lucas Messenger to contributors 2021-03-13 09:25:44 +00:00
Nick Craig-Wood
6cc70997ba Add Naveen Honest Raj to contributors 2021-03-13 09:25:44 +00:00
buengese
d260e3824e docs: cleanup optional feature table 2021-03-12 09:20:01 +00:00
Lucas Messenger
a5bd26395e hdfs: fix permissions for when directory is created 2021-03-12 09:15:14 +00:00
Ivan Andreev
6fa74340a0 cmd: rclone selfupdate (#5080)
Implements self-update command
Fixes #548
Fixes #5076
2021-03-11 22:39:30 +03:00
Saksham Khanna
4d8ef7bca7 cmd/dedupe: make largest directory primary to minimize data moved (#3648)
This change makes dedupe recursively count elements in same-named directories
and make the largest one primary. This allows minimizing the amount of data
moved (or at least the number of API calls) when dedupe merges them.
It also adds a new fs.Object interface `ParentIDer` with function `ParentID` and
implements it for the drive and opendrive backends. This function returns the
parent directory ID for objects on filesystems that allow same-named dirs.
We use it to correctly count the sizes of same-named directories.

Fixes #2568

Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
2021-03-11 20:40:29 +03:00
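
The interface the commit describes, sketched with an illustrative backend object (the stand-in type and the caller are assumptions; only the `ParentIDer`/`ParentID` names come from the commit text):

```
package main

import "fmt"

// ParentIDer lets objects report their parent directory ID, which
// lets dedupe size up same-named directories correctly.
type ParentIDer interface {
	ParentID() string
}

// driveObject is an illustrative stand-in for a drive backend object.
type driveObject struct{ parent string }

func (o driveObject) ParentID() string { return o.parent }

func main() {
	var obj interface{} = driveObject{parent: "0B1xExampleID"}
	// A caller checks for the optional interface with a type assertion.
	if p, ok := obj.(ParentIDer); ok {
		fmt.Println(p.ParentID()) // 0B1xExampleID
	}
}
```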
Nick Craig-Wood
6a9ae32012 config: split up main file more and move tests into correct packages
This splits config.go into ui.go for the user interface functions and
authorize.go for the implementation of `rclone authorize`.

It also moves the tests into the correct places (including one from
obscure which was in the wrong place).
2021-03-11 17:29:26 +00:00
Nick Craig-Wood
a7fd65bf2d config: move account initialisation out into accounting
Before this change the initialisation for the accounting package was
done in the config package for some strange historical reason.
2021-03-11 17:29:26 +00:00
Nick Craig-Wood
1fed2d910c config: make config file system pluggable
If you are using rclone as a library you can decide whether to use the
rclone config file system by calling

    configfile.LoadConfig(ctx)

If you don't you will need to set `config.Data` to an implementation
of `config.Storage`.

Other changes
- change interface of config.FileGet to remove unused default
- remove MustValue from config.Storage interface
- change GetValue to return string or bool like elsewhere in rclone
- implement a default config file system which panics with helpful error
- implement getWithDefault to replace the removed MustValue
- don't embed goconfig.ConfigFile so we can change the methods
2021-03-11 17:29:26 +00:00
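
From the description above, a plausible minimal shape for such a pluggable backend might look like this (the method set is inferred from the commit text, not verified against the actual config.Storage interface):

```
package config

// Storage is a sketch of a pluggable config backend; the method set
// here is inferred, not rclone's exact one.
type Storage interface {
	// GetValue returns the key in the section and whether it was found
	GetValue(section string, key string) (value string, found bool)
	// SetValue sets the key in the section
	SetValue(section string, key string, value string)
	// DeleteKey removes the key in the section, reporting if it existed
	DeleteKey(section string, key string) bool
	// Load reads the config from persistent storage
	Load() error
	// Save writes the config to persistent storage
	Save() error
}
```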
Fionera
c95b580478 config: Wrap config library in an interface 2021-03-11 17:29:26 +00:00
Nick Craig-Wood
2be310cd6e build: fix dependencies for docs build 2021-03-11 17:29:26 +00:00
Naveen Honest Raj
02a5d350f9 rcd: Added systemd notification during the 'rclone rcd' command call. This also fixes #5073.
Signed-off-by: Naveen Honest Raj <naveendurai19@gmail.com>
2021-03-11 17:12:14 +00:00
albertony
18cd2064ec mount: docs: add note about volume path syntax on windows 2021-03-11 17:09:22 +00:00
210 changed files with 15183 additions and 4240 deletions


@@ -5,19 +5,31 @@ about: Report a problem with rclone
<!--
Welcome :-) We understand you are having a problem with rclone; we want to help you with that!
We understand you are having a problem with rclone; we want to help you with that!
If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
**STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put in to solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!
If you think you might have found a bug, try to replicate it with the latest beta (or stable).
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
If you can still replicate it or just got a question then please use the rclone forum:
https://forum.rclone.org/
instead of filing an issue for a quick response.
for a quick response instead of filing an issue on this repo.
If you think you might have found a bug, please can you try to replicate it with the latest beta?
If nothing else helps, then please fill in the info below which helps us help you.
https://beta.rclone.org/
If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
**DO NOT REDACT** any information except passwords/keys/personal info.
You should use 3 backticks to begin and end your paste to make it readable.
Make sure to include a log obtained with '-vv'.
You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
Thank you
@@ -25,6 +37,11 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is the problem you are having with rclone?
@@ -37,7 +54,7 @@ The Rclone Developers
#### Which cloud storage system are you using? (e.g. Google Drive)


@@ -7,12 +7,16 @@ about: Suggest a new feature or enhancement for rclone
Welcome :-)
So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
So you've got an idea to improve rclone? We love that!
You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
Here is a checklist of things to do:
Probably the latest beta (or stable) release has your feature, so try to update your rclone.
The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum first: https://forum.rclone.org/
If it still isn't there, here is a checklist of things to do:
1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
2. Discuss on the forum: https://forum.rclone.org/
3. Make a feature request issue (this is the right place!).
4. Be prepared to get involved making the feature :-)
@@ -23,6 +27,10 @@ The Rclone Developers
-->
#### The associated forum post URL from `https://forum.rclone.org`
#### What is your current rclone version (output from `rclone version`)?


@@ -213,50 +213,98 @@ jobs:
# Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
xgo:
timeout-minutes: 60
name: "xgo cross compile"
runs-on: ubuntu-latest
android:
timeout-minutes: 30
name: "android-all"
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
steps:
# Upgrade together with NDK version
- name: Set up Go 1.14
uses: actions/setup-go@v1
with:
go-version: 1.14
- name: Checkout
uses: actions/checkout@v1
with:
# Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
# Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
- name: Force NDK version
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;21.4.7075529" | grep -v = || true
- name: Set environment variables
shell: bash
run: |
echo 'GOPATH=${{ runner.workspace }}' >> $GITHUB_ENV
echo '${{ runner.workspace }}/bin' >> $GITHUB_PATH
- name: Go module cache
uses: actions/cache@v2
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Cross-compile rclone
run: |
docker pull billziss/xgo-cgofuse
GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod
# xgo \
# -image=billziss/xgo-cgofuse \
# -targets=darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
# -tags cmount \
# -dest build \
# .
xgo \
-image=billziss/xgo-cgofuse \
-targets=android/*,ios/* \
-dest build \
.
- name: Set global environment variables
shell: bash
run: |
echo "VERSION=$(make version)" >> $GITHUB_ENV
- name: Build rclone
shell: bash
run: |
make
- name: build native rclone
run: |
make
- name: Upload artifacts
run: |
make ci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
echo 'GOARM=7' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm-v7a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm64-v8a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x86 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/21.4.7075529/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x64 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-x64 .
- name: Upload artifacts
run: |
make ci_upload
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'

.gitignore

@@ -1,6 +1,7 @@
*~
_junk/
rclone
rclone.exe
build
docs/public
rclone.iml


@@ -33,10 +33,11 @@ page](https://github.com/rclone/rclone).
Now in your terminal
go get -u github.com/rclone/rclone
cd $GOPATH/src/github.com/rclone/rclone
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
git remote add origin git@github.com:YOURUSER/rclone.git
go build
Make a branch to add your new feature

MANUAL.html generated

File diff suppressed because it is too large

MANUAL.md generated

File diff suppressed because it is too large

MANUAL.txt generated

File diff suppressed because it is too large


@@ -119,7 +119,7 @@ doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md
pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs rcdocs
./bin/make_manual.py
MANUAL.html: MANUAL.md
@@ -187,10 +187,10 @@ upload_github:
./bin/upload-github $(TAG)
cross: doc
go run bin/cross-compile.go -release current $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
go run bin/cross-compile.go -release current $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
beta:
go run bin/cross-compile.go $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
@@ -198,7 +198,7 @@ log_since_last_release:
git log $(LAST_TAG)..
compile_all:
go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
go run bin/cross-compile.go -compile-only $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
ci_upload:
sudo chown -R $$USER build


@@ -76,6 +76,24 @@ Now
The rclone docker image should autobuild via GitHub actions. If it doesn't
or needs to be updated then rebuild like this.
See: https://github.com/ilteoood/docker_buildx/issues/19
See: https://github.com/ilteoood/docker_buildx/blob/master/scripts/install_buildx.sh
```
git co v1.54.1
docker pull golang
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name actions_builder --use
docker run --rm --privileged docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
SUPPORTED_PLATFORMS=$(docker buildx inspect --bootstrap | grep 'Platforms:*.*' | cut -d : -f2,3)
echo "Supported platforms: $SUPPORTED_PLATFORMS"
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx stop actions_builder
```
### Old build for linux/amd64 only
```
docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .


@@ -1 +1 @@
v1.55.0
v1.56.0


@@ -11,6 +11,7 @@ import (
_ "github.com/rclone/rclone/backend/local" // pull in test backend
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/stretchr/testify/require"
)
@@ -19,7 +20,7 @@ var (
)
func prepare(t *testing.T, root string) {
config.LoadConfig(context.Background())
configfile.LoadConfig(context.Background())
// Configure the remote
config.FileSet(remoteName, "type", "alias")


@@ -205,7 +205,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if resp != nil {
if resp.StatusCode == 401 {
f.tokenRenewer.Invalidate()
@@ -280,7 +283,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.getRootInfo()
_, err := f.getRootInfo(ctx)
return err
})
@@ -288,14 +291,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
_, resp, err = f.c.Account.GetEndpoints()
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get endpoints")
}
// Get rootID
rootInfo, err := f.getRootInfo()
rootInfo, err := f.getRootInfo(ctx)
if err != nil || rootInfo.Id == nil {
return nil, errors.Wrap(err, "failed to get root")
}
@@ -337,11 +340,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// getRootInfo gets the root folder info
func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) {
func (f *Fs) getRootInfo(ctx context.Context) (rootInfo *acd.Folder, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot()
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
return rootInfo, err
}
@@ -380,7 +383,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if err == acd.ErrorNodeNotFound {
@@ -407,7 +410,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var info *acd.Folder
err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -428,7 +431,7 @@ type listAllFn func(*acd.Node) bool
// Lists the directory required calling the user function on each item found
//
// If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
func (f *Fs) listAll(ctx context.Context, dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
query := "parents:" + dirID
if directoriesOnly {
query += " AND kind:" + folderKind
@@ -449,7 +452,7 @@ func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly
var resp *http.Response
err = f.pacer.CallNoRetry(func() (bool, error) {
nodes, resp, err = f.c.Nodes.GetNodes(&opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return false, err
@@ -508,7 +511,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var iErr error
for tries := 1; tries <= maxTries; tries++ {
entries = nil
_, err = f.listAll(directoryID, "", false, false, func(node *acd.Node) bool {
_, err = f.listAll(ctx, directoryID, "", false, false, func(node *acd.Node) bool {
remote := path.Join(dir, *node.Name)
switch *node.Kind {
case folderKind:
@@ -667,7 +670,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
if ok {
return false, nil
}
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -708,7 +711,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, err
}
err = f.moveNode(srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false)
err = f.moveNode(ctx, srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false)
if err != nil {
return nil, err
}
@@ -803,7 +806,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var jsonStr string
err = srcFs.pacer.Call(func() (bool, error) {
jsonStr, err = srcInfo.GetMetadata()
return srcFs.shouldRetry(nil, err)
return srcFs.shouldRetry(ctx, nil, err)
})
if err != nil {
fs.Debugf(src, "DirMove error: error reading src metadata: %v", err)
@@ -815,7 +818,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return err
}
err = f.moveNode(srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true)
err = f.moveNode(ctx, srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true)
if err != nil {
return err
}
@@ -840,7 +843,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if check {
// check directory is empty
empty := true
_, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
_, err = f.listAll(ctx, rootID, "", false, false, func(node *acd.Node) bool {
switch *node.Kind {
case folderKind:
empty = false
@@ -865,7 +868,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = node.Trash()
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -987,7 +990,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf))
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
if err == acd.ErrorNodeNotFound {
@@ -1044,7 +1047,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} else {
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
}
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
return in, err
}
@@ -1067,7 +1070,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if ok {
return false, nil
}
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1077,70 +1080,70 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
// Remove a node
func (f *Fs) removeNode(info *acd.Node) error {
func (f *Fs) removeNode(ctx context.Context, info *acd.Node) error {
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = info.Trash()
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
return err
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
return o.fs.removeNode(o.info)
return o.fs.removeNode(ctx, o.info)
}
// Restore a node
func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) {
func (f *Fs) restoreNode(ctx context.Context, info *acd.Node) (newInfo *acd.Node, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Restore()
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
return newInfo, err
}
// Changes name of given node
func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) {
func (f *Fs) renameNode(ctx context.Context, info *acd.Node, newName string) (newInfo *acd.Node, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName))
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
return newInfo, err
}
// Replaces one parent with another, effectively moving the file. Leaves other
// parents untouched. ReplaceParent cannot be used when the file is trashed.
func (f *Fs) replaceParent(info *acd.Node, oldParentID string, newParentID string) error {
func (f *Fs) replaceParent(ctx context.Context, info *acd.Node, oldParentID string, newParentID string) error {
return f.pacer.Call(func() (bool, error) {
resp, err := info.ReplaceParent(oldParentID, newParentID)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
}
// Adds one additional parent to object.
func (f *Fs) addParent(info *acd.Node, newParentID string) error {
func (f *Fs) addParent(ctx context.Context, info *acd.Node, newParentID string) error {
return f.pacer.Call(func() (bool, error) {
resp, err := info.AddParent(newParentID)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
}
// Remove given parent from object, leaving the other possible
// parents untouched. Object can end up having no parents.
func (f *Fs) removeParent(info *acd.Node, parentID string) error {
func (f *Fs) removeParent(ctx context.Context, info *acd.Node, parentID string) error {
return f.pacer.Call(func() (bool, error) {
resp, err := info.RemoveParent(parentID)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
}
// moveNode moves the node given from the srcLeaf,srcDirectoryID to
// the dstLeaf,dstDirectoryID
func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) {
func (f *Fs) moveNode(ctx context.Context, name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) {
// fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID)
cantMove := fs.ErrorCantMove
if useDirErrorMsgs {
@@ -1154,7 +1157,7 @@ func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, s
if srcLeaf != dstLeaf {
// fs.Debugf(name, "renaming")
_, err = f.renameNode(srcInfo, dstLeaf)
_, err = f.renameNode(ctx, srcInfo, dstLeaf)
if err != nil {
fs.Debugf(name, "Move: quick path rename failed: %v", err)
goto OnConflict
@@ -1162,7 +1165,7 @@ func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, s
}
if srcDirectoryID != dstDirectoryID {
// fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID)
err = f.replaceParent(srcInfo, srcDirectoryID, dstDirectoryID)
err = f.replaceParent(ctx, srcInfo, srcDirectoryID, dstDirectoryID)
if err != nil {
fs.Debugf(name, "Move: quick path parent replace failed: %v", err)
return err
@@ -1175,13 +1178,13 @@ OnConflict:
fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. If any of the subsequent calls fails, the rename/move will be in an invalid state.")
// fs.Debugf(name, "Trashing file")
err = f.removeNode(srcInfo)
err = f.removeNode(ctx, srcInfo)
if err != nil {
fs.Debugf(name, "Move: remove node failed: %v", err)
return err
}
// fs.Debugf(name, "Renaming file")
_, err = f.renameNode(srcInfo, dstLeaf)
_, err = f.renameNode(ctx, srcInfo, dstLeaf)
if err != nil {
fs.Debugf(name, "Move: rename node failed: %v", err)
return err
@@ -1189,19 +1192,19 @@ OnConflict:
// note: replacing parent is forbidden by API, modifying them individually is
// okay though
// fs.Debugf(name, "Adding target parent")
err = f.addParent(srcInfo, dstDirectoryID)
err = f.addParent(ctx, srcInfo, dstDirectoryID)
if err != nil {
fs.Debugf(name, "Move: addParent failed: %v", err)
return err
}
// fs.Debugf(name, "removing original parent")
err = f.removeParent(srcInfo, srcDirectoryID)
err = f.removeParent(ctx, srcInfo, srcDirectoryID)
if err != nil {
fs.Debugf(name, "Move: removeParent failed: %v", err)
return err
}
// fs.Debugf(name, "Restoring")
_, err = f.restoreNode(srcInfo)
_, err = f.restoreNode(ctx, srcInfo)
if err != nil {
fs.Debugf(name, "Move: restoreNode node failed: %v", err)
return err

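For orientation, a condensed sketch of the OnConflict path above (not part of the diff; the helper name moveViaTrash is ours, but the calls and their shapes match the functions shown in this file):

// moveViaTrash restates the conflict workaround: trash the node so its name
// no longer clashes, rename it while trashed, swap the parents one call at a
// time (replacing them in a single call is forbidden by the API), then
// restore it from the trash.
func (f *Fs) moveViaTrash(ctx context.Context, node *acd.Node, dstLeaf, srcDirID, dstDirID string) error {
    if err := f.removeNode(ctx, node); err != nil {
        return err
    }
    if _, err := f.renameNode(ctx, node, dstLeaf); err != nil {
        return err
    }
    if err := f.addParent(ctx, node, dstDirID); err != nil {
        return err
    }
    if err := f.removeParent(ctx, node, srcDirID); err != nil {
        return err
    }
    _, err := f.restoreNode(ctx, node)
    return err
}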
View File

@@ -217,6 +217,23 @@ This option controls how often unused buffers will be removed from the pool.`,
encoder.EncodeDel |
encoder.EncodeBackSlash |
encoder.EncodeRightPeriod),
}, {
Name: "public_access",
Help: "Public access level of a container: blob, container.",
Default: string(azblob.PublicAccessNone),
Examples: []fs.OptionExample{
{
Value: string(azblob.PublicAccessNone),
Help: "The container and its blobs can be accessed only with an authorized request. It's a default value",
}, {
Value: string(azblob.PublicAccessBlob),
Help: "Blob data within this container can be read via anonymous request.",
}, {
Value: string(azblob.PublicAccessContainer),
Help: "Allow full public read access for container and blob data.",
},
},
Advanced: true,
}},
})
}
@@ -241,6 +258,7 @@ type Options struct {
MemoryPoolFlushTime fs.Duration `config:"memory_pool_flush_time"`
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
Enc encoder.MultiEncoder `config:"encoding"`
PublicAccess string `config:"public_access"`
}
// Fs represents a remote azure server
@@ -262,6 +280,7 @@ type Fs struct {
imdsPacer *fs.Pacer // Same but for IMDS
uploadToken *pacer.TokenDispenser // control concurrency
pool *pool.Pool // memory pool
publicAccess azblob.PublicAccessType // Container Public Access Level
}
// Object describes an azure object
@@ -335,6 +354,19 @@ func validateAccessTier(tier string) bool {
}
}
// validatePublicAccess checks whether azureblob supports the supplied public access level
func validatePublicAccess(publicAccess string) bool {
switch publicAccess {
case string(azblob.PublicAccessNone),
string(azblob.PublicAccessBlob),
string(azblob.PublicAccessContainer):
// valid cases
return true
default:
return false
}
}
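A minimal sketch of how the new option flows through (illustrative only; it assumes, as in azure-storage-blob-go, that azblob.PublicAccessNone is the empty string, which is why the default value validates):

// Validate the configured level up front, then store it as the SDK type;
// makeContainer later hands f.publicAccess to Create in place of the
// previously hard-coded azblob.PublicAccessNone.
level := "blob" // hypothetical value of opt.PublicAccess
if !validatePublicAccess(level) {
    return nil, errors.Errorf("unsupported public_access %q", level)
}
f.publicAccess = azblob.PublicAccessType(level)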
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (e.g. "Token has expired")
@@ -347,7 +379,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func (f *Fs) shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// FIXME interpret special errors - more to do here
if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() {
@@ -499,6 +534,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
string(azblob.AccessTierHot), string(azblob.AccessTierCool), string(azblob.AccessTierArchive))
}
if !validatePublicAccess(opt.PublicAccess) {
return nil, errors.Errorf("Azure Blob: Supported public access levels are %s and %s",
string(azblob.PublicAccessBlob), string(azblob.PublicAccessContainer))
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
@@ -517,6 +557,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt.MemoryPoolUseMmap,
),
}
f.publicAccess = azblob.PublicAccessType(opt.PublicAccess)
f.imdsPacer.SetRetries(5) // per IMDS documentation
f.setRoot(root)
f.features = (&fs.Features{
@@ -578,7 +619,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// Retry as specified by the documentation:
// https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#retry-guidance
token, err = GetMSIToken(ctx, userMSI)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
@@ -594,7 +635,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var refreshedToken adal.Token
err := f.imdsPacer.Call(func() (bool, error) {
refreshedToken, err = GetMSIToken(ctx, userMSI)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
// Failed to refresh.
@@ -803,7 +844,7 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.cntURL(container).ListBlobsHierarchySegment(ctx, marker, delimiter, options)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
@@ -1029,7 +1070,7 @@ func (f *Fs) listContainersToFn(fn listContainerFn) error {
err := f.pacer.Call(func() (bool, error) {
var err error
response, err = f.svcURL.ListContainersSegment(ctx, marker, params)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1081,7 +1122,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
}
// now try to create the container
return f.pacer.Call(func() (bool, error) {
_, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
_, err := f.cntURL(container).Create(ctx, azblob.Metadata{}, f.publicAccess)
if err != nil {
if storageErr, ok := err.(azblob.StorageError); ok {
switch storageErr.ServiceCode() {
@@ -1098,7 +1139,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
}
}
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
}, nil)
}
@@ -1136,10 +1177,10 @@ func (f *Fs) deleteContainer(ctx context.Context, container string) error {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
})
}
@@ -1212,7 +1253,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobURL.StartCopyFromURL(ctx, *source, nil, azblob.ModifiedAccessConditions{}, options, azblob.AccessTierType(f.opt.AccessTier), nil)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -1373,7 +1414,7 @@ func (o *Object) readMetaData() (err error) {
var blobProperties *azblob.BlobGetPropertiesResponse
err = o.fs.pacer.Call(func() (bool, error) {
blobProperties, err = blob.GetProperties(ctx, options, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
@@ -1408,7 +1449,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
blob := o.getBlobReference()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetMetadata(ctx, o.meta, azblob.BlobAccessConditions{}, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1451,7 +1492,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var downloadResponse *azblob.DownloadResponse
err = o.fs.pacer.Call(func() (bool, error) {
downloadResponse, err = blob.Download(ctx, offset, count, ac, false, azblob.ClientProvidedKeyOptions{})
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to open for download")
@@ -1592,7 +1633,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Stream contents of the reader object to the given blob URL
blockBlobURL := blob.ToBlockBlobURL()
_, err = azblob.UploadStreamToBlockBlob(ctx, in, blockBlobURL, putBlobOptions)
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1620,7 +1661,7 @@ func (o *Object) Remove(ctx context.Context) error {
ac := azblob.BlobAccessConditions{}
return o.fs.pacer.Call(func() (bool, error) {
_, err := blob.Delete(ctx, snapShotOptions, ac)
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
}
@@ -1649,7 +1690,7 @@ func (o *Object) SetTier(tier string) error {
ctx := context.Background()
err := o.fs.pacer.Call(func() (bool, error) {
_, err := blob.SetTier(ctx, desiredAccessTier, azblob.LeaseAccessConditions{})
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {

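The pattern threaded through every backend in this change set: shouldRetry now receives the request context and consults fserrors.ContextError before anything else, so a cancelled context or an expired deadline aborts the pacer's retry loop instead of being classified as a transient error. A distilled sketch (the function name is ours; fserrors.ShouldRetry stands in for the per-backend checks):

// ctxAwareShouldRetry distils the shouldRetry functions in this change set.
func ctxAwareShouldRetry(ctx context.Context, err error) (bool, error) {
    if fserrors.ContextError(ctx, &err) {
        // The context was cancelled or timed out; ContextError rewrites err
        // accordingly, and retrying would be pointless.
        return false, err
    }
    // Otherwise fall through to the usual transient-error classification.
    return fserrors.ShouldRetry(err), err
}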
View File

@@ -305,7 +305,10 @@ var retryErrorCodes = []int{
// shouldRetryNoReauth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// For 429 or 503 errors look at the Retry-After: header and
// set the retry appropriately, starting with a minimum of 1
// second if it isn't set.
@@ -336,7 +339,7 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
}
return true, err
}
return f.shouldRetryNoReauth(resp, err)
return f.shouldRetryNoReauth(ctx, resp, err)
}
// errorHandler parses a non 2xx error response into an error
@@ -504,7 +507,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info)
return f.shouldRetryNoReauth(resp, err)
return f.shouldRetryNoReauth(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to authenticate")
@@ -1741,6 +1744,13 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// When reading files from B2 via Cloudflare using
// --b2-download-url, Cloudflare strips the Content-Length
// header (presumably so it can inject stuff), so use the old
// length read from the listing.
if info.Size < 0 {
info.Size = o.size
}
return resp, info, nil
}

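For context on the Size check above: Go's net/http reports an absent Content-Length as -1, never 0, so a negative size reliably means "header stripped", while zero still means an empty file. Sketch of the distinction (illustrative):

// A stripped Content-Length surfaces as a negative length, so an empty
// file (Size == 0) is never confused with an unknown one (Size < 0).
if info.Size < 0 {
    info.Size = o.size // the size recorded when the object was listed
}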
View File

@@ -317,10 +317,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
if resp != nil && resp.StatusCode == 401 && strings.Contains(resp.Header.Get("Www-Authenticate"), "expired_token") {
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
@@ -548,7 +551,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -585,7 +588,7 @@ OUTER:
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -740,7 +743,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -767,7 +770,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -839,7 +842,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -877,7 +880,7 @@ func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -895,7 +898,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &user)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to read user info")
@@ -1008,7 +1011,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return info.SharedLink.URL, err
}
@@ -1026,7 +1029,7 @@ func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error {
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -1048,7 +1051,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't list trash")
@@ -1182,7 +1185,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return info, err
}
@@ -1215,7 +1218,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1255,7 +1258,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID str
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err

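One refactor in this file is worth spelling out: http.Header.Get returns the first value, or "" when the header is absent, so the explicit presence check plus strings.Index collapse into a single strings.Contains call. The only behavioural difference is when several Www-Authenticate headers are present, since the old code required exactly one. Side by side (sketch):

// before: explicit length guard plus index scan
before := resp != nil &&
    len(resp.Header["Www-Authenticate"]) == 1 &&
    strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0
// after: Get returns "" for a missing header, so Contains is enough
after := resp != nil &&
    strings.Contains(resp.Header.Get("Www-Authenticate"), "expired_token")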
View File

@@ -44,7 +44,7 @@ func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID stri
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return
}
@@ -74,7 +74,7 @@ func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, total
err = o.fs.pacer.Call(func() (bool, error) {
opts.Body = wrap(bytes.NewReader(chunk))
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -109,10 +109,10 @@ outer:
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
body, err = rest.ReadBody(resp)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
delay := defaultDelay
var why string
@@ -167,7 +167,7 @@ func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error)
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return err
}

View File

@@ -892,7 +892,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type", "")
remoteType := config.FileGet(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@@ -903,14 +903,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote", "")
remoteRemote := config.FileGet(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type", "")
remoteType := config.FileGet(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil

View File

@@ -277,13 +277,10 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
return nil, errors.New("can't point remote at itself - check the value of the remote setting")
}
baseName, basePath, err := fspath.Parse(remote)
baseName, basePath, err := fspath.SplitFs(remote)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
}
if baseName != "" {
baseName += ":"
}
// Look for a file first
remotePath := fspath.JoinRootPath(basePath, rpath)
baseFs, err := cache.Get(ctx, baseName+remotePath)

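The switch from fspath.Parse to fspath.SplitFs removes the manual colon handling. Judging by the deleted baseName += ":" lines, SplitFs returns the remote name with its trailing colon attached, or "" for a plain local path, so it concatenates with the joined path directly. Assumed behaviour, as comments (illustrative, inferred from the deleted code):

// fspath.SplitFs("remote:some/path") -> ("remote:", "some/path", nil)
// fspath.SplitFs("/local/path")      -> ("",        "/local/path", nil)
//
// cache.Get(ctx, baseName+remotePath) therefore reconstructs the full
// remote string without any extra bookkeeping.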
View File

@@ -404,13 +404,16 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
return nil, errors.Wrap(err, "failed to read destination hash")
}
if srcHash != "" && dstHash != "" && srcHash != dstHash {
// remove object
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
if srcHash != "" && dstHash != "" {
if srcHash != dstHash {
// remove object
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash)
}
return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash)
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
}

View File

@@ -590,13 +590,13 @@ type Fs struct {
}
type baseObject struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents int // number of parents
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents []string // IDs of the parent directories
}
type documentObject struct {
baseObject
@@ -641,7 +641,10 @@ func (f *Fs) Features() *fs.Features {
}
// shouldRetry determines whether a given err rates being retried
func (f *Fs) shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err == nil {
return false, nil
}
@@ -695,20 +698,20 @@ func containsString(slice []string, s string) bool {
}
// getFile returns drive.File for the ID passed and fields passed in
func (f *Fs) getFile(ID string, fields googleapi.Field) (info *drive.File, err error) {
func (f *Fs) getFile(ctx context.Context, ID string, fields googleapi.Field) (info *drive.File, err error) {
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Get(ID).
Fields(fields).
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
return info, err
}
// getRootID returns the canonical ID for the "root" ID
func (f *Fs) getRootID() (string, error) {
info, err := f.getFile("root", "id")
func (f *Fs) getRootID(ctx context.Context) (string, error) {
info, err := f.getFile(ctx, "root", "id")
if err != nil {
return "", errors.Wrap(err, "couldn't find root directory ID")
}
@@ -814,7 +817,7 @@ OUTER:
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return false, errors.Wrap(err, "couldn't list directory")
@@ -837,7 +840,7 @@ OUTER:
if filesOnly && item.ShortcutDetails.TargetMimeType == driveFolderType {
continue
}
item, err = f.resolveShortcut(item)
item, err = f.resolveShortcut(ctx, item)
if err != nil {
return false, errors.Wrap(err, "list")
}
@@ -855,7 +858,7 @@ OUTER:
if !found {
continue
}
_, exportName, _, _ := f.findExportFormat(item)
_, exportName, _, _ := f.findExportFormat(ctx, item)
if exportName == "" || exportName != title {
continue
}
@@ -1155,7 +1158,7 @@ func NewFs(ctx context.Context, name, path string, m configmap.Mapper) (fs.Fs, e
f.rootFolderID = f.opt.TeamDriveID
} else {
// otherwise look up the actual root ID
rootID, err := f.getRootID()
rootID, err := f.getRootID(ctx)
if err != nil {
if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 {
// 404 means that this scope does not have permission to get the
@@ -1236,7 +1239,7 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
modifiedDate: modifiedDate,
mimeType: info.MimeType,
bytes: size,
parents: len(info.Parents),
parents: info.Parents,
}
}
@@ -1328,26 +1331,26 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
// newObjectWithInfo creates an fs.Object for any drive.File
//
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithInfo(remote string, info *drive.File) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.File) (fs.Object, error) {
// If item has MD5 sum or a length it is a file stored on drive
if info.Md5Checksum != "" || info.Size > 0 {
return f.newRegularObject(remote, info), nil
}
extension, exportName, exportMimeType, isDocument := f.findExportFormat(info)
return f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument)
extension, exportName, exportMimeType, isDocument := f.findExportFormat(ctx, info)
return f.newObjectWithExportInfo(ctx, remote, info, extension, exportName, exportMimeType, isDocument)
}
// newObjectWithExportInfo creates an fs.Object for any drive.File and the result of findExportFormat
//
// When the drive.File cannot be represented as an fs.Object it will return (nil, nil).
func (f *Fs) newObjectWithExportInfo(
remote string, info *drive.File,
ctx context.Context, remote string, info *drive.File,
extension, exportName, exportMimeType string, isDocument bool) (o fs.Object, err error) {
// Note that resolveShortcut will have been called already if
// we are being called from a listing. However the drive.Item
// will have been resolved so this will do nothing.
info, err = f.resolveShortcut(info)
info, err = f.resolveShortcut(ctx, info)
if err != nil {
return nil, errors.Wrap(err, "new object")
}
@@ -1395,7 +1398,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
}
remote = remote[:len(remote)-len(extension)]
obj, err := f.newObjectWithExportInfo(remote, info, extension, exportName, exportMimeType, isDocument)
obj, err := f.newObjectWithExportInfo(ctx, remote, info, extension, exportName, exportMimeType, isDocument)
switch {
case err != nil:
return nil, err
@@ -1412,7 +1415,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
pathID = actualID(pathID)
found, err = f.list(ctx, []string{pathID}, leaf, true, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
if !f.opt.SkipGdocs {
_, exportName, _, isDocument := f.findExportFormat(item)
_, exportName, _, isDocument := f.findExportFormat(ctx, item)
if exportName == leaf {
pathIDOut = item.Id
return true
@@ -1447,8 +1450,8 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
info, err = f.svc.Files.Create(createInfo).
Fields("id").
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return "", err
@@ -1483,15 +1486,15 @@ func linkTemplate(mt string) *template.Template {
})
return _linkTemplates[mt]
}
func (f *Fs) fetchFormats() {
func (f *Fs) fetchFormats(ctx context.Context) {
fetchFormatsOnce.Do(func() {
var about *drive.About
var err error
err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().
Fields("exportFormats,importFormats").
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(f, "Failed to get Drive exportFormats and importFormats: %v", err)
@@ -1508,8 +1511,8 @@ func (f *Fs) fetchFormats() {
// if necessary.
//
// if the fetch fails then it will not export any drive formats
func (f *Fs) exportFormats() map[string][]string {
f.fetchFormats()
func (f *Fs) exportFormats(ctx context.Context) map[string][]string {
f.fetchFormats(ctx)
return _exportFormats
}
@@ -1517,8 +1520,8 @@ func (f *Fs) exportFormats() map[string][]string {
// if necessary.
//
// if the fetch fails then it will not import any drive formats
func (f *Fs) importFormats() map[string][]string {
f.fetchFormats()
func (f *Fs) importFormats(ctx context.Context) map[string][]string {
f.fetchFormats(ctx)
return _importFormats
}
@@ -1527,9 +1530,9 @@ func (f *Fs) importFormats() map[string][]string {
//
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", false)
func (f *Fs) findExportFormatByMimeType(itemMimeType string) (
func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string) (
extension, mimeType string, isDocument bool) {
exportMimeTypes, isDocument := f.exportFormats()[itemMimeType]
exportMimeTypes, isDocument := f.exportFormats(ctx)[itemMimeType]
if isDocument {
for _, _extension := range f.exportExtensions {
_mimeType := mime.TypeByExtension(_extension)
@@ -1556,8 +1559,8 @@ func (f *Fs) findExportFormatByMimeType(itemMimeType string) (
//
// Look through the exportExtensions and find the first format that can be
// converted. If none found then return ("", "", "", false)
func (f *Fs) findExportFormat(item *drive.File) (extension, filename, mimeType string, isDocument bool) {
extension, mimeType, isDocument = f.findExportFormatByMimeType(item.MimeType)
func (f *Fs) findExportFormat(ctx context.Context, item *drive.File) (extension, filename, mimeType string, isDocument bool) {
extension, mimeType, isDocument = f.findExportFormatByMimeType(ctx, item.MimeType)
if extension != "" {
filename = item.Name + extension
}
@@ -1569,9 +1572,9 @@ func (f *Fs) findExportFormat(item *drive.File) (extension, filename, mimeType s
// MIME type is returned
//
// When no match is found "" is returned.
func (f *Fs) findImportFormat(mimeType string) string {
func (f *Fs) findImportFormat(ctx context.Context, mimeType string) string {
mimeType = fixMimeType(mimeType)
ifs := f.importFormats()
ifs := f.importFormats(ctx)
for _, mt := range f.importMimeTypes {
if mt == mimeType {
importMimeTypes := ifs[mimeType]
@@ -1604,7 +1607,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var iErr error
_, err = f.list(ctx, []string{directoryID}, "", false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
entry, err := f.itemToDirEntry(path.Join(dir, item.Name), item)
entry, err := f.itemToDirEntry(ctx, path.Join(dir, item.Name), item)
if err != nil {
iErr = err
return true
@@ -1717,7 +1720,7 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in chan listRE
}
}
remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item)
entry, err := f.itemToDirEntry(ctx, remote, item)
if err != nil {
iErr = err
return true
@@ -1982,7 +1985,7 @@ func isShortcut(item *drive.File) bool {
// Note that we assume shortcuts can't point to shortcuts. Google
// drive web interface doesn't offer the option to create a shortcut
// to a shortcut. The documentation is silent on the issue.
func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error) {
func (f *Fs) resolveShortcut(ctx context.Context, item *drive.File) (newItem *drive.File, err error) {
if f.opt.SkipShortcuts || item.MimeType != shortcutMimeType {
return item, nil
}
@@ -1990,7 +1993,7 @@ func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error)
fs.Errorf(nil, "Expecting shortcutDetails in %v", item)
return item, nil
}
newItem, err = f.getFile(item.ShortcutDetails.TargetId, f.fileFields)
newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.fileFields)
if err != nil {
if gerr, ok := errors.Cause(err).(*googleapi.Error); ok && gerr.Code == 404 {
// 404 means dangling shortcut, so just return the shortcut with the mime type mangled
@@ -2012,18 +2015,21 @@ func (f *Fs) resolveShortcut(item *drive.File) (newItem *drive.File, err error)
// itemToDirEntry converts a drive.File to an fs.DirEntry.
// When the drive.File cannot be represented as an fs.DirEntry
// (nil, nil) is returned.
func (f *Fs) itemToDirEntry(remote string, item *drive.File) (entry fs.DirEntry, err error) {
func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *drive.File) (entry fs.DirEntry, err error) {
switch {
case item.MimeType == driveFolderType:
// cache the directory ID for later lookups
f.dirCache.Put(remote, item.Id)
when, _ := time.Parse(timeFormatIn, item.ModifiedTime)
d := fs.NewDir(remote, when).SetID(item.Id)
if len(item.Parents) > 0 {
d.SetParentID(item.Parents[0])
}
return d, nil
case f.opt.AuthOwnerOnly && !isAuthOwned(item):
// ignore object
default:
entry, err = f.newObjectWithInfo(remote, item)
entry, err = f.newObjectWithInfo(ctx, remote, item)
if err == fs.ErrorObjectNotFound {
return nil, nil
}
@@ -2090,12 +2096,12 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
importMimeType := ""
if f.importMimeTypes != nil && !f.opt.SkipGdocs {
importMimeType = f.findImportFormat(srcMimeType)
importMimeType = f.findImportFormat(ctx, srcMimeType)
if isInternalMimeType(importMimeType) {
remote = remote[:len(remote)-len(srcExt)]
exportExt, _, _ = f.findExportFormatByMimeType(importMimeType)
exportExt, _, _ = f.findExportFormatByMimeType(ctx, importMimeType)
if exportExt == "" {
return nil, errors.Errorf("No export format found for %q", importMimeType)
}
@@ -2125,8 +2131,8 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
Fields(partialFields).
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -2138,7 +2144,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err
}
}
return f.newObjectWithInfo(remote, info)
return f.newObjectWithInfo(ctx, remote, info)
}
// MergeDirs merges the contents of all the directories passed
@@ -2180,8 +2186,8 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
AddParents(dstDir.ID()).
Fields("").
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", info.Name, srcDir)
@@ -2214,14 +2220,14 @@ func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
_, err = f.svc.Files.Update(id, &info).
Fields("").
SupportsAllDrives(true).
Do()
Context(ctx).Do()
} else {
err = f.svc.Files.Delete(id).
Fields("").
SupportsAllDrives(true).
Do()
Context(ctx).Do()
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
}
@@ -2334,11 +2340,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
if isDoc {
// preserve the description on copy for docs
info, err := f.getFile(actualID(srcObj.id), "description")
info, err := f.getFile(ctx, actualID(srcObj.id), "description")
if err != nil {
return nil, errors.Wrap(err, "failed to read description for Google Doc")
fs.Errorf(srcObj, "Failed to read description for Google Doc: %v", err)
} else {
createInfo.Description = info.Description
}
createInfo.Description = info.Description
} else {
// don't overwrite the description on copy for files
// this should work for docs but it doesn't - it is probably a bug in Google Drive
@@ -2354,13 +2361,13 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Fields(partialFields).
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
newObject, err := f.newObjectWithInfo(remote, info)
newObject, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
return nil, err
}
@@ -2454,7 +2461,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
}
err := f.pacer.Call(func() (bool, error) {
err := f.svc.Files.EmptyTrash().Context(ctx).Do()
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
@@ -2472,7 +2479,7 @@ func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
var td *drive.Drive
err = f.pacer.Call(func() (bool, error) {
td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do()
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "failed to get Shared Drive info")
@@ -2495,7 +2502,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
about, err = f.svc.About.Get().Fields("storageQuota").Context(ctx).Do()
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get Drive storageQuota")
@@ -2567,14 +2574,14 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
AddParents(dstParents).
Fields(partialFields).
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return f.newObjectWithInfo(remote, info)
return f.newObjectWithInfo(ctx, remote, info)
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
@@ -2604,8 +2611,8 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
_, err = f.svc.Permissions.Create(id, permission).
Fields("").
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return "", err
@@ -2647,8 +2654,8 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
AddParents(dstDirectoryID).
Fields("").
SupportsAllDrives(true).
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -2666,7 +2673,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
go func() {
// get the StartPageToken early so all changes from now on get processed
startPageToken, err := f.changeNotifyStartPageToken()
startPageToken, err := f.changeNotifyStartPageToken(ctx)
if err != nil {
fs.Infof(f, "Failed to get StartPageToken: %s", err)
}
@@ -2691,7 +2698,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
case <-tickerC:
if startPageToken == "" {
startPageToken, err = f.changeNotifyStartPageToken()
startPageToken, err = f.changeNotifyStartPageToken(ctx)
if err != nil {
fs.Infof(f, "Failed to get StartPageToken: %s", err)
continue
@@ -2706,15 +2713,15 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
}()
}
func (f *Fs) changeNotifyStartPageToken() (pageToken string, err error) {
func (f *Fs) changeNotifyStartPageToken(ctx context.Context) (pageToken string, err error) {
var startPageToken *drive.StartPageToken
err = f.pacer.Call(func() (bool, error) {
changes := f.svc.Changes.GetStartPageToken().SupportsAllDrives(true)
if f.isTeamDrive {
changes.DriveId(f.opt.TeamDriveID)
}
startPageToken, err = changes.Do()
return f.shouldRetry(err)
startPageToken, err = changes.Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return
@@ -2743,7 +2750,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
changesCall.Spaces("appDataFolder")
}
changeList, err = changesCall.Context(ctx).Do()
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return
@@ -2939,8 +2946,8 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
Fields(partialFields).
SupportsAllDrives(true).
KeepRevisionForever(dstFs.opt.KeepRevisionForever).
Do()
return dstFs.shouldRetry(err)
Context(ctx).Do()
return dstFs.shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "shortcut creation failed")
@@ -2948,24 +2955,24 @@ func (f *Fs) makeShortcut(ctx context.Context, srcPath string, dstFs *Fs, dstPat
if isDir {
return nil, nil
}
return dstFs.newObjectWithInfo(dstPath, info)
return dstFs.newObjectWithInfo(ctx, dstPath, info)
}
// List all team drives
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.TeamDrive, err error) {
drives = []*drive.TeamDrive{}
listTeamDrives := f.svc.Teamdrives.List().PageSize(100)
func (f *Fs) listTeamDrives(ctx context.Context) (drives []*drive.Drive, err error) {
drives = []*drive.Drive{}
listTeamDrives := f.svc.Drives.List().PageSize(100)
var defaultFs Fs // default Fs with default Options
for {
var teamDrives *drive.TeamDriveList
var teamDrives *drive.DriveList
err = f.pacer.Call(func() (bool, error) {
teamDrives, err = listTeamDrives.Context(ctx).Do()
return defaultFs.shouldRetry(err)
return defaultFs.shouldRetry(ctx, err)
})
if err != nil {
return drives, errors.Wrap(err, "listing Team Drives failed")
}
drives = append(drives, teamDrives.TeamDrives...)
drives = append(drives, teamDrives.Drives...)
if teamDrives.NextPageToken == "" {
break
}
@@ -3002,8 +3009,8 @@ func (f *Fs) unTrash(ctx context.Context, dir string, directoryID string, recurs
_, err := f.svc.Files.Update(item.Id, &update).
SupportsAllDrives(true).
Fields("trashed").
Do()
return f.shouldRetry(err)
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
err = errors.Wrap(err, "failed to restore")
@@ -3045,7 +3052,7 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
info, err := f.getFile(id, f.fileFields)
info, err := f.getFile(ctx, id, f.fileFields)
if err != nil {
return errors.Wrap(err, "couldn't find id")
}
@@ -3053,7 +3060,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return errors.Errorf("can't copy directory use: rclone copy --drive-root-folder-id %s %s %s", id, fs.ConfigString(f), dest)
}
info.Name = f.opt.Enc.ToStandardName(info.Name)
o, err := f.newObjectWithInfo(info.Name, info)
o, err := f.newObjectWithInfo(ctx, info.Name, info)
if err != nil {
return err
}
@@ -3062,7 +3069,7 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return err
}
if destLeaf == "" {
destLeaf = info.Name
destLeaf = path.Base(o.Remote())
}
if destDir == "" {
destDir = "."
@@ -3354,7 +3361,7 @@ func (f *Fs) getRemoteInfoWithExport(ctx context.Context, remote string) (
found, err := f.list(ctx, []string{directoryID}, leaf, false, false, f.opt.TrashedOnly, false, func(item *drive.File) bool {
if !f.opt.SkipGdocs {
extension, exportName, exportMimeType, isDocument = f.findExportFormat(item)
extension, exportName, exportMimeType, isDocument = f.findExportFormat(ctx, item)
if exportName == leaf {
info = item
return true
@@ -3405,8 +3412,8 @@ func (o *baseObject) SetModTime(ctx context.Context, modTime time.Time) error {
info, err = o.fs.svc.Files.Update(actualID(o.id), updateInfo).
Fields(partialFields).
SupportsAllDrives(true).
Do()
return o.fs.shouldRetry(err)
Context(ctx).Do()
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -3444,7 +3451,7 @@ func (o *baseObject) httpResponse(ctx context.Context, url, method string, optio
_ = res.Body.Close() // ignore error
}
}
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
return req, nil, err
@@ -3536,8 +3543,8 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
v2File, err = o.fs.v2Svc.Files.Get(actualID(o.id)).
Fields("downloadUrl").
SupportsAllDrives(true).
Do()
return o.fs.shouldRetry(err)
Context(ctx).Do()
return o.fs.shouldRetry(ctx, err)
})
if err == nil {
fs.Debugf(o, "Using v2 download: %v", v2File.DownloadUrl)
@@ -3617,8 +3624,8 @@ func (o *baseObject) update(ctx context.Context, updateInfo *drive.File, uploadM
Fields(partialFields).
SupportsAllDrives(true).
KeepRevisionForever(o.fs.opt.KeepRevisionForever).
Do()
return o.fs.shouldRetry(err)
Context(ctx).Do()
return o.fs.shouldRetry(ctx, err)
})
return
}
@@ -3661,7 +3668,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
newO, err := o.fs.newObjectWithInfo(src.Remote(), info)
newO, err := o.fs.newObjectWithInfo(ctx, src.Remote(), info)
if err != nil {
return err
}
@@ -3685,7 +3692,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
if o.fs.importMimeTypes == nil || o.fs.opt.SkipGdocs {
return errors.Errorf("can't update google document type without --drive-import-formats")
}
importMimeType = o.fs.findImportFormat(updateInfo.MimeType)
importMimeType = o.fs.findImportFormat(ctx, updateInfo.MimeType)
if importMimeType == "" {
return errors.Errorf("no import format found for %q", srcMimeType)
}
@@ -3702,7 +3709,7 @@ func (o *documentObject) Update(ctx context.Context, in io.Reader, src fs.Object
remote := src.Remote()
remote = remote[:len(remote)-o.extLen]
newO, err := o.fs.newObjectWithInfo(remote, info)
newO, err := o.fs.newObjectWithInfo(ctx, remote, info)
if err != nil {
return err
}
@@ -3722,7 +3729,7 @@ func (o *linkObject) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo
// Remove an object
func (o *baseObject) Remove(ctx context.Context) error {
if o.parents > 1 {
if len(o.parents) > 1 {
return errors.New("can't delete safely - has multiple parents")
}
return o.fs.delete(ctx, shortcutID(o.id), o.fs.opt.UseTrash)
@@ -3738,6 +3745,14 @@ func (o *baseObject) ID() string {
return o.id
}
// ParentID returns the ID of the Object parent if known, or "" if not
func (o *baseObject) ParentID() string {
if len(o.parents) > 0 {
return o.parents[0]
}
return ""
}
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
@@ -3798,10 +3813,13 @@ var (
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
_ fs.Object = (*documentObject)(nil)
_ fs.MimeTyper = (*documentObject)(nil)
_ fs.IDer = (*documentObject)(nil)
_ fs.ParentIDer = (*documentObject)(nil)
_ fs.Object = (*linkObject)(nil)
_ fs.MimeTyper = (*linkObject)(nil)
_ fs.IDer = (*linkObject)(nil)
_ fs.ParentIDer = (*linkObject)(nil)
)

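With parents now kept as a slice of IDs, the new ParentID method (asserted against fs.ParentIDer in the block above) exposes the first one. A caller-side sketch (illustrative; o is any fs.Object obtained from this backend):

// Objects that know their parent directory advertise it through the
// optional fs.ParentIDer interface; "" means the parent is unknown.
if p, ok := o.(fs.ParentIDer); ok {
    if id := p.ParentID(); id != "" {
        fmt.Printf("parent directory ID: %s\n", id)
    }
}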
View File

@@ -111,6 +111,7 @@ func TestInternalParseExtensions(t *testing.T) {
}
func TestInternalFindExportFormat(t *testing.T) {
ctx := context.Background()
item := &drive.File{
Name: "file",
MimeType: "application/vnd.google-apps.document",
@@ -128,7 +129,7 @@ func TestInternalFindExportFormat(t *testing.T) {
} {
f := new(Fs)
f.exportExtensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(ctx, item)
assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" {
assert.Equal(t, item.Name+gotExtension, gotFilename)

View File

@@ -94,7 +94,7 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
defer googleapi.CloseBody(res)
err = googleapi.CheckResponse(res)
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -202,7 +202,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
err = rx.f.pacer.Call(func() (bool, error) {
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := rx.f.shouldRetry(err)
again, err := rx.f.shouldRetry(ctx, err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false
err = nil

View File

@@ -292,7 +292,10 @@ func (f *Fs) Features() *fs.Features {
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err == nil {
return false, err
}
@@ -307,7 +310,7 @@ func shouldRetry(err error) (bool, error) {
switch e := err.(type) {
case auth.RateLimitAPIError:
if e.RateLimitError.RetryAfter > 0 {
fs.Debugf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
fs.Logf(baseErrString, "Too many requests or write operations. Trying again in %d seconds.", e.RateLimitError.RetryAfter)
err = pacer.RetryAfterError(err, time.Duration(e.RateLimitError.RetryAfter)*time.Second)
}
return true, err
@@ -425,7 +428,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if f.root == "" {
return f, nil
}
_, err := f.findSharedFile(f.root)
_, err := f.findSharedFile(ctx, f.root)
f.root = ""
if err == nil {
return f, fs.ErrorIsFile
@@ -445,7 +448,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// root is not empty so we have to find the right shared folder if it exists
id, err := f.findSharedFolder(dir)
id, err := f.findSharedFolder(ctx, dir)
if err != nil {
// if we didn't find the specified shared folder we have to bail out here
return nil, err
@@ -453,7 +456,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// we found the specified shared folder so let's mount it
// this will add it to the user's normal root namespace and allow us
// to actually perform operations on it using the normal api endpoints.
err = f.mountSharedFolder(id)
err = f.mountSharedFolder(ctx, id)
if err != nil {
switch e := err.(type) {
case sharing.MountFolderAPIError:
@@ -477,7 +480,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "get current account failed")
@@ -495,7 +498,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.setRoot(root)
// See if the root is actually an object
_, err = f.getFileMetadata(f.slashRoot)
_, err = f.getFileMetadata(ctx, f.slashRoot)
if err == nil {
newRoot := path.Dir(f.root)
if newRoot == "." {
@@ -529,12 +532,12 @@ func (f *Fs) setRoot(root string) {
}
// getMetadata gets the metadata for a file or directory
func (f *Fs) getMetadata(objPath string) (entry files.IsMetadata, notFound bool, err error) {
func (f *Fs) getMetadata(ctx context.Context, objPath string) (entry files.IsMetadata, notFound bool, err error) {
err = f.pacer.Call(func() (bool, error) {
entry, err = f.srv.GetMetadata(&files.GetMetadataArg{
Path: f.opt.Enc.FromStandardPath(objPath),
})
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
switch e := err.(type) {
@@ -549,8 +552,8 @@ func (f *Fs) getMetadata(objPath string) (entry files.IsMetadata, notFound bool,
}
// getFileMetadata gets the metadata for a file
func (f *Fs) getFileMetadata(filePath string) (fileInfo *files.FileMetadata, err error) {
entry, notFound, err := f.getMetadata(filePath)
func (f *Fs) getFileMetadata(ctx context.Context, filePath string) (fileInfo *files.FileMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, filePath)
if err != nil {
return nil, err
}
@@ -565,8 +568,8 @@ func (f *Fs) getFileMetadata(filePath string) (fileInfo *files.FileMetadata, err
}
// getDirMetadata gets the metadata for a directory
func (f *Fs) getDirMetadata(dirPath string) (dirInfo *files.FolderMetadata, err error) {
entry, notFound, err := f.getMetadata(dirPath)
func (f *Fs) getDirMetadata(ctx context.Context, dirPath string) (dirInfo *files.FolderMetadata, err error) {
entry, notFound, err := f.getMetadata(ctx, dirPath)
if err != nil {
return nil, err
}
@@ -583,7 +586,7 @@ func (f *Fs) getDirMetadata(dirPath string) (dirInfo *files.FolderMetadata, err
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *files.FileMetadata) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *files.FileMetadata) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -592,7 +595,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *files.FileMetadata) (fs.Obje
if info != nil {
err = o.setMetadataFromEntry(info)
} else {
err = o.readEntryAndSetMetadata()
err = o.readEntryAndSetMetadata(ctx)
}
if err != nil {
return nil, err
@@ -604,14 +607,14 @@ func (f *Fs) newObjectWithInfo(remote string, info *files.FileMetadata) (fs.Obje
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
if f.opt.SharedFiles {
return f.findSharedFile(remote)
return f.findSharedFile(ctx, remote)
}
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// listSharedFolders lists all available shared folders, mounted and not mounted
// we'll need the id later so we have to return them in the original format
func (f *Fs) listSharedFolders() (entries fs.DirEntries, err error) {
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
started := false
var res *sharing.ListFoldersResult
for {
@@ -621,7 +624,7 @@ func (f *Fs) listSharedFolders() (entries fs.DirEntries, err error) {
}
err := f.pacer.Call(func() (bool, error) {
res, err = f.sharing.ListFolders(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -633,7 +636,7 @@ func (f *Fs) listSharedFolders() (entries fs.DirEntries, err error) {
}
err := f.pacer.Call(func() (bool, error) {
res, err = f.sharing.ListFoldersContinue(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "list continue")
@@ -658,8 +661,8 @@ func (f *Fs) listSharedFolders() (entries fs.DirEntries, err error) {
// findSharedFolder finds the id for a given shared folder name
// somewhat annoyingly there is no endpoint to query a shared folder by its name
// so our only option is to iterate over all shared folders
func (f *Fs) findSharedFolder(name string) (id string, err error) {
entries, err := f.listSharedFolders()
func (f *Fs) findSharedFolder(ctx context.Context, name string) (id string, err error) {
entries, err := f.listSharedFolders(ctx)
if err != nil {
return "", err
}
@@ -672,20 +675,20 @@ func (f *Fs) findSharedFolder(name string) (id string, err error) {
}
// mountSharedFolder mounts a shared folder to the root namespace
func (f *Fs) mountSharedFolder(id string) error {
func (f *Fs) mountSharedFolder(ctx context.Context, id string) error {
arg := sharing.MountFolderArg{
SharedFolderId: id,
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.sharing.MountFolder(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
// listReceivedFiles lists the files the user has been given access to via
// sharing (note this means individual files, not files contained in shared folders)
func (f *Fs) listReceivedFiles() (entries fs.DirEntries, err error) {
func (f *Fs) listReceivedFiles(ctx context.Context) (entries fs.DirEntries, err error) {
started := false
var res *sharing.ListFilesResult
for {
@@ -695,7 +698,7 @@ func (f *Fs) listReceivedFiles() (entries fs.DirEntries, err error) {
}
err := f.pacer.Call(func() (bool, error) {
res, err = f.sharing.ListReceivedFiles(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -707,7 +710,7 @@ func (f *Fs) listReceivedFiles() (entries fs.DirEntries, err error) {
}
err := f.pacer.Call(func() (bool, error) {
res, err = f.sharing.ListReceivedFilesContinue(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "list continue")
@@ -734,8 +737,8 @@ func (f *Fs) listReceivedFiles() (entries fs.DirEntries, err error) {
return entries, nil
}
func (f *Fs) findSharedFile(name string) (o *Object, err error) {
files, err := f.listReceivedFiles()
func (f *Fs) findSharedFile(ctx context.Context, name string) (o *Object, err error) {
files, err := f.listReceivedFiles(ctx)
if err != nil {
return nil, err
}
@@ -758,10 +761,10 @@ func (f *Fs) findSharedFile(name string) (o *Object, err error) {
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if f.opt.SharedFiles {
return f.listReceivedFiles()
return f.listReceivedFiles(ctx)
}
if f.opt.SharedFolders {
return f.listSharedFolders()
return f.listSharedFolders(ctx)
}
root := f.slashRoot
@@ -782,7 +785,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
switch e := err.(type) {
@@ -800,7 +803,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
}
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolderContinue(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "list continue")
@@ -830,7 +833,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
d := fs.NewDir(remote, time.Now()).SetID(folderInfo.Id)
entries = append(entries, d)
} else if fileInfo != nil {
o, err := f.newObjectWithInfo(remote, fileInfo)
o, err := f.newObjectWithInfo(ctx, remote, fileInfo)
if err != nil {
return nil, err
}
@@ -879,7 +882,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
}
// check directory doesn't exist
_, err := f.getDirMetadata(root)
_, err := f.getDirMetadata(ctx, root)
if err == nil {
return nil // directory exists already
} else if err != fs.ErrorDirNotFound {
@@ -896,7 +899,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.CreateFolderV2(&arg2)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
@@ -913,7 +916,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
if check {
// check directory exists
_, err = f.getDirMetadata(root)
_, err = f.getDirMetadata(ctx, root)
if err != nil {
return errors.Wrap(err, "Rmdir")
}
@@ -930,7 +933,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
var res *files.ListFolderResult
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.ListFolder(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
@@ -943,7 +946,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
// remove it
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: root})
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
@@ -996,7 +999,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.CopyV2(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "copy failed")
@@ -1057,7 +1060,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.MoveV2(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "move failed")
@@ -1081,17 +1084,34 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
fs.Debugf(f, "attempting to share '%s' (absolute path: %s)", remote, absPath)
createArg := sharing.CreateSharedLinkWithSettingsArg{
Path: absPath,
// FIXME this gives settings_error/not_authorized/.. errors
// and the expires setting isn't in the documentation so remove
// for now.
// Settings: &sharing.SharedLinkSettings{
// Expires: time.Now().Add(time.Duration(expire)).UTC().Round(time.Second),
// },
Settings: &sharing.SharedLinkSettings{
RequestedVisibility: &sharing.RequestedVisibility{
Tagged: dropbox.Tagged{Tag: sharing.RequestedVisibilityPublic},
},
Audience: &sharing.LinkAudience{
Tagged: dropbox.Tagged{Tag: sharing.LinkAudiencePublic},
},
Access: &sharing.RequestedLinkAccessLevel{
Tagged: dropbox.Tagged{Tag: sharing.RequestedLinkAccessLevelViewer},
},
},
}
if expire < fs.DurationOff {
expiryTime := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
createArg.Settings.Expires = expiryTime
}
// FIXME note we can't set Settings for non enterprise dropbox
// because of https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75
// however this only goes wrong when we set Expires, so as a
// work-around remove Settings unless expire is set.
if expire == fs.DurationOff {
createArg.Settings = nil
}
var linkRes sharing.IsSharedLinkMetadata
err = f.pacer.Call(func() (bool, error) {
linkRes, err = f.sharing.CreateSharedLinkWithSettings(&createArg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil && strings.Contains(err.Error(),
@@ -1104,7 +1124,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var listRes *sharing.ListSharedLinksResult
err = f.pacer.Call(func() (bool, error) {
listRes, err = f.sharing.ListSharedLinks(&listArg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -1146,7 +1166,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
dstPath := path.Join(f.slashRoot, dstRemote)
// Check if destination exists
_, err := f.getDirMetadata(dstPath)
_, err := f.getDirMetadata(ctx, dstPath)
if err == nil {
return fs.ErrorDirExists
} else if err != fs.ErrorDirNotFound {
@@ -1165,7 +1185,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.MoveV2(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "MoveDir failed")
@@ -1179,7 +1199,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var q *users.SpaceUsage
err = f.pacer.Call(func() (bool, error) {
q, err = f.users.GetSpaceUsage()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -1210,7 +1230,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
go func() {
// get the StartCursor early so all changes from now on get processed
startCursor, err := f.changeNotifyCursor()
startCursor, err := f.changeNotifyCursor(ctx)
if err != nil {
fs.Infof(f, "Failed to get StartCursor: %s", err)
}
@@ -1235,7 +1255,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
case <-tickerC:
if startCursor == "" {
startCursor, err = f.changeNotifyCursor()
startCursor, err = f.changeNotifyCursor(ctx)
if err != nil {
fs.Infof(f, "Failed to get StartCursor: %s", err)
continue
@@ -1251,7 +1271,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}()
}
func (f *Fs) changeNotifyCursor() (cursor string, err error) {
func (f *Fs) changeNotifyCursor(ctx context.Context) (cursor string, err error) {
var startCursor *files.ListFolderGetLatestCursorResult
err = f.pacer.Call(func() (bool, error) {
@@ -1266,7 +1286,7 @@ func (f *Fs) changeNotifyCursor() (cursor string, err error) {
startCursor, err = f.srv.ListFolderGetLatestCursor(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -1296,7 +1316,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
}
res, err = f.svc.ListFolderLongpoll(&args)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -1319,7 +1339,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
}
err = f.pacer.Call(func() (bool, error) {
changeList, err = f.srv.ListFolderContinue(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return "", errors.Wrap(err, "list continue")
@@ -1392,7 +1412,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != DbHashType {
return "", hash.ErrUnsupported
}
err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil {
return "", errors.Wrap(err, "failed to read hash from metadata")
}
@@ -1416,17 +1436,17 @@ func (o *Object) setMetadataFromEntry(info *files.FileMetadata) error {
}
// Reads the entry for a file from dropbox
func (o *Object) readEntry() (*files.FileMetadata, error) {
return o.fs.getFileMetadata(o.remotePath())
func (o *Object) readEntry(ctx context.Context) (*files.FileMetadata, error) {
return o.fs.getFileMetadata(ctx, o.remotePath())
}
// Read entry if not set and set metadata from it
func (o *Object) readEntryAndSetMetadata() error {
func (o *Object) readEntryAndSetMetadata(ctx context.Context) error {
// Last resort set time from client
if !o.modTime.IsZero() {
return nil
}
entry, err := o.readEntry()
entry, err := o.readEntry(ctx)
if err != nil {
return err
}
@@ -1439,12 +1459,12 @@ func (o *Object) remotePath() string {
}
// readMetaData gets the info if it hasn't already been fetched
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if !o.modTime.IsZero() {
return nil
}
// Last resort
return o.readEntryAndSetMetadata()
return o.readEntryAndSetMetadata(ctx)
}
// ModTime returns the modification time of the object
@@ -1452,7 +1472,7 @@ func (o *Object) readMetaData() (err error) {
// It attempts to read the object's mtime and if that isn't present
// falls back to the LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
err := o.readMetaData()
err := o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "Failed to read metadata: %v", err)
return time.Now()
@@ -1486,7 +1506,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
_, in, err = o.fs.sharing.GetSharedLinkFile(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -1502,7 +1522,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
_, in, err = o.fs.srv.Download(&arg)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
switch e := err.(type) {
@@ -1521,7 +1541,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// Will work optimally if size is >= uploadChunkSize. If the size is either
// unknown (i.e. -1) or smaller than uploadChunkSize, the method incurs an
// avoidable request to the Dropbox API that does not carry payload.
func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *files.CommitInfo, size int64) (entry *files.FileMetadata, err error) {
chunkSize := int64(o.fs.opt.ChunkSize)
chunks := 0
if size != -1 {
@@ -1550,7 +1570,7 @@ func (o *Object) uploadChunked(in0 io.Reader, commitInfo *files.CommitInfo, size
return false, nil
}
res, err = o.fs.srv.UploadSessionStart(&files.UploadSessionStartArg{}, chunk)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -1676,11 +1696,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var err error
var entry *files.FileMetadata
if size > int64(o.fs.opt.ChunkSize) || size == -1 {
entry, err = o.uploadChunked(in, commitInfo, size)
entry, err = o.uploadChunked(ctx, in, commitInfo, size)
} else {
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
entry, err = o.fs.srv.Upload(commitInfo, in)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}
if err != nil {
@@ -1698,7 +1718,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
_, err = o.fs.srv.DeleteV2(&files.DeleteArg{
Path: o.fs.opt.Enc.FromStandardPath(o.remotePath()),
})
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
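
The dropbox hunks above gate the new link expiry on expire < fs.DurationOff and drop Settings entirely when no expiry was requested, working around the SDK issue referenced in the comment. Below is a minimal, self-contained sketch of that sentinel pattern; Duration and DurationOff here are local stand-ins mirroring the semantics of rclone's fs.Duration and fs.DurationOff (maximum int64 means "off"), not the real types.

package main

import (
	"fmt"
	"math"
	"time"
)

// Duration is a local stand-in mirroring fs.Duration: a time.Duration
// whose maximum value doubles as the "off" sentinel used by --expire.
type Duration time.Duration

const DurationOff = Duration(math.MaxInt64)

// linkExpiry returns the expiry to send with a share-link request, or
// nil when none was asked for - mirroring the hunk above, which omits
// Settings entirely when expire == fs.DurationOff.
func linkExpiry(expire Duration) *time.Time {
	if expire == DurationOff {
		return nil
	}
	t := time.Now().Add(time.Duration(expire)).UTC().Round(time.Second)
	return &t
}

func main() {
	fmt.Println(linkExpiry(DurationOff))         // <nil>
	fmt.Println(linkExpiry(Duration(time.Hour))) // one hour from now
}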

View File

@@ -28,7 +28,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Detect this error which the integration tests provoke
// error HTTP error 403 (403 Forbidden) returned body: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}"
//
@@ -74,7 +77,7 @@ func (f *Fs) readFileInfo(ctx context.Context, url string) (*File, error) {
var file File
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &file)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't read file info")
@@ -96,7 +99,7 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
@@ -124,7 +127,7 @@ func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntr
var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
@@ -153,7 +156,7 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list files")
@@ -181,7 +184,7 @@ func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *Fol
foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list folders")
@@ -275,7 +278,7 @@ func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (respons
response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create folder")
@@ -302,7 +305,7 @@ func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (respo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't remove folder")
@@ -331,7 +334,7 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -358,7 +361,7 @@ func (f *Fs) moveFile(ctx context.Context, url string, folderID int, rename stri
response = &MoveFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -383,7 +386,7 @@ func (f *Fs) copyFile(ctx context.Context, url string, folderID int, rename stri
response = &CopyFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -405,7 +408,7 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "didnt got an upload node")
@@ -448,7 +451,7 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -482,7 +485,7 @@ func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (re
response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
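
Every shouldRetry in this diff gains a ctx parameter and an fserrors.ContextError check at the top, so once the context is cancelled the pacer stops retrying instead of hammering a doomed request. A minimal sketch of the pattern follows; contextError is a hypothetical stand-in for fserrors.ContextError, and the status check is a much simplified version of the backend's real retry table.

package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// contextError plays the role of fserrors.ContextError: when ctx is
// done it rewrites err to the context's error and reports true,
// telling the caller to stop retrying immediately.
func contextError(ctx context.Context, err *error) bool {
	if ctxErr := ctx.Err(); ctxErr != nil {
		*err = ctxErr
		return true
	}
	return false
}

// shouldRetry has the shape this diff gives every backend: context
// check first, then a (deliberately simplified) retry classification.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if contextError(ctx, &err) {
		return false, err
	}
	if resp != nil && (resp.StatusCode == 429 || resp.StatusCode >= 500) {
		return true, err
	}
	return err != nil, err
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
	defer cancel()
	time.Sleep(2 * time.Millisecond) // let the deadline pass

	retry, err := shouldRetry(ctx, nil, errors.New("connection reset"))
	fmt.Println(retry, err) // false context deadline exceeded
}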

View File

@@ -348,8 +348,10 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err
}
if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("unexpected amount of files")
if len(fileUploadResponse.Links) == 0 {
return nil, errors.New("upload response not found")
} else if len(fileUploadResponse.Links) > 1 {
fs.Debugf(remote, "Multiple upload responses found, using the first")
}
link := fileUploadResponse.Links[0]
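
The putUnchecked hunk above loosens the old exact-one-link assertion: zero links is still a hard error, but extra links now only produce a debug message and the first one wins. A tiny sketch of that tolerant handling; firstLink is an illustrative helper, not the backend's code.

package main

import (
	"errors"
	"fmt"
)

// firstLink applies the hunk's logic: no links is fatal, surplus links
// are merely noted and the first is used.
func firstLink(links []string) (string, error) {
	if len(links) == 0 {
		return "", errors.New("upload response not found")
	}
	if len(links) > 1 {
		fmt.Println("Multiple upload responses found, using the first")
	}
	return links[0], nil
}

func main() {
	fmt.Println(firstLink([]string{"https://example.com/a", "https://example.com/b"}))
	fmt.Println(firstLink(nil))
}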

View File

@@ -94,7 +94,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {

View File

@@ -228,7 +228,10 @@ var retryStatusCodes = []struct {
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error, status api.OKError) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error, status api.OKError) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -401,7 +404,7 @@ func (f *Fs) rpc(ctx context.Context, function string, p params, result api.OKEr
// Refresh the body each retry
opts.Body = strings.NewReader(data.Encode())
resp, err = f.srv.CallJSON(ctx, &opts, nil, result)
return f.shouldRetry(resp, err, result)
return f.shouldRetry(ctx, resp, err, result)
})
if err != nil {
return resp, err
@@ -1277,7 +1280,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &uploader)
return o.fs.shouldRetry(resp, err, nil)
return o.fs.shouldRetry(ctx, resp, err, nil)
})
if err != nil {
return errors.Wrap(err, "failed to upload")
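
This backend's shouldRetry is unusual in that it consults a decoded API status as well as the transport error. Below is one plausible shape of that layering, not the backend's actual logic: okError and status are assumptions standing in for api.OKError, and the real code also classifies certain API status codes as retryable rather than treating every API error as final.

package main

import (
	"context"
	"errors"
	"fmt"
)

// okError stands in for api.OKError: a decoded response body that can
// report an application-level failure even when HTTP succeeded.
type okError interface {
	Err() error
}

type status struct{ msg string }

func (s status) Err() error {
	if s.msg == "" {
		return nil
	}
	return errors.New(s.msg)
}

// shouldRetry checks the context, then the transport error, then the
// API-level status; any API error is final here for brevity.
func shouldRetry(ctx context.Context, err error, st okError) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	if err != nil {
		return true, err // transport problem: worth another attempt
	}
	if st != nil {
		if apiErr := st.Err(); apiErr != nil {
			return false, apiErr
		}
	}
	return false, nil
}

func main() {
	retry, err := shouldRetry(context.Background(), nil, status{"quota exceeded"})
	fmt.Println(retry, err) // false quota exceeded
}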

View File

@@ -21,6 +21,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
@@ -33,6 +34,12 @@ var (
currentUser = env.CurrentUser()
)
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
@@ -104,6 +111,11 @@ given, rclone will empty the connection pool.
Set to 0 to keep connections indefinitely.
`,
Advanced: true,
}, {
Name: "close_timeout",
Help: "Maximum time to wait for a response to close.",
Default: fs.Duration(60 * time.Second),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -132,6 +144,7 @@ type Options struct {
DisableEPSV bool `config:"disable_epsv"`
DisableMLSD bool `config:"disable_mlsd"`
IdleTimeout fs.Duration `config:"idle_timeout"`
CloseTimeout fs.Duration `config:"close_timeout"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -151,6 +164,7 @@ type Fs struct {
drain *time.Timer // used to drain the pool when we stop using the connections
tokens *pacer.TokenDispenser
tlsConf *tls.Config
pacer *fs.Pacer // pacer for FTP connections
}
// Object describes an FTP file
@@ -244,8 +258,24 @@ func (d *dialCtx) dial(network, address string) (net.Conn, error) {
return conn, err
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
switch errX := err.(type) {
case *textproto.Error:
switch errX.Code {
case ftp.StatusNotAvailable:
return true, err
}
}
return fserrors.ShouldRetry(err), err
}
// Open a new connection to the FTP server.
func (f *Fs) ftpConnection(ctx context.Context) (*ftp.ServerConn, error) {
func (f *Fs) ftpConnection(ctx context.Context) (c *ftp.ServerConn, err error) {
fs.Debugf(f, "Connecting to FTP server")
dCtx := dialCtx{f, ctx}
ftpConfig := []ftp.DialOption{ftp.DialWithDialFunc(dCtx.dial)}
@@ -267,18 +297,22 @@ func (f *Fs) ftpConnection(ctx context.Context) (*ftp.ServerConn, error) {
if f.ci.Dump&(fs.DumpHeaders|fs.DumpBodies|fs.DumpRequests|fs.DumpResponses) != 0 {
ftpConfig = append(ftpConfig, ftp.DialWithDebugOutput(&debugLog{auth: f.ci.Dump&fs.DumpAuth != 0}))
}
c, err := ftp.Dial(f.dialAddr, ftpConfig...)
err = f.pacer.Call(func() (bool, error) {
c, err = ftp.Dial(f.dialAddr, ftpConfig...)
if err != nil {
return shouldRetry(ctx, err)
}
err = c.Login(f.user, f.pass)
if err != nil {
_ = c.Quit()
return shouldRetry(ctx, err)
}
return false, nil
})
if err != nil {
fs.Errorf(f, "Error while Dialing %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Dial")
err = errors.Wrapf(err, "failed to make FTP connection to %q", f.dialAddr)
}
err = c.Login(f.user, f.pass)
if err != nil {
_ = c.Quit()
fs.Errorf(f, "Error while Logging in into %s: %s", f.dialAddr, err)
return nil, errors.Wrap(err, "ftpConnection Login")
}
return c, nil
return c, err
}
// Get an FTP connection from the pool, or open a new one
@@ -411,6 +445,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
dialAddr: dialAddr,
tokens: pacer.NewTokenDispenser(opt.Concurrency),
tlsConf: tlsConfig,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@@ -931,8 +966,8 @@ func (f *ftpReadCloser) Close() error {
go func() {
errchan <- f.rc.Close()
}()
// Wait for Close for up to 60 seconds
timer := time.NewTimer(60 * time.Second)
// Wait for Close for up to 60 seconds by default
timer := time.NewTimer(time.Duration(f.f.opt.CloseTimeout))
select {
case err = <-errchan:
timer.Stop()
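
The FTP hunks above wrap Dial and Login in a pacer and classify textproto errors carrying ftp.StatusNotAvailable (FTP reply 421) as retryable, so a server that is momentarily refusing connections gets another chance. A self-contained sketch of that classification, assuming 421 as the code behind ftp.StatusNotAvailable; the real helper also falls back to fserrors.ShouldRetry for other errors.

package main

import (
	"context"
	"errors"
	"fmt"
	"net/textproto"
)

// statusNotAvailable is FTP reply 421 ("service not available"),
// assumed here to be what ftp.StatusNotAvailable names.
const statusNotAvailable = 421

// shouldRetry stops on a dead context, retries on a 421, and otherwise
// returns the error (the real helper also consults fserrors.ShouldRetry).
func shouldRetry(ctx context.Context, err error) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	var tpErr *textproto.Error
	if errors.As(err, &tpErr) && tpErr.Code == statusNotAvailable {
		return true, err // server temporarily refusing connections
	}
	return false, err
}

func main() {
	cause := &textproto.Error{Code: statusNotAvailable, Msg: "too many connections"}
	retry, err := shouldRetry(context.Background(), cause)
	fmt.Println(retry, err) // true 421 too many connections
}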

View File

@@ -329,7 +329,10 @@ func (f *Fs) Features() *fs.Features {
}
// shouldRetry determines whether a given err rates being retried
func shouldRetry(err error) (again bool, errOut error) {
func shouldRetry(ctx context.Context, err error) (again bool, errOut error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
again = false
if err != nil {
if fserrors.ShouldRetry(err) {
@@ -455,7 +458,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err == nil {
newRoot := path.Dir(f.root)
@@ -521,7 +524,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) {
objects, err = list.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
@@ -624,7 +627,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) {
buckets, err = listBuckets.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -750,7 +753,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
// service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err == nil {
// Bucket already exists
@@ -785,7 +788,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
insertBucket.PredefinedAcl(f.opt.BucketACL)
}
_, err = insertBucket.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}, nil)
}
@@ -802,7 +805,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
})
}
@@ -848,7 +851,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
for {
err = f.pacer.Call(func() (bool, error) {
rewriteResponse, err = rewriteRequest.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -941,7 +944,7 @@ func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, er
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
if gErr, ok := err.(*googleapi.Error); ok {
@@ -1012,7 +1015,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = copyObject.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1043,7 +1046,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
_ = res.Body.Close() // ignore error
}
}
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -1109,7 +1112,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
}
newObject, err = insertObject.Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1124,7 +1127,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
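
All of these backends funnel their requests through f.pacer.Call, whose contract is what makes the new shouldRetry signature matter: the callback reports (retry, err), and the pacer keeps re-invoking it while retry is true. A stripped-down sketch of that loop; the real fs.Pacer also rate-limits, backs off exponentially, and caps attempts at --low-level-retries.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// call keeps invoking fn while it reports retryable, sleeping a little
// between attempts and aborting if the context is cancelled.
func call(ctx context.Context, fn func() (bool, error)) error {
	for {
		retry, err := fn()
		if !retry {
			return err
		}
		select {
		case <-time.After(10 * time.Millisecond):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	attempts := 0
	err := call(context.Background(), func() (bool, error) {
		attempts++
		if attempts < 3 {
			return true, errors.New("transient")
		}
		return false, nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}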

View File

@@ -240,7 +240,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -329,7 +332,7 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
var openIDconfig map[string]interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", errors.Wrap(err, "couldn't read openID config")
@@ -358,7 +361,7 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't read user info")
@@ -389,7 +392,7 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
var res interface{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't revoke token")
@@ -476,7 +479,7 @@ func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err erro
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't list albums")
@@ -531,7 +534,7 @@ func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't list files")
@@ -675,7 +678,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create album")
@@ -810,7 +813,7 @@ func (o *Object) Size() int64 {
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
fs.Debugf(o, "Reading size failed: %v", err)
@@ -861,7 +864,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't get media item")
@@ -938,7 +941,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -993,10 +996,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
token, err = rest.ReadBody(resp)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't upload file")
@@ -1024,7 +1027,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var result api.BatchCreateResponse
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to create media item")
@@ -1069,7 +1072,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't delete item from album")

View File

@@ -109,7 +109,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
dirname := path.Dir(realpath)
fs.Debugf(o.fs, "update [%s]", realpath)
err := o.fs.client.MkdirAll(dirname, 755)
err := o.fs.client.MkdirAll(dirname, 0755)
if err != nil {
return err
}
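
The one-character hdfs fix above is easy to miss: Go has no special casing for permission arguments, so MkdirAll(dirname, 755) passes decimal 755, which is octal 1363 - sticky bit set, owner read bit dropped - rather than the intended rwxr-xr-x. A two-line demonstration:

package main

import "fmt"

func main() {
	// 755 is a decimal literal; as permission bits it is octal 01363,
	// which is what the pre-fix MkdirAll call was actually requesting.
	fmt.Printf("%#o\n", 755)  // 01363
	fmt.Printf("%#o\n", 0755) // 0755, i.e. rwxr-xr-x
}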

View File

@@ -15,7 +15,7 @@ import (
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/rest"
@@ -47,7 +47,7 @@ func prepareServer(t *testing.T) (configmap.Simple, func()) {
ts := httptest.NewServer(handler)
// Configure the remote
config.LoadConfig(context.Background())
configfile.LoadConfig(context.Background())
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true

View File

@@ -235,7 +235,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -615,7 +618,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Jo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
@@ -854,7 +857,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (jf *api.JottaFolder, e
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &jf)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -883,7 +886,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
var result api.JottaFolder
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -995,7 +998,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
var result api.JottaFolder // Could be JottaFileDirList, but JottaFolder is close enough
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
@@ -1101,7 +1104,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't purge directory")
@@ -1140,7 +1143,7 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dest string) (info *ap
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1268,7 +1271,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var result api.JottaFile
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
@@ -1446,7 +1449,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1559,7 +1562,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var response api.AllocateFileResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.apiSrv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1624,7 +1627,7 @@ func (o *Object) Remove(ctx context.Context) error {
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallXML(ctx, &opts, nil, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}

View File

@@ -42,8 +42,9 @@ func init() {
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Name: "nounc",
Help: "Disable UNC (long path names) conversion on Windows",
Advanced: runtime.GOOS != "windows",
Examples: []fs.OptionExample{{
Value: "true",
Help: "Disables long file names",
@@ -1144,6 +1145,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = file.PreAllocate(src.Size(), f)
if err != nil {
fs.Debugf(o, "Failed to pre-allocate: %v", err)
if err == file.ErrDiskFull {
_ = f.Close()
return err
}
}
}
out = f
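
The local backend hunk above turns a full disk during pre-allocation from a logged warning into a hard failure: the file handle is closed and the error returned before any bytes are copied. A sketch of that control flow, with errDiskFull and preAllocate as local stand-ins for file.ErrDiskFull and file.PreAllocate from rclone's lib/file package:

package main

import (
	"errors"
	"fmt"
	"os"
)

// errDiskFull stands in for file.ErrDiskFull, the sentinel returned
// when pre-allocation finds no space.
var errDiskFull = errors.New("disk full")

// preAllocate is a stub that pretends the disk is out of space.
func preAllocate(size int64, f *os.File) error { return errDiskFull }

// update mirrors the hunk above: a full disk closes the handle and
// aborts, any other pre-allocation failure is only logged and the
// copy is allowed to proceed.
func update(f *os.File, size int64) error {
	if err := preAllocate(size, f); err != nil {
		fmt.Printf("Failed to pre-allocate: %v\n", err)
		if errors.Is(err, errDiskFull) {
			_ = f.Close()
			return err
		}
	}
	return nil // continue with the copy
}

func main() {
	f, err := os.CreateTemp("", "prealloc")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	fmt.Println(update(f, 1<<20)) // disk full
}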

View File

@@ -234,7 +234,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this response and err
// deserve to be retried. It returns the err as a convenience.
// Retries password authorization (once) in a special case of access denied.
func shouldRetry(res *http.Response, err error, f *Fs, opts *rest.Opts) (bool, error) {
func shouldRetry(ctx context.Context, res *http.Response, err error, f *Fs, opts *rest.Opts) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if res != nil && res.StatusCode == 403 && f.opt.Password != "" && !f.passFailed {
reAuthErr := f.reAuthorize(opts, err)
return reAuthErr == nil, err // return an original error
@@ -600,7 +603,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
var info api.ItemInfoResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
@@ -736,7 +739,7 @@ func (f *Fs) listM1(ctx context.Context, dirPath string, offset int, limit int)
)
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
@@ -800,7 +803,7 @@ func (f *Fs) listBin(ctx context.Context, dirPath string, depth int) (entries fs
var res *http.Response
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
closeBody(res)
@@ -1073,7 +1076,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) error {
var res *http.Response
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
closeBody(res)
@@ -1216,7 +1219,7 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) error {
var response api.GenericResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
switch {
@@ -1288,7 +1291,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var response api.GenericBodyResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
@@ -1392,7 +1395,7 @@ func (f *Fs) moveItemBin(ctx context.Context, srcPath, dstPath, opName string) e
var res *http.Response
err = f.pacer.Call(func() (bool, error) {
res, err = f.srv.Call(ctx, &opts)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
closeBody(res)
@@ -1483,7 +1486,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var response api.GenericBodyResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err == nil && response.Body != "" {
@@ -1524,7 +1527,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
var response api.CleanupResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
return err
@@ -1557,7 +1560,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var info api.UserInfoResponse
err = f.pacer.Call(func() (bool, error) {
res, err := f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(res, err, f, &opts)
return shouldRetry(ctx, res, err, f, &opts)
})
if err != nil {
return nil, err
@@ -2076,7 +2079,7 @@ func (o *Object) addFileMetaData(ctx context.Context, overwrite bool) error {
var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(res, err, o.fs, &opts)
return shouldRetry(ctx, res, err, o.fs, &opts)
})
if err != nil {
closeBody(res)
@@ -2172,7 +2175,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
opts.RootURL = server
res, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(res, err, o.fs, &opts)
return shouldRetry(ctx, res, err, o.fs, &opts)
})
if err != nil {
if res != nil && res.Body != nil {

View File

@@ -30,6 +30,7 @@ import (
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
@@ -158,7 +159,10 @@ func parsePath(path string) (root string) {
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// Let the mega library handle the low level retries
return false, err
/*
@@ -171,8 +175,8 @@ func shouldRetry(err error) (bool, error) {
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(remote string) (info *mega.Node, err error) {
rootNode, err := f.findRoot(false)
func (f *Fs) readMetaDataForPath(ctx context.Context, remote string) (info *mega.Node, err error) {
rootNode, err := f.findRoot(ctx, false)
if err != nil {
return nil, err
}
@@ -237,7 +241,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}).Fill(ctx, f)
// Find the root node and check if it is a file or not
_, err = f.findRoot(false)
_, err = f.findRoot(ctx, false)
switch err {
case nil:
// root node found and is a directory
@@ -307,8 +311,8 @@ func (f *Fs) findObject(rootNode *mega.Node, file string) (node *mega.Node, err
// lookupDir looks up the node for the directory of the name given
//
// if create is true it tries to create the root directory if not found
func (f *Fs) lookupDir(dir string) (*mega.Node, error) {
rootNode, err := f.findRoot(false)
func (f *Fs) lookupDir(ctx context.Context, dir string) (*mega.Node, error) {
rootNode, err := f.findRoot(ctx, false)
if err != nil {
return nil, err
}
@@ -316,15 +320,15 @@ func (f *Fs) lookupDir(dir string) (*mega.Node, error) {
}
// lookupParentDir finds the parent node for the remote passed in
func (f *Fs) lookupParentDir(remote string) (dirNode *mega.Node, leaf string, err error) {
func (f *Fs) lookupParentDir(ctx context.Context, remote string) (dirNode *mega.Node, leaf string, err error) {
parent, leaf := path.Split(remote)
dirNode, err = f.lookupDir(parent)
dirNode, err = f.lookupDir(ctx, parent)
return dirNode, leaf, err
}
// mkdir makes the directory and any parent directories for the
// directory of the name given
func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error) {
func (f *Fs) mkdir(ctx context.Context, rootNode *mega.Node, dir string) (node *mega.Node, err error) {
f.mkdirMu.Lock()
defer f.mkdirMu.Unlock()
@@ -358,7 +362,7 @@ func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error)
// create directory called name in node
err = f.pacer.Call(func() (bool, error) {
node, err = f.srv.CreateDir(name, node)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "mkdir create node failed")
@@ -368,20 +372,20 @@ func (f *Fs) mkdir(rootNode *mega.Node, dir string) (node *mega.Node, err error)
}
// mkdirParent creates the parent directory of remote
func (f *Fs) mkdirParent(remote string) (dirNode *mega.Node, leaf string, err error) {
rootNode, err := f.findRoot(true)
func (f *Fs) mkdirParent(ctx context.Context, remote string) (dirNode *mega.Node, leaf string, err error) {
rootNode, err := f.findRoot(ctx, true)
if err != nil {
return nil, "", err
}
parent, leaf := path.Split(remote)
dirNode, err = f.mkdir(rootNode, parent)
dirNode, err = f.mkdir(ctx, rootNode, parent)
return dirNode, leaf, err
}
// findRoot looks up the root directory node and returns it.
//
// if create is true it tries to create the root directory if not found
func (f *Fs) findRoot(create bool) (*mega.Node, error) {
func (f *Fs) findRoot(ctx context.Context, create bool) (*mega.Node, error) {
f.rootNodeMu.Lock()
defer f.rootNodeMu.Unlock()
@@ -403,7 +407,7 @@ func (f *Fs) findRoot(create bool) (*mega.Node, error) {
}
//..not found so create the root directory
f._rootNode, err = f.mkdir(absRoot, f.root)
f._rootNode, err = f.mkdir(ctx, absRoot, f.root)
return f._rootNode, err
}
@@ -433,7 +437,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
fs.Debugf(f, "Deleting trash %q", f.opt.Enc.ToStandardName(item.GetName()))
deleteErr := f.pacer.Call(func() (bool, error) {
err := f.srv.Delete(item, true)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if deleteErr != nil {
err = deleteErr
@@ -447,7 +451,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *mega.Node) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
@@ -457,7 +461,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error
// Set info
err = o.setMetaData(info)
} else {
err = o.readMetaData() // reads info and meta, returning an error
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
@@ -468,7 +472,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *mega.Node) (fs.Object, error
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(remote, nil)
return f.newObjectWithInfo(ctx, remote, nil)
}
// list the objects into the function supplied
@@ -506,7 +510,7 @@ func (f *Fs) list(ctx context.Context, dir *mega.Node, fn listFn) (found bool, e
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
dirNode, err := f.lookupDir(dir)
dirNode, err := f.lookupDir(ctx, dir)
if err != nil {
return nil, err
}
@@ -518,7 +522,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
d := fs.NewDir(remote, info.GetTimeStamp()).SetID(info.GetHash())
entries = append(entries, d)
case mega.FILE:
o, err := f.newObjectWithInfo(remote, info)
o, err := f.newObjectWithInfo(ctx, remote, info)
if err != nil {
iErr = err
return true
@@ -542,8 +546,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Returns the dirNode, object, leaf and error
//
// Used to create new objects
func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
dirNode, leaf, err = f.mkdirParent(remote)
func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, dirNode *mega.Node, leaf string, err error) {
dirNode, leaf, err = f.mkdirParent(ctx, remote)
if err != nil {
return nil, nil, leaf, err
}
@@ -565,7 +569,7 @@ func (f *Fs) createObject(remote string, modTime time.Time, size int64) (o *Obje
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.newObjectWithInfo(src.Remote(), nil)
existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
@@ -591,7 +595,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
size := src.Size()
modTime := src.ModTime(ctx)
o, _, _, err := f.createObject(remote, modTime, size)
o, _, _, err := f.createObject(ctx, remote, modTime, size)
if err != nil {
return nil, err
}
@@ -600,30 +604,30 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
// Mkdir creates the directory if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
rootNode, err := f.findRoot(true)
rootNode, err := f.findRoot(ctx, true)
if err != nil {
return err
}
_, err = f.mkdir(rootNode, dir)
_, err = f.mkdir(ctx, rootNode, dir)
return errors.Wrap(err, "Mkdir failed")
}
// deleteNode removes a file or directory, observing useTrash
func (f *Fs) deleteNode(node *mega.Node) (err error) {
func (f *Fs) deleteNode(ctx context.Context, node *mega.Node) (err error) {
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Delete(node, f.opt.HardDelete)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return err
}
// purgeCheck removes the directory dir, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(dir string, check bool) error {
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
f.mkdirMu.Lock()
defer f.mkdirMu.Unlock()
rootNode, err := f.findRoot(false)
rootNode, err := f.findRoot(ctx, false)
if err != nil {
return err
}
@@ -644,7 +648,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
waitEvent := f.srv.WaitEventsStart()
err = f.deleteNode(dirNode)
err = f.deleteNode(ctx, dirNode)
if err != nil {
return errors.Wrap(err, "delete directory node failed")
}
@@ -662,7 +666,7 @@ func (f *Fs) purgeCheck(dir string, check bool) error {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(dir, true)
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
@@ -676,13 +680,13 @@ func (f *Fs) Precision() time.Duration {
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(dir, false)
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder (srcFs, srcRemote, info) to (f, dstRemote)
//
// info will be updated
func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node) (err error) {
func (f *Fs) move(ctx context.Context, dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node) (err error) {
var (
dstFs = f
srcDirNode, dstDirNode *mega.Node
@@ -692,12 +696,12 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if dstRemote != "" {
// lookup or create the destination parent directory
dstDirNode, dstLeaf, err = dstFs.mkdirParent(dstRemote)
dstDirNode, dstLeaf, err = dstFs.mkdirParent(ctx, dstRemote)
} else {
// find or create the parent of the root directory
absRoot := dstFs.srv.FS.GetRoot()
dstParent, dstLeaf = path.Split(dstFs.root)
dstDirNode, err = dstFs.mkdir(absRoot, dstParent)
dstDirNode, err = dstFs.mkdir(ctx, absRoot, dstParent)
}
if err != nil {
return errors.Wrap(err, "server-side move failed to make dst parent dir")
@@ -705,7 +709,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
if srcRemote != "" {
// lookup the existing parent directory
srcDirNode, srcLeaf, err = srcFs.lookupParentDir(srcRemote)
srcDirNode, srcLeaf, err = srcFs.lookupParentDir(ctx, srcRemote)
} else {
// lookup the existing root parent
absRoot := srcFs.srv.FS.GetRoot()
@@ -721,7 +725,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
//log.Printf("move src %p %q dst %p %q", srcDirNode, srcDirNode.GetName(), dstDirNode, dstDirNode.GetName())
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "server-side move failed")
@@ -735,7 +739,7 @@ func (f *Fs) move(dstRemote string, srcFs *Fs, srcRemote string, info *mega.Node
//log.Printf("rename %q to %q", srcLeaf, dstLeaf)
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Rename(info, f.opt.Enc.FromStandardName(dstLeaf))
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "server-side rename failed")
@@ -767,7 +771,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
// Do the move
err := f.move(remote, srcObj.fs, srcObj.remote, srcObj.info)
err := f.move(ctx, remote, srcObj.fs, srcObj.remote, srcObj.info)
if err != nil {
return nil, err
}
@@ -798,13 +802,13 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// find the source
info, err := srcFs.lookupDir(srcRemote)
info, err := srcFs.lookupDir(ctx, srcRemote)
if err != nil {
return err
}
// check the destination doesn't exist
_, err = dstFs.lookupDir(dstRemote)
_, err = dstFs.lookupDir(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
} else if err != fs.ErrorDirNotFound {
@@ -812,7 +816,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
// Do the move
err = f.move(dstRemote, srcFs, srcRemote, info)
err = f.move(ctx, dstRemote, srcFs, srcRemote, info)
if err != nil {
return err
}
@@ -838,7 +842,7 @@ func (f *Fs) Hashes() hash.Set {
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (link string, err error) {
root, err := f.findRoot(false)
root, err := f.findRoot(ctx, false)
if err != nil {
return "", errors.Wrap(err, "PublicLink failed to find root node")
}
@@ -886,7 +890,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
fs.Infof(srcDir, "merging %q", f.opt.Enc.ToStandardName(info.GetName()))
err = f.pacer.Call(func() (bool, error) {
err = f.srv.Move(info, dstDirNode)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed on %q in %v", f.opt.Enc.ToStandardName(info.GetName()), srcDir)
@@ -894,7 +898,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
// rmdir (into trash) the now empty source directory
fs.Infof(srcDir, "removing empty directory")
err = f.deleteNode(srcDirNode)
err = f.deleteNode(ctx, srcDirNode)
if err != nil {
return errors.Wrapf(err, "MergeDirs move failed to rmdir %q", srcDir)
}
@@ -908,7 +912,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
q, err = f.srv.GetQuota()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get Mega Quota")
@@ -963,11 +967,11 @@ func (o *Object) setMetaData(info *mega.Node) (err error) {
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData() (err error) {
func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.info != nil {
return nil
}
info, err := o.fs.readMetaDataForPath(o.remote)
info, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if err == fs.ErrorDirNotFound {
err = fs.ErrorObjectNotFound
@@ -998,6 +1002,7 @@ func (o *Object) Storable() bool {
// openObject represents a download in progress
type openObject struct {
ctx context.Context
mu sync.Mutex
o *Object
d *mega.Download
@@ -1008,14 +1013,14 @@ type openObject struct {
}
// get the next chunk
func (oo *openObject) getChunk() (err error) {
func (oo *openObject) getChunk(ctx context.Context) (err error) {
if oo.id >= oo.d.Chunks() {
return io.EOF
}
var chunk []byte
err = oo.o.fs.pacer.Call(func() (bool, error) {
chunk, err = oo.d.DownloadChunk(oo.id)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return err
@@ -1045,7 +1050,7 @@ func (oo *openObject) Read(p []byte) (n int, err error) {
oo.skip -= int64(size)
}
if len(oo.chunk) == 0 {
err = oo.getChunk()
err = oo.getChunk(oo.ctx)
if err != nil {
return 0, err
}
@@ -1068,7 +1073,7 @@ func (oo *openObject) Close() (err error) {
}
err = oo.o.fs.pacer.Call(func() (bool, error) {
err = oo.d.Finish()
return shouldRetry(err)
return shouldRetry(oo.ctx, err)
})
if err != nil {
return errors.Wrap(err, "failed to finish download")
@@ -1096,13 +1101,14 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var d *mega.Download
err = o.fs.pacer.Call(func() (bool, error) {
d, err = o.fs.srv.NewDownload(o.info)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "open download file failed")
}
oo := &openObject{
ctx: ctx,
o: o,
d: d,
skip: offset,
@@ -1125,7 +1131,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
remote := o.Remote()
// Create the parent directory
dirNode, leaf, err := o.fs.mkdirParent(remote)
dirNode, leaf, err := o.fs.mkdirParent(ctx, remote)
if err != nil {
return errors.Wrap(err, "update make parent dir failed")
}
@@ -1133,7 +1139,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var u *mega.Upload
err = o.fs.pacer.Call(func() (bool, error) {
u, err = o.fs.srv.NewUpload(dirNode, o.fs.opt.Enc.FromStandardName(leaf), size)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "upload file failed to create session")
@@ -1154,7 +1160,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.Call(func() (bool, error) {
err = u.UploadChunk(id, chunk)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "upload file failed to upload chunk")
@@ -1165,7 +1171,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var info *mega.Node
err = o.fs.pacer.Call(func() (bool, error) {
info, err = u.Finish()
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "failed to finish upload")
@@ -1173,7 +1179,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// If the upload succeeded and the original object existed, then delete it
if o.info != nil {
err = o.fs.deleteNode(o.info)
err = o.fs.deleteNode(ctx, o.info)
if err != nil {
return errors.Wrap(err, "upload failed to remove old version")
}
@@ -1185,7 +1191,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
err := o.fs.deleteNode(o.info)
err := o.fs.deleteNode(ctx, o.info)
if err != nil {
return errors.Wrap(err, "Remove object failed")
}
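Every hunk in this file makes the same move: shouldRetry gains a ctx parameter so that a cancelled context aborts the pacer's retry loop instead of being retried like any other transient error. Here is a stand-alone sketch of that shape, with toy stand-ins for rclone's fs.Pacer and fserrors.ContextError (the names call and shouldRetry below are illustrative, not rclone's real implementations):

package main

import (
    "context"
    "errors"
    "fmt"
    "time"
)

// shouldRetry reports whether err deserves a retry. A cancelled or
// expired context always wins and stops the retrying.
func shouldRetry(ctx context.Context, err error) (bool, error) {
    if ctxErr := ctx.Err(); ctxErr != nil {
        return false, ctxErr // context gone - never retry
    }
    // Toy classification: treat every non-nil error as retryable.
    return err != nil, err
}

// call retries fn with a fixed delay until it succeeds, declines to
// retry, or the context is cancelled - a toy version of pacer.Call.
func call(ctx context.Context, fn func() (bool, error)) error {
    for {
        retry, err := fn()
        if !retry {
            return err
        }
        select {
        case <-time.After(100 * time.Millisecond):
        case <-ctx.Done():
            return ctx.Err()
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
    defer cancel()
    err := call(ctx, func() (bool, error) {
        return shouldRetry(ctx, errors.New("transient failure"))
    })
    fmt.Println(err) // context deadline exceeded - not an endless retry loop
}

Without the ctx check the toy loop above would spin forever on a permanently failing call; with it, cancellation surfaces promptly as the returned error.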

View File

@@ -302,7 +302,6 @@ func init() {
m.Set(configDriveID, finalDriveID)
m.Set(configDriveType, rootItem.ParentReference.DriveType)
config.SaveConfig()
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Name: "region",
@@ -362,6 +361,11 @@ This will only work if you are copying between two OneDrive *Personal* drives AN
the files to copy are already shared between them. In other cases, rclone will
fall back to normal copy (which will be slightly slower).`,
Advanced: true,
}, {
Name: "list_chunk",
Help: "Size of listing chunk.",
Default: 1000,
Advanced: true,
}, {
Name: "no_versions",
Default: false,
@@ -469,6 +473,7 @@ type Options struct {
DriveType string `config:"drive_type"`
ExposeOneNoteFiles bool `config:"expose_onenote_files"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ListChunk int64 `config:"list_chunk"`
NoVersions bool `config:"no_versions"`
LinkScope string `config:"link_scope"`
LinkType string `config:"link_type"`
@@ -550,7 +555,10 @@ var errAsyncJobAccessDenied = errors.New("async job failed - access denied")
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
retry := false
if resp != nil {
switch resp.StatusCode {
@@ -597,7 +605,7 @@ func (f *Fs) readMetaDataForPathRelativeToID(ctx context.Context, normalizedID s
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return info, resp, err
@@ -613,7 +621,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
opts.Path = strings.TrimSuffix(opts.Path, ":")
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return info, resp, err
}
@@ -869,7 +877,7 @@ func (f *Fs) CreateDir(ctx context.Context, dirID, leaf string) (newID string, e
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -894,14 +902,14 @@ type listAllFn func(*api.Item) bool
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
// Top parameter asks for bigger pages of data
// https://dev.onedrive.com/odata/optional-query-parameters.htm
opts := f.newOptsCall(dirID, "GET", "/children?$top=1000")
opts := f.newOptsCall(dirID, "GET", fmt.Sprintf("/children?$top=%d", f.opt.ListChunk))
OUTER:
for {
var result api.ListChildrenResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -1038,7 +1046,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -1194,7 +1202,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyReq, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1287,7 +1295,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1354,7 +1362,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var info api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1380,7 +1388,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &drive)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -1421,7 +1429,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
Password: f.opt.LinkPassword,
}
if expire < fs.Duration(time.Hour*24*365*100) {
if expire < fs.DurationOff {
expiry := time.Now().Add(time.Duration(expire))
share.Expiry = &expiry
}
@@ -1430,7 +1438,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var result api.CreateShareLinkResponse
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &share, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
fmt.Println(err)
@@ -1475,7 +1483,7 @@ func (o *Object) deleteVersions(ctx context.Context) error {
var versions api.VersionsResponse
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &versions)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1502,7 +1510,7 @@ func (o *Object) deleteVersion(ctx context.Context, ID string) error {
opts.NoResponse = true
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -1653,7 +1661,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
// Remove versions if required
if o.fs.opt.NoVersions {
@@ -1695,7 +1703,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1723,7 +1731,7 @@ func (o *Object) createUploadSession(ctx context.Context, modTime time.Time) (re
err = errors.New(err.Error() + " (is it a OneNote file?)")
}
}
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return response, err
}
@@ -1738,7 +1746,7 @@ func (o *Object) getPosition(ctx context.Context, url string) (pos int64, err er
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return 0, err
@@ -1798,11 +1806,11 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
return true, errors.Wrapf(err, "retry this chunk skipping %d bytes", skip)
}
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
body, err = rest.ReadBody(resp)
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
if resp.StatusCode == 200 || resp.StatusCode == 201 {
// we are done :)
@@ -1825,7 +1833,7 @@ func (o *Object) cancelUploadSession(ctx context.Context, url string) (err error
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return
}
@@ -1896,7 +1904,7 @@ func (o *Object) uploadSinglepart(ctx context.Context, in io.Reader, size int64,
err = errors.New(err.Error() + " (is it a OneNote file?)")
}
}
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
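The new list_chunk option above is interpolated straight into the $top query parameter when listing children. Assuming rclone's usual option-to-flag mapping (backend prefix, underscores become hyphens), it could be set either way; the value 500 is just an example:

rclone lsf --onedrive-list-chunk 500 onedrive:path

# or persistently in rclone.conf
[onedrive]
type = onedrive
list_chunk = 500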

View File

@@ -119,6 +119,7 @@ type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // ID of the file
parent string // ID of the parent directory
modTime time.Time // The modified time of the object if known
md5 string // MD5 hash if known
size int64 // Size of the object
@@ -206,7 +207,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
Path: "/session/login.json",
}
resp, err = f.srv.CallJSON(ctx, &opts, &account, &f.session)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to create session")
@@ -233,7 +234,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// No root so return old f
return f, nil
}
_, err := tempF.newObjectWithInfo(ctx, remote, nil)
_, err := tempF.newObjectWithInfo(ctx, remote, nil, "")
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
@@ -293,7 +294,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
Path: "/folder/remove.json",
}
resp, err := f.srv.CallJSON(ctx, &opts, &removeDirData, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
}
@@ -388,7 +389,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
Path: "/file/move_copy.json",
}
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -445,7 +446,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
Path: "/file/move_copy.json",
}
resp, err = f.srv.CallJSON(ctx, &opts, &copyFileData, &response)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -494,7 +495,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
Path: "/folder/move_copy.json",
}
resp, err = f.srv.CallJSON(ctx, &opts, &moveFolderData, &response)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
fs.Debugf(src, "DirMove error %v", err)
@@ -517,7 +518,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (fs.Object, error) {
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File, parent string) (fs.Object, error) {
// fs.Debugf(nil, "newObjectWithInfo(%s, %v)", remote, file)
var o *Object
@@ -526,6 +527,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (
fs: f,
remote: remote,
id: file.FileID,
parent: parent,
modTime: time.Unix(file.DateModified, 0),
size: file.Size,
md5: file.FileHash,
@@ -548,7 +550,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *File) (
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// fs.Debugf(nil, "NewObject(\"%s\")", remote)
return f.newObjectWithInfo(ctx, remote, nil)
return f.newObjectWithInfo(ctx, remote, nil, "")
}
// Creates from the parameters passed in a half finished Object which
@@ -581,7 +583,7 @@ func (f *Fs) readMetaDataForFolderID(ctx context.Context, id string) (info *Fold
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -631,7 +633,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
Path: "/upload/create_file.json",
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, &createFileData, &response)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to create file")
@@ -657,7 +659,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -683,7 +688,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
Path: "/folder.json",
}
resp, err = f.srv.CallJSON(ctx, &opts, &createDirData, &response)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -711,7 +716,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
Path: "/folder/list.json/" + f.session.SessionID + "/" + pathID,
}
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return "", false, errors.Wrap(err, "failed to get folder list")
@@ -754,7 +759,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
folderList := FolderList{}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &folderList)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get folder list")
@@ -768,6 +773,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
f.dirCache.Put(remote, folder.FolderID)
d := fs.NewDir(remote, time.Unix(folder.DateModified, 0)).SetID(folder.FolderID)
d.SetItems(int64(folder.ChildFolders))
d.SetParentID(directoryID)
entries = append(entries, d)
}
@@ -775,7 +781,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
file.Name = f.opt.Enc.ToStandardName(file.Name)
// fs.Debugf(nil, "File: %s (%s)", file.Name, file.FileID)
remote := path.Join(dir, file.Name)
o, err := f.newObjectWithInfo(ctx, remote, &file)
o, err := f.newObjectWithInfo(ctx, remote, &file, directoryID)
if err != nil {
return nil, err
}
@@ -842,7 +848,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
}
err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
o.modTime = modTime
@@ -862,7 +868,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to open file")
@@ -881,7 +887,7 @@ func (o *Object) Remove(ctx context.Context) error {
Path: "/file.json/" + o.fs.session.SessionID + "/" + o.id,
}
resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
}
@@ -910,7 +916,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/upload/open_file_upload.json",
}
resp, err := o.fs.srv.CallJSON(ctx, &opts, &openUploadData, &openResponse)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to create file")
@@ -954,7 +960,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &reply)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to create file")
@@ -977,7 +983,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/upload/close_file_upload.json",
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, &closeUploadData, &closeResponse)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to create file")
@@ -1003,7 +1009,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
Path: "/file/access.json",
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, &update, nil)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1029,7 +1035,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
o.fs.session.SessionID, directoryID, url.QueryEscape(o.fs.opt.Enc.FromStandardName(leaf))),
}
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &folderList)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to get folder list")
@@ -1053,6 +1059,11 @@ func (o *Object) ID() string {
return o.id
}
// ParentID returns the ID of the Object parent directory if known, or "" if not
func (o *Object) ParentID() string {
return o.parent
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@@ -1063,4 +1074,5 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
)

View File

@@ -213,7 +213,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
doRetry := false
// Check if it is an api.Error
@@ -405,7 +408,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -460,7 +463,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -597,7 +600,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -662,7 +665,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -700,7 +703,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -740,7 +743,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -787,7 +790,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -814,7 +817,7 @@ func (f *Fs) linkDir(ctx context.Context, dirID string, expire fs.Duration) (str
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -838,7 +841,7 @@ func (f *Fs) linkFile(ctx context.Context, path string, expire fs.Duration) (str
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -869,7 +872,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &q)
err = q.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -927,7 +930,7 @@ func (o *Object) getHashes(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return err
@@ -1046,7 +1049,7 @@ func (o *Object) downloadURL(ctx context.Context) (URL string, err error) {
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -1072,7 +1075,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1134,7 +1137,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// opts.Body=0), so upload it as a multipart form POST with
// Content-Length set.
if size == 0 {
formReader, contentType, overhead, err := rest.MultipartUpload(in, opts.Parameters, "content", leaf)
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf)
if err != nil {
return errors.Wrap(err, "failed to make multipart upload for 0 length file")
}
@@ -1151,7 +1154,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
// sometimes pcloud leaves a half complete file on
@@ -1181,7 +1184,7 @@ func (o *Object) Remove(ctx context.Context) error {
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, nil, &result)
err = result.Error.Update(err)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
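One hunk above (in Object.Update) notes that pcloud rejects a zero-length PUT even with Content-Length set, so rclone buffers the empty file into a multipart form POST whose length is known up front. A stand-alone illustration of that idea using plain net/http and mime/multipart (not rclone's rest.MultipartUpload; the URL and field names are made up):

package main

import (
    "bytes"
    "fmt"
    "mime/multipart"
    "net/http"
)

func main() {
    // Buffer the whole multipart form so its total size is known.
    var body bytes.Buffer
    w := multipart.NewWriter(&body)
    if _, err := w.CreateFormFile("content", "empty.txt"); err != nil {
        panic(err)
    }
    // No writes to the part: the file field is genuinely zero bytes.
    if err := w.Close(); err != nil {
        panic(err)
    }

    req, err := http.NewRequest("POST", "https://example.invalid/upload", bytes.NewReader(body.Bytes()))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Content-Type", w.FormDataContentType())
    // NewRequest fills in ContentLength for a *bytes.Reader, so the
    // server sees a sized POST even though the file itself is empty.
    fmt.Println("Content-Length:", req.ContentLength)
}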

View File

@@ -176,7 +176,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -370,7 +373,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -407,7 +410,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -581,7 +584,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -660,7 +663,7 @@ func (f *Fs) move(ctx context.Context, isFile bool, id, oldLeaf, newLeaf, oldDir
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "Move http")
@@ -769,7 +772,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about http")
@@ -896,7 +899,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -934,7 +937,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &info)
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
// Just check the download URL resolves - sometimes
// the URLs returned by premiumize.me don't resolve so
@@ -993,7 +996,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var result api.Response
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "upload file http")
@@ -1035,7 +1038,7 @@ func (f *Fs) renameLeaf(ctx context.Context, isFile bool, id string, newLeaf str
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rename http")
@@ -1060,7 +1063,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
var result api.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "remove http")

View File

@@ -1,6 +1,7 @@
package putio
import (
"context"
"fmt"
"net/http"
@@ -29,7 +30,10 @@ func (e *statusCodeError) Temporary() bool {
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err == nil {
return false, nil
}

View File

@@ -147,7 +147,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "creating folder. part: %s, parentID: %d", leaf, parentID)
entry, err = f.client.Files.CreateFolder(ctx, f.opt.Enc.FromStandardName(leaf), parentID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return itoa(entry.ID), err
}
@@ -164,7 +164,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing file: %d", fileID)
children, _, err = f.client.Files.List(ctx, fileID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
@@ -205,7 +205,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files inside List: %d", parentID)
children, _, err = f.client.Files.List(ctx, parentID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -271,7 +271,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting file: %d", fileID)
entry, err = f.client.Files.Get(ctx, fileID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -295,7 +295,7 @@ func (f *Fs) createUpload(ctx context.Context, name string, size int64, parentID
req.Header.Set("upload-metadata", fmt.Sprintf("name %s,no-torrent %s,parent_id %s,updated-at %s", b64name, b64true, b64parentID, b64modifiedAt))
fs.OpenOptionAddHTTPHeaders(req.Header, options)
resp, err := f.oAuthClient.Do(req)
retry, err := shouldRetry(err)
retry, err := shouldRetry(ctx, err)
if retry {
return true, err
}
@@ -320,7 +320,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
err = f.pacer.Call(func() (bool, error) {
fs.Debugf(f, "Sending zero length chunk")
_, fileID, err = f.transferChunk(ctx, location, 0, bytes.NewReader([]byte{}), 0)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
return
}
@@ -344,13 +344,13 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
// Get file offset and seek to the position
offset, err := f.getServerOffset(ctx, location)
if err != nil {
return shouldRetry(err)
return shouldRetry(ctx, err)
}
sentBytes := offset - chunkStart
fs.Debugf(f, "sentBytes: %d", sentBytes)
_, err = chunk.Seek(sentBytes, io.SeekStart)
if err != nil {
return shouldRetry(err)
return shouldRetry(ctx, err)
}
transferOffset = offset
reqSize = chunkSize - sentBytes
@@ -367,7 +367,7 @@ func (f *Fs) sendUpload(ctx context.Context, location string, size int64, in io.
offsetMismatch = true
return true, errors.New("connection broken")
}
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -479,7 +479,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "listing files: %d", dirID)
children, _, err = f.client.Files.List(ctx, dirID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "Rmdir")
@@ -493,7 +493,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "deleting file: %d", dirID)
err = f.client.Files.Delete(ctx, dirID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
f.dirCache.FlushDir(dir)
return err
@@ -552,7 +552,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -591,7 +591,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -631,7 +631,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "moving file (%s) to parent_id: %s", srcID, dstDirectoryID)
_, err = f.client.Do(req, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
srcFs.dirCache.FlushDir(srcRemote)
return err
@@ -644,7 +644,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
err = f.pacer.Call(func() (bool, error) {
// fs.Debugf(f, "getting account info")
ai, err = f.client.Account.Info(ctx)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "about failed")
@@ -678,6 +678,6 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
}
// fs.Debugf(f, "emptying trash")
_, err = f.client.Do(req, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}

View File

@@ -145,7 +145,7 @@ func (o *Object) readEntry(ctx context.Context) (f *putio.File, err error) {
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode == 404 {
return false, fs.ErrorObjectNotFound
}
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -220,7 +220,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var storageURL string
err = o.fs.pacer.Call(func() (bool, error) {
storageURL, err = o.fs.client.Files.URL(ctx, o.file.ID, true)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return
@@ -231,7 +231,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
req, err := http.NewRequestWithContext(ctx, http.MethodGet, storageURL, nil)
if err != nil {
return shouldRetry(err)
return shouldRetry(ctx, err)
}
req.Header.Set("User-Agent", o.fs.client.UserAgent)
@@ -241,7 +241,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
// fs.Debugf(o, "opening file: id=%d", o.file.ID)
resp, err = o.fs.httpClient.Do(req)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if perr, ok := err.(*putio.ErrorResponse); ok && perr.Response.StatusCode >= 400 && perr.Response.StatusCode <= 499 {
_ = resp.Body.Close()
@@ -283,6 +283,6 @@ func (o *Object) Remove(ctx context.Context) (err error) {
return o.fs.pacer.Call(func() (bool, error) {
// fs.Debugf(o, "removing file: id=%d", o.file.ID)
err = o.fs.client.Files.Delete(ctx, o.file.ID)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}

View File

@@ -1399,7 +1399,10 @@ var retryErrorCodes = []int{
// S3 is pretty resilient, and the built-in retry handling is probably sufficient
// as it should notice closed connections and timeouts which are the most likely
// sort of failure modes
func (f *Fs) shouldRetry(err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// If this is an awserr object, try and extract more useful information to determine if we should retry
if awsError, ok := err.(awserr.Error); ok {
// Simple case, check the original embedded error in case it's generically retryable
@@ -1411,7 +1414,7 @@ func (f *Fs) shouldRetry(err error) (bool, error) {
// 301 if wrong region for bucket - can only update if running from a bucket
if f.rootBucket != "" {
if reqErr.StatusCode() == http.StatusMovedPermanently {
urfbErr := f.updateRegionForBucket(f.rootBucket)
urfbErr := f.updateRegionForBucket(ctx, f.rootBucket)
if urfbErr != nil {
fs.Errorf(f, "Failed to update region for bucket: %v", urfbErr)
return false, err
@@ -1559,6 +1562,8 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (*s3.S
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
// Enable loading config options from ~/.aws/config (selected by AWS_PROFILE env)
awsSessionOpts.SharedConfigState = session.SharedConfigEnable
// Set the name of the profile if supplied
awsSessionOpts.Profile = opt.Profile
}
ses, err := session.NewSessionWithOptions(awsSessionOpts)
if err != nil {
@@ -1741,7 +1746,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
}
// Gets the bucket location
func (f *Fs) getBucketLocation(bucket string) (string, error) {
func (f *Fs) getBucketLocation(ctx context.Context, bucket string) (string, error) {
req := s3.GetBucketLocationInput{
Bucket: &bucket,
}
@@ -1749,7 +1754,7 @@ func (f *Fs) getBucketLocation(bucket string) (string, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.GetBucketLocation(&req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return "", err
@@ -1759,8 +1764,8 @@ func (f *Fs) getBucketLocation(bucket string) (string, error) {
// Updates the region for the bucket by reading the region from the
// bucket then updating the session.
func (f *Fs) updateRegionForBucket(bucket string) error {
region, err := f.getBucketLocation(bucket)
func (f *Fs) updateRegionForBucket(ctx context.Context, bucket string) error {
region, err := f.getBucketLocation(ctx, bucket)
if err != nil {
return errors.Wrap(err, "reading bucket location failed")
}
@@ -1854,7 +1859,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}
}
}
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -2001,7 +2006,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
var resp *s3.ListBucketsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListBucketsWithContext(ctx, &req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
@@ -2116,7 +2121,7 @@ func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.HeadBucketWithContext(ctx, &req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err == nil {
return true, nil
@@ -2152,7 +2157,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.CreateBucketWithContext(ctx, &req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Bucket %q created with ACL %q", bucket, f.opt.BucketACL)
@@ -2182,7 +2187,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.c.DeleteBucketWithContext(ctx, &req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Bucket %q deleted", bucket)
@@ -2242,7 +2247,7 @@ func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPa
}
return f.pacer.Call(func() (bool, error) {
_, err := f.c.CopyObjectWithContext(ctx, req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
}
@@ -2286,7 +2291,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
if err := f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
}); err != nil {
return err
}
@@ -2302,7 +2307,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
UploadId: uid,
RequestPayer: req.RequestPayer,
})
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
})()
@@ -2325,7 +2330,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
uploadPartReq.CopySourceRange = aws.String(calculateRange(partSize, partNum-1, numParts, srcSize))
uout, err := f.c.UploadPartCopyWithContext(ctx, uploadPartReq)
if err != nil {
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
}
parts = append(parts, &s3.CompletedPart{
PartNumber: &partNum,
@@ -2347,7 +2352,7 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
RequestPayer: req.RequestPayer,
UploadId: uid,
})
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
}
@@ -2578,7 +2583,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
reqCopy.Key = &bucketPath
err = f.pacer.Call(func() (bool, error) {
_, err = f.c.RestoreObject(&reqCopy)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
st.Status = err.Error()
@@ -2626,7 +2631,7 @@ func (f *Fs) listMultipartUploads(ctx context.Context, bucket, key string) (uplo
var resp *s3.ListMultipartUploadsOutput
err = f.pacer.Call(func() (bool, error) {
resp, err = f.c.ListMultipartUploads(&req)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrapf(err, "list multipart uploads bucket %q key %q", bucket, key)
@@ -2801,7 +2806,7 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
err = o.fs.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObjectWithContext(ctx, &req)
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -2957,7 +2962,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var err error
httpReq.HTTPRequest = httpReq.HTTPRequest.WithContext(ctx)
err = httpReq.Send()
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
if err, ok := err.(awserr.RequestFailure); ok {
if err.Code() == "InvalidObjectState" {
@@ -3016,7 +3021,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
err = f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, &mReq)
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to initialise")
@@ -3035,7 +3040,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
UploadId: uid,
RequestPayer: req.RequestPayer,
})
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if errCancel != nil {
fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel)
@@ -3111,7 +3116,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq)
if err != nil {
if partNum <= int64(concurrency) {
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
}
// retry all chunks once have done the first batch
return true, err
@@ -3151,7 +3156,7 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
RequestPayer: req.RequestPayer,
UploadId: uid,
})
return f.shouldRetry(err)
return f.shouldRetry(ctx, err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to finalise")
@@ -3306,11 +3311,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var err error
resp, err = o.fs.srv.Do(httpReq)
if err != nil {
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
}
body, err := rest.ReadBody(resp)
if err != nil {
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
}
if resp.StatusCode >= 200 && resp.StatusCode < 299 {
return false, nil
@@ -3361,7 +3366,7 @@ func (o *Object) Remove(ctx context.Context) error {
}
err := o.fs.pacer.Call(func() (bool, error) {
_, err := o.fs.c.DeleteObjectWithContext(ctx, &req)
return o.fs.shouldRetry(err)
return o.fs.shouldRetry(ctx, err)
})
return err
}
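The two added lines above pass Options.Profile through to the AWS shared-config session, but only when env_auth is enabled and no explicit keys are set. Assuming rclone's config file syntax, a remote exercising this might look like the following (remote and profile names are made up):

[s3work]
type = s3
provider = AWS
env_auth = true
profile = work

With this in place the credentials come from ~/.aws/config and ~/.aws/credentials for the named profile, as if AWS_PROFILE=work were exported.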

View File

@@ -372,7 +372,6 @@ func Config(ctx context.Context, name string, m configmap.Mapper) {
m.Set(configAuthToken, token)
// And delete any previous entry for password
m.Set(configPassword, "")
config.SaveConfig()
// And we're done here
break
}
@@ -408,7 +407,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// For 429 errors look at the Retry-After: header and
// set the retry appropriately, starting with a minimum of 1
// second if it isn't set.
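Unlike the generic helpers elsewhere in this set, the seafile shouldRetry also honours Retry-After on 429 responses with a one second floor, as the comment above says. A stand-alone sketch of that header handling (seconds form only; Retry-After may also carry an HTTP date, which this toy version ignores):

package main

import (
    "fmt"
    "net/http"
    "strconv"
    "time"
)

// retryAfter returns how long to wait before retrying a rate-limited
// response: the Retry-After value when present and larger than the
// floor, otherwise a one second minimum.
func retryAfter(resp *http.Response) time.Duration {
    const floor = time.Second
    if resp == nil || resp.StatusCode != http.StatusTooManyRequests {
        return 0 // not rate limited
    }
    if s := resp.Header.Get("Retry-After"); s != "" {
        if secs, err := strconv.Atoi(s); err == nil && time.Duration(secs)*time.Second > floor {
            return time.Duration(secs) * time.Second
        }
    }
    return floor
}

func main() {
    resp := &http.Response{
        StatusCode: http.StatusTooManyRequests,
        Header:     http.Header{"Retry-After": []string{"5"}},
    }
    fmt.Println(retryAfter(resp)) // 5s
    resp.Header.Del("Retry-After")
    fmt.Println(retryAfter(resp)) // 1s floor
}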

View File

@@ -86,7 +86,7 @@ func (f *Fs) getServerInfo(ctx context.Context) (account *api.ServerInfo, err er
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -112,7 +112,7 @@ func (f *Fs) getUserAccountInfo(ctx context.Context) (account *api.AccountInfo,
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -139,7 +139,7 @@ func (f *Fs) getLibraries(ctx context.Context) ([]api.Library, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -170,7 +170,7 @@ func (f *Fs) createLibrary(ctx context.Context, libraryName, password string) (l
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -197,7 +197,7 @@ func (f *Fs) deleteLibrary(ctx context.Context, libraryID string) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -228,7 +228,7 @@ func (f *Fs) decryptLibrary(ctx context.Context, libraryID, password string) err
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -271,7 +271,7 @@ func (f *Fs) getDirectoryEntriesAPIv21(ctx context.Context, libraryID, dirPath s
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -316,7 +316,7 @@ func (f *Fs) getDirectoryDetails(ctx context.Context, libraryID, dirPath string)
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -358,7 +358,7 @@ func (f *Fs) createDir(ctx context.Context, libraryID, dirPath string) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -398,7 +398,7 @@ func (f *Fs) renameDir(ctx context.Context, libraryID, dirPath, newName string)
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -438,7 +438,7 @@ func (f *Fs) moveDir(ctx context.Context, srcLibraryID, srcDir, srcName, dstLibr
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -474,7 +474,7 @@ func (f *Fs) deleteDir(ctx context.Context, libraryID, filePath string) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -505,7 +505,7 @@ func (f *Fs) getFileDetails(ctx context.Context, libraryID, filePath string) (*a
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -539,7 +539,7 @@ func (f *Fs) deleteFile(ctx context.Context, libraryID, filePath string) error {
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to delete file")
@@ -565,7 +565,7 @@ func (f *Fs) getDownloadLink(ctx context.Context, libraryID, filePath string) (s
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -614,7 +614,7 @@ func (f *Fs) download(ctx context.Context, url string, size int64, options ...fs
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -659,7 +659,7 @@ func (f *Fs) getUploadLink(ctx context.Context, libraryID string) (string, error
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -682,7 +682,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, uploadLink, filePath stri
"need_idx_progress": {"true"},
"replace": {"1"},
}
formReader, contentType, _, err := rest.MultipartUpload(in, parameters, "file", f.opt.Enc.FromStandardName(filename))
formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename))
if err != nil {
return nil, errors.Wrap(err, "failed to make multipart upload")
}
@@ -739,7 +739,7 @@ func (f *Fs) listShareLinks(ctx context.Context, libraryID, remote string) ([]ap
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -777,7 +777,7 @@ func (f *Fs) createShareLink(ctx context.Context, libraryID, remote string) (*ap
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -818,7 +818,7 @@ func (f *Fs) copyFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID,
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -860,7 +860,7 @@ func (f *Fs) moveFile(ctx context.Context, srcLibraryID, srcPath, dstLibraryID,
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -900,7 +900,7 @@ func (f *Fs) renameFile(ctx context.Context, libraryID, filePath, newname string
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &request, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -938,7 +938,7 @@ func (f *Fs) emptyLibraryTrash(ctx context.Context, libraryID string) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -976,7 +976,7 @@ func (f *Fs) getDirectoryEntriesAPIv2(ctx context.Context, libraryID, dirPath st
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -1030,7 +1030,7 @@ func (f *Fs) copyFileAPIv2(ctx context.Context, srcLibraryID, srcPath, dstLibrar
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
@@ -1075,7 +1075,7 @@ func (f *Fs) renameFileAPIv2(ctx context.Context, libraryID, filePath, newname s
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {

View File

@@ -16,6 +16,7 @@ import (
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
@@ -204,6 +205,25 @@ Fstat instead of Stat which is called on an already open file handle.
It has been found that this helps with IBM Sterling SFTP servers which have
"extractability" level set to 1 which means only 1 file can be opened at
any given time.
`,
Advanced: true,
}, {
Name: "disable_concurrent_reads",
Default: false,
Help: `If set don't use concurrent reads.

Normally concurrent reads are safe to use and not using them will
degrade performance, so this option is disabled by default.

Some servers limit the number of times a file can be
downloaded. Using concurrent reads can trigger this limit, so if you
have a server which returns

    Failed to copy: file does not exist

then you may need to enable this flag.

If concurrent reads are disabled, the use_fstat option is ignored.
`,
Advanced: true,
}, {
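For reference, and assuming rclone's usual backend flag naming, the option documented above would be enabled like this (remote name is made up):

rclone copy --sftp-disable-concurrent-reads mysftp:dir /local/dir

# or persistently in rclone.conf
[mysftp]
type = sftp
host = example.com
disable_concurrent_reads = true

Note that a later hunk in this file parks the option behind a FIXME after the sftp library reversion, so in this build setting it only logs a warning.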
@@ -224,28 +244,29 @@ Set to 0 to keep connections indefinitely.
// Options defines the configuration for this backend
type Options struct {
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
IdleTimeout fs.Duration `config:"idle_timeout"`
Host string `config:"host"`
User string `config:"user"`
Port string `config:"port"`
Pass string `config:"pass"`
KeyPem string `config:"key_pem"`
KeyFile string `config:"key_file"`
KeyFilePass string `config:"key_file_pass"`
PubKeyFile string `config:"pubkey_file"`
KnownHostsFile string `config:"known_hosts_file"`
KeyUseAgent bool `config:"key_use_agent"`
UseInsecureCipher bool `config:"use_insecure_cipher"`
DisableHashCheck bool `config:"disable_hashcheck"`
AskPassword bool `config:"ask_password"`
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
UseFstat bool `config:"use_fstat"`
DisableConcurrentReads bool `config:"disable_concurrent_reads"`
IdleTimeout fs.Duration `config:"idle_timeout"`
}
// Fs stores the interface to the remote SFTP files
@@ -266,6 +287,7 @@ type Fs struct {
drain *time.Timer // used to drain the pool when we stop using the connections
pacer *fs.Pacer // pacer for operations
savedpswd string
transfers int32 // count in use references
}
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -328,6 +350,23 @@ func (c *conn) closed() error {
return nil
}
// Show that we are doing an upload or download
//
// Call removeTransfer() when done
func (f *Fs) addTransfer() {
atomic.AddInt32(&f.transfers, 1)
}
// Show the upload or download done
func (f *Fs) removeTransfer() {
atomic.AddInt32(&f.transfers, -1)
}
// getTransfers returns the number of transfers in progress
func (f *Fs) getTransfers() int32 {
return atomic.LoadInt32(&f.transfers)
}
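These three helpers carry the fix: an atomic count of in-flight transfers that drainPool (next hunk) consults before closing idle connections, so a long upload or download no longer has its connection swept away mid-transfer. A stand-alone sketch of the idea (the pool type and names here are illustrative):

package main

import (
    "fmt"
    "sync/atomic"
)

type pool struct {
    transfers int32    // in-use references, updated atomically
    conns     []string // stand-in for pooled connections
}

func (p *pool) addTransfer()    { atomic.AddInt32(&p.transfers, 1) }
func (p *pool) removeTransfer() { atomic.AddInt32(&p.transfers, -1) }

// drainIfIdle empties the pool only when nothing is transferring,
// mirroring the check drainPool gains in this commit.
func (p *pool) drainIfIdle() {
    if n := atomic.LoadInt32(&p.transfers); n != 0 {
        fmt.Printf("keeping %d conns: %d transfers in progress\n", len(p.conns), n)
        return
    }
    p.conns = nil
    fmt.Println("pool drained")
}

func main() {
    p := &pool{conns: []string{"c1", "c2"}}
    p.addTransfer()
    p.drainIfIdle() // kept: a transfer is running
    p.removeTransfer()
    p.drainIfIdle() // drained: idle again
}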
// Open a new connection to the SFTP server.
func (f *Fs) sftpConnection(ctx context.Context) (c *conn, err error) {
// Rate limit rate of new connections
@@ -373,7 +412,14 @@ func (f *Fs) newSftpClient(conn *ssh.Client, opts ...sftp.ClientOption) (*sftp.C
}
}
opts = opts[:len(opts):len(opts)] // make sure we don't overwrite the callers opts
opts = append(opts, sftp.UseFstat(f.opt.UseFstat))
opts = append(opts,
sftp.UseFstat(f.opt.UseFstat),
// FIXME disabled after library reversion
// sftp.UseConcurrentReads(!f.opt.DisableConcurrentReads),
)
if f.opt.DisableConcurrentReads { // FIXME
fs.Errorf(f, "Ignoring disable_concurrent_reads after library reversion - see #5197")
}
return sftp.NewClientPipe(pr, pw, opts...)
}
@@ -451,6 +497,13 @@ func (f *Fs) putSftpConnection(pc **conn, err error) {
func (f *Fs) drainPool(ctx context.Context) (err error) {
f.poolMu.Lock()
defer f.poolMu.Unlock()
if transfers := f.getTransfers(); transfers != 0 {
fs.Debugf(f, "Not closing %d unused connections as %d transfers in progress", len(f.pool), transfers)
if f.opt.IdleTimeout > 0 {
f.drain.Reset(time.Duration(f.opt.IdleTimeout)) // nudge on the pool emptying timer
}
return nil
}
if f.opt.IdleTimeout > 0 {
f.drain.Stop()
}
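The fix for #5197 hinges on this pairing: every upload and download bumps an atomic counter, and drainPool refuses to close pooled connections while that counter is non-zero, re-arming the idle timer instead of draining. A minimal standalone sketch of the same pattern (illustrative names, not the rclone types):

    package main

    import (
        "fmt"
        "sync/atomic"
        "time"
    )

    type pool struct {
        transfers int32       // in-flight uploads/downloads
        drain     *time.Timer // fires when the pool may be idle
    }

    func (p *pool) addTransfer()    { atomic.AddInt32(&p.transfers, 1) }
    func (p *pool) removeTransfer() { atomic.AddInt32(&p.transfers, -1) }

    func (p *pool) drainIfIdle(idle time.Duration) {
        if n := atomic.LoadInt32(&p.transfers); n != 0 {
            // Transfers still running: keep connections, check again later.
            fmt.Printf("not draining: %d transfers in progress\n", n)
            p.drain.Reset(idle)
            return
        }
        fmt.Println("pool idle: closing connections")
    }

    func main() {
        p := &pool{}
        idle := 200 * time.Millisecond
        p.drain = time.AfterFunc(idle, func() { p.drainIfIdle(idle) })
        p.addTransfer()                    // a long download starts
        time.Sleep(300 * time.Millisecond) // timer fires mid-transfer: pool survives
        p.removeTransfer()                 // download finishes
        time.Sleep(300 * time.Millisecond) // timer fires again: pool drained
    }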
@@ -1331,18 +1384,19 @@ func (o *Object) stat(ctx context.Context) error {
//
// it also updates the info field
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
if o.fs.opt.SetModTime {
c, err := o.fs.getSftpConnection(ctx)
if err != nil {
return errors.Wrap(err, "SetModTime")
}
err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
o.fs.putSftpConnection(&c, err)
if err != nil {
return errors.Wrap(err, "SetModTime failed")
}
if !o.fs.opt.SetModTime {
return nil
}
err := o.stat(ctx)
c, err := o.fs.getSftpConnection(ctx)
if err != nil {
return errors.Wrap(err, "SetModTime")
}
err = c.sftpClient.Chtimes(o.path(), modTime, modTime)
o.fs.putSftpConnection(&c, err)
if err != nil {
return errors.Wrap(err, "SetModTime failed")
}
err = o.stat(ctx)
if err != nil {
return errors.Wrap(err, "SetModTime stat failed")
}
@@ -1356,18 +1410,22 @@ func (o *Object) Storable() bool {
// objectReader represents a file open for reading on the SFTP server
type objectReader struct {
f *Fs
sftpFile *sftp.File
pipeReader *io.PipeReader
done chan struct{}
}
func newObjectReader(sftpFile *sftp.File) *objectReader {
func (f *Fs) newObjectReader(sftpFile *sftp.File) *objectReader {
pipeReader, pipeWriter := io.Pipe()
file := &objectReader{
f: f,
sftpFile: sftpFile,
pipeReader: pipeReader,
done: make(chan struct{}),
}
// Show connection in use
f.addTransfer()
go func() {
// Use sftpFile.WriteTo to pump data so that it gets a
@@ -1397,6 +1455,8 @@ func (file *objectReader) Close() (err error) {
_ = file.pipeReader.Close()
// Wait for the background process to finish
<-file.done
// Show connection no longer in use
file.f.removeTransfer()
return err
}
@@ -1430,12 +1490,14 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return nil, errors.Wrap(err, "Open Seek failed")
}
}
in = readers.NewLimitedReadCloser(newObjectReader(sftpFile), limit)
in = readers.NewLimitedReadCloser(o.fs.newObjectReader(sftpFile), limit)
return in, nil
}
// Update a remote sftp file using the data <in> and ModTime from <src>
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
o.fs.addTransfer() // Show transfer in progress
defer o.fs.removeTransfer()
// Clear the hash cache since we are about to update the object
o.md5sum = nil
o.sha1sum = nil
@@ -1473,10 +1535,28 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
remove()
return errors.Wrap(err, "Update Close failed")
}
// Set the mod time - this stats the object if o.fs.opt.SetModTime == true
err = o.SetModTime(ctx, src.ModTime(ctx))
if err != nil {
return errors.Wrap(err, "Update SetModTime failed")
}
// Stat the file after the upload to read its stats back if o.fs.opt.SetModTime == false
if !o.fs.opt.SetModTime {
err = o.stat(ctx)
if err == fs.ErrorObjectNotFound {
// In the specific case of o.fs.opt.SetModTime == false
// if the object wasn't found then don't return an error
fs.Debugf(o, "Not found after upload with set_modtime=false so returning best guess")
o.modTime = src.ModTime(ctx)
o.size = src.Size()
o.mode = os.FileMode(0666) // regular file
} else if err != nil {
return errors.Wrap(err, "Update stat failed")
}
}
return nil
}

View File

@@ -299,7 +299,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
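This two-line guard is the template repeated across every backend in this change: before applying the usual retry heuristics, check whether the error really stems from a cancelled or expired context, in which case retrying can never succeed. A stdlib-only sketch of the idea (rclone's actual helper, fserrors.ContextError, additionally rewrites the error in place):

    package main

    import (
        "context"
        "errors"
        "fmt"
    )

    // shouldRetrySketch mirrors the guard added above: a dead context
    // fails fast, anything else falls through to normal retry logic.
    func shouldRetrySketch(ctx context.Context, err error) (bool, error) {
        if ctxErr := ctx.Err(); ctxErr != nil {
            return false, ctxErr // never retry a cancelled/expired context
        }
        return err != nil, err // placeholder for the real heuristics
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel()
        retry, err := shouldRetrySketch(ctx, errors.New("HTTP 500"))
        fmt.Println(retry, err) // false context canceled
    }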
@@ -324,7 +327,7 @@ func (f *Fs) readMetaDataForIDPath(ctx context.Context, id, path string, directo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
@@ -631,7 +634,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &req, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", errors.Wrap(err, "CreateDir")
@@ -663,7 +666,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, fi
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -912,7 +915,7 @@ func (f *Fs) updateItem(ctx context.Context, id, leaf, directoryID string, modTi
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1133,7 +1136,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
var info *api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1294,7 +1297,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var dl api.DownloadSpecification
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "open: fetch download specification")
@@ -1309,7 +1312,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "open")
@@ -1365,7 +1368,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &req, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "upload get specification")
@@ -1390,7 +1393,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var finish api.UploadFinishResponse
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &finish)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "upload file")
@@ -1426,7 +1429,7 @@ func (f *Fs) remove(ctx context.Context, id string) (err error) {
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "remove")

View File

@@ -155,7 +155,7 @@ func (up *largeUpload) finish(ctx context.Context) error {
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.Call(ctx, &opts)
if err != nil {
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
}
respBody, err = rest.ReadBody(resp)
// retry all errors now that the multipart upload has started

View File

@@ -111,7 +111,7 @@ func init() {
// FIXME
//err = f.pacer.Call(func() (bool, error) {
resp, err = srv.CallXML(context.Background(), &opts, &authRequest, nil)
// return shouldRetry(resp, err)
// return shouldRetry(ctx, resp, err)
//})
if err != nil {
log.Fatalf("Failed to get token: %v", err)
@@ -248,7 +248,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -288,7 +291,7 @@ func (f *Fs) readMetaDataForID(ctx context.Context, ID string) (info *api.File,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil && resp.StatusCode == http.StatusNotFound {
@@ -325,7 +328,7 @@ func (f *Fs) getAuthToken(ctx context.Context) error {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &authRequest, &authResponse)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to get authorization")
@@ -373,7 +376,7 @@ func (f *Fs) getUser(ctx context.Context) (user *api.User, err error) {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &user)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "failed to get user")
@@ -567,7 +570,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, mkdir, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -618,7 +621,7 @@ OUTER:
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -774,7 +777,7 @@ func (f *Fs) delete(ctx context.Context, isFile bool, id string, remote string,
}
return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
// Move file/dir to deleted files if not hard delete
@@ -880,7 +883,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &copyFile, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -934,7 +937,7 @@ func (f *Fs) moveFile(ctx context.Context, id, leaf, directoryID string) (info *
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &move, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -964,7 +967,7 @@ func (f *Fs) moveDir(ctx context.Context, id, leaf, directoryID string) (err err
var resp *http.Response
return f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &move, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
}
@@ -1053,7 +1056,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var info *api.File
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &linkFile, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -1182,7 +1185,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1204,7 +1207,7 @@ func (f *Fs) createFile(ctx context.Context, pathID, leaf, mimeType string) (new
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, &mkdir, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return "", err
@@ -1262,7 +1265,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "failed to upload file")

View File

@@ -13,6 +13,7 @@ import (
"strings"
"time"
"github.com/google/uuid"
"github.com/ncw/swift/v2"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
@@ -291,7 +292,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(err error) (bool, error) {
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// If this is a swift.Error object extract the HTTP error code
if swiftError, ok := err.(*swift.Error); ok {
for _, e := range retryErrorCodes {
@@ -307,7 +311,7 @@ func shouldRetry(err error) (bool, error) {
// shouldRetryHeaders returns a boolean as to whether this err
// deserves to be retried. It reads the headers passed in looking for
// `Retry-After`. It returns the err as a convenience
func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
func shouldRetryHeaders(ctx context.Context, headers swift.Headers, err error) (bool, error) {
if swiftError, ok := err.(*swift.Error); ok && swiftError.StatusCode == 429 {
if value := headers["Retry-After"]; value != "" {
retryAfter, parseErr := strconv.Atoi(value)
@@ -326,7 +330,7 @@ func shouldRetryHeaders(headers swift.Headers, err error) (bool, error) {
}
}
}
return shouldRetry(err)
return shouldRetry(ctx, err)
}
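With the context check delegated to shouldRetry, shouldRetryHeaders keeps its existing job: honour a 429 Retry-After header by sleeping through short waits inline and surfacing long ones as an error. A standalone sketch of that decision with an illustrative one-minute threshold (the real code uses rclone's pacer and error types):

    package main

    import (
        "errors"
        "fmt"
        "strconv"
        "time"
    )

    // retryAfter decides what to do with a rate-limited response:
    // sleep and retry for short waits, give up for long ones.
    func retryAfter(headers map[string]string) (bool, error) {
        secs, err := strconv.Atoi(headers["Retry-After"])
        if err != nil {
            return false, err
        }
        wait := time.Duration(secs) * time.Second
        if wait <= time.Minute { // short: wait it out, then retry
            time.Sleep(wait)
            return true, nil
        }
        // long: don't block the worker, report the backoff instead
        return false, errors.New("rate limited, retry after " + wait.String())
    }

    func main() {
        retry, err := retryAfter(map[string]string{"Retry-After": "1"})
        fmt.Println(retry, err) // true <nil>, after a 1s pause
    }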
// parsePath parses a remote 'url'
@@ -468,7 +472,7 @@ func NewFsWithConnection(ctx context.Context, opt *Options, name, root string, c
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
info, rxHeaders, err = f.c.Object(ctx, f.rootContainer, encodedDirectory)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err == nil && info.ContentType != directoryMarkerContentType {
newRoot := path.Dir(f.root)
@@ -576,7 +580,7 @@ func (f *Fs) listContainerRoot(ctx context.Context, container, directory, prefix
var err error
err = f.pacer.Call(func() (bool, error) {
objects, err = f.c.Objects(ctx, container, opts)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err == nil {
for i := range objects {
@@ -661,7 +665,7 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
var containers []swift.Container
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
@@ -753,7 +757,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
containers, err = f.c.ContainersAll(ctx, nil)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, errors.Wrap(err, "container listing failed")
@@ -805,7 +809,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = f.c.Container(ctx, container)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
}
if err == swift.ContainerNotFound {
@@ -815,7 +819,7 @@ func (f *Fs) makeContainer(ctx context.Context, container string) error {
}
err = f.pacer.Call(func() (bool, error) {
err = f.c.ContainerCreate(ctx, container, headers)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Container %q created", container)
@@ -836,7 +840,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
err := f.cache.Remove(container, func() error {
err := f.pacer.Call(func() (bool, error) {
err := f.c.ContainerDelete(ctx, container)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err == nil {
fs.Infof(f, "Container %q removed", container)
@@ -902,18 +906,125 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
srcContainer, srcPath := srcObj.split()
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(ctx, srcContainer, srcPath, dstContainer, dstPath, nil)
return shouldRetryHeaders(rxHeaders, err)
})
isLargeObject, err := srcObj.isLargeObject(ctx)
if err != nil {
return nil, err
}
if isLargeObject {
/*handle large object*/
err = copyLargeObject(ctx, f, srcObj, dstContainer, dstPath)
} else {
srcContainer, srcPath := srcObj.split()
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(ctx, srcContainer, srcPath, dstContainer, dstPath, nil)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
}
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
}
func copyLargeObject(ctx context.Context, f *Fs, src *Object, dstContainer string, dstPath string) error {
segmentsContainer := dstContainer + "_segments"
err := f.makeContainer(ctx, segmentsContainer)
if err != nil {
return err
}
segments, err := src.getSegmentsLargeObject(ctx)
if err != nil {
return err
}
if len(segments) == 0 {
return errors.New("could not copy object, list of segments is empty")
}
nanoSeconds := time.Now().Nanosecond()
prefixSegment := fmt.Sprintf("%v/%v/%s", nanoSeconds, src.size, strings.ReplaceAll(uuid.New().String(), "-", ""))
copiedSegmentsLen := 10
for _, value := range segments {
if len(value) <= 0 {
continue
}
fragment := value[0]
if len(fragment) <= 0 {
continue
}
copiedSegmentsLen = len(value)
firstIndex := strings.Index(fragment, "/")
if firstIndex < 0 {
firstIndex = 0
} else {
firstIndex = firstIndex + 1
}
lastIndex := strings.LastIndex(fragment, "/")
if lastIndex < 0 {
lastIndex = len(fragment)
} else {
lastIndex = lastIndex - 1
}
prefixSegment = fragment[firstIndex:lastIndex]
break
}
copiedSegments := make([]string, copiedSegmentsLen)
defer handleCopyFail(ctx, f, segmentsContainer, copiedSegments, err)
for c, ss := range segments {
if len(ss) <= 0 {
continue
}
for _, s := range ss {
lastIndex := strings.LastIndex(s, "/")
if lastIndex <= 0 {
lastIndex = 0
} else {
lastIndex = lastIndex + 1
}
segmentName := dstPath + "/" + prefixSegment + "/" + s[lastIndex:]
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectCopy(ctx, c, s, segmentsContainer, segmentName, nil)
copiedSegments = append(copiedSegments, segmentName)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err != nil {
return err
}
}
}
m := swift.Metadata{}
headers := m.ObjectHeaders()
headers["X-Object-Manifest"] = urlEncode(fmt.Sprintf("%s/%s/%s", segmentsContainer, dstPath, prefixSegment))
headers["Content-Length"] = "0"
emptyReader := bytes.NewReader(nil)
err = f.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = f.c.ObjectPut(ctx, dstContainer, dstPath, emptyReader, true, "", src.contentType, headers)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
return err
}
// handleCopyFail removes the copied segments if the copy process failed
func handleCopyFail(ctx context.Context, f *Fs, segmentsContainer string, segments []string, err error) {
fs.Debugf(f, "handling failed large object copy")
if err == nil {
return
}
if len(segmentsContainer) == 0 {
fs.Debugf(f, "invalid segments container")
return
}
if len(segments) == 0 {
fs.Debugf(f, "segments is empty")
return
}
fs.Debugf(f, "deleting copied segments")
for _, v := range segments {
_ = f.c.ObjectDelete(ctx, segmentsContainer, v)
}
}
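A single ObjectCopy is not enough for a Swift large object: it would duplicate only the zero-byte manifest, not the segment data it points to. copyLargeObject therefore copies every segment into the destination's "_segments" container under a fresh prefix (nanoseconds/size/uuid) and finishes by writing a new empty manifest whose X-Object-Manifest header references that prefix. Roughly what the final manifest write sets (values illustrative):

    package main

    import "fmt"

    func main() {
        // Illustrative values: the real prefix embeds nanoseconds,
        // object size and a uuid, as built in copyLargeObject above.
        manifest := "dest_segments/path/to/object/1617712345/2048/abcd1234"
        headers := map[string]string{
            "X-Object-Manifest": manifest, // where the copied segments live
            "Content-Length":    "0",      // the manifest object has no body
        }
        fmt.Println(headers)
    }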
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
@@ -1041,7 +1152,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
container, containerPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
info, h, err = o.fs.c.Object(ctx, container, containerPath)
return shouldRetryHeaders(h, err)
return shouldRetryHeaders(ctx, h, err)
})
if err != nil {
if err == swift.ObjectNotFound {
@@ -1100,7 +1211,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
container, containerPath := o.split()
return o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectUpdate(ctx, container, containerPath, newHeaders)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}
@@ -1121,7 +1232,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
in, rxHeaders, err = o.fs.c.ObjectOpen(ctx, container, containerPath, !isRanging, headers)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
return
}
@@ -1211,7 +1322,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
_, rxHeaders, err = o.fs.c.Container(ctx, segmentsContainer)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err == swift.ContainerNotFound {
headers := swift.Headers{}
@@ -1220,7 +1331,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
}
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ContainerCreate(ctx, segmentsContainer, headers)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
}
if err != nil {
@@ -1241,7 +1352,8 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
if segmentInfos == nil || len(segmentInfos) == 0 {
return
}
deleteChunks(ctx, o, segmentsContainer, segmentInfos)
_ctx := context.Background()
deleteChunks(_ctx, o, segmentsContainer, segmentInfos)
})()
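Note the switch to context.Background() for the deferred segment cleanup: now that cancellation propagates through the retry helpers, the same cancellation that aborts an upload would otherwise also abort the deletion of its orphaned segments. A minimal illustration of why cleanup wants its own context (hypothetical names):

    package main

    import (
        "context"
        "fmt"
    )

    func upload(ctx context.Context) error {
        defer func() {
            // Deliberately NOT the request ctx: cleanup must still
            // run after the request itself has been cancelled.
            cleanupCtx := context.Background()
            fmt.Println("cleanup ran, ctx err:", cleanupCtx.Err()) // <nil>
        }()
        <-ctx.Done() // simulate an upload aborted by cancellation
        return ctx.Err()
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel()
        fmt.Println(upload(ctx)) // context canceled
    }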
for {
// can we read at least one byte?
@@ -1267,7 +1379,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
if err == nil {
segmentInfos = append(segmentInfos, segmentPath)
}
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err != nil {
return "", err
@@ -1281,7 +1393,7 @@ func (o *Object) updateChunks(ctx context.Context, in0 io.Reader, headers swift.
err = o.fs.pacer.Call(func() (bool, error) {
var rxHeaders swift.Headers
rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, emptyReader, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err == nil {
@@ -1356,7 +1468,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var rxHeaders swift.Headers
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
rxHeaders, err = o.fs.c.ObjectPut(ctx, container, containerPath, in, true, "", contentType, headers)
return shouldRetryHeaders(rxHeaders, err)
return shouldRetryHeaders(ctx, rxHeaders, err)
})
if err != nil {
return err
@@ -1414,7 +1526,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
// Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(ctx, container, containerPath)
return shouldRetry(err)
return shouldRetry(ctx, err)
})
if err != nil {
return err

View File

@@ -1,6 +1,7 @@
package swift
import (
"context"
"testing"
"time"
@@ -32,6 +33,7 @@ func TestInternalUrlEncode(t *testing.T) {
}
func TestInternalShouldRetryHeaders(t *testing.T) {
ctx := context.Background()
headers := swift.Headers{
"Content-Length": "64",
"Content-Type": "text/html; charset=UTF-8",
@@ -45,7 +47,7 @@ func TestInternalShouldRetryHeaders(t *testing.T) {
// Short sleep should just do the sleep
start := time.Now()
retry, gotErr := shouldRetryHeaders(headers, err)
retry, gotErr := shouldRetryHeaders(ctx, headers, err)
dt := time.Since(start)
assert.True(t, retry)
assert.Equal(t, err, gotErr)
@@ -54,7 +56,7 @@ func TestInternalShouldRetryHeaders(t *testing.T) {
// Long sleep should return RetryError
headers["Retry-After"] = "3600"
start = time.Now()
retry, gotErr = shouldRetryHeaders(headers, err)
retry, gotErr = shouldRetryHeaders(ctx, headers, err)
dt = time.Since(start)
assert.True(t, dt < time.Second)
assert.False(t, retry)

View File

@@ -80,6 +80,7 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("NoChunk", f.testNoChunk)
t.Run("WithChunk", f.testWithChunk)
t.Run("WithChunkFail", f.testWithChunkFail)
t.Run("CopyLargeObject", f.testCopyLargeObject)
}
func (f *Fs) testWithChunk(t *testing.T) {
@@ -154,4 +155,39 @@ func (f *Fs) testWithChunkFail(t *testing.T) {
require.Empty(t, objs)
}
func (f *Fs) testCopyLargeObject(t *testing.T) {
preConfChunkSize := f.opt.ChunkSize
preConfChunk := f.opt.NoChunk
f.opt.NoChunk = false
f.opt.ChunkSize = 1024 * fs.Byte
defer func() {
//restore old config after test
f.opt.ChunkSize = preConfChunkSize
f.opt.NoChunk = preConfChunk
}()
file := fstest.Item{
ModTime: fstest.Time("2020-12-31T04:05:06.499999999Z"),
Path: "large.txt",
Size: -1, // use unknown size during upload
}
const contentSize = 2048
contents := random.String(contentSize)
buf := bytes.NewBufferString(contents)
uploadHash := hash.NewMultiHasher()
in := io.TeeReader(buf, uploadHash)
file.Size = -1
obji := object.NewStaticObjectInfo(file.Path, file.ModTime, file.Size, true, nil, nil)
ctx := context.TODO()
obj, err := f.Features().PutStream(ctx, in, obji)
require.NoError(t, err)
require.NotEmpty(t, obj)
remoteTarget := "large.txt (copy)"
objTarget, err := f.Features().Copy(ctx, obj, remoteTarget)
require.NoError(t, err)
require.NotEmpty(t, objTarget)
require.Equal(t, obj.Size(), objTarget.Size())
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -59,7 +59,17 @@ func (d *Directory) candidates() []upstream.Entry {
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
entries, err := o.fs.actionEntries(o.candidates()...)
if err != nil {
if err == fs.ErrorPermissionDenied {
// There are no candidates in this object which can be written to,
// so attempt to create a new object instead
newO, err := o.fs.put(ctx, in, src, false, options...)
if err != nil {
return err
}
// Update current object
*o = *newO.(*Object)
return nil
} else if err != nil {
return err
}
if len(entries) == 1 {

View File

@@ -17,7 +17,9 @@ func init() {
type EpFF struct{}
func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath string) (*upstream.Fs, error) {
ch := make(chan *upstream.Fs)
ch := make(chan *upstream.Fs, len(upstreams))
ctx, cancel := context.WithCancel(ctx)
defer cancel()
for _, u := range upstreams {
u := u // Closure
go func() {
@@ -30,16 +32,10 @@ func (p *EpFF) epff(ctx context.Context, upstreams []*upstream.Fs, filePath stri
}()
}
var u *upstream.Fs
for i := 0; i < len(upstreams); i++ {
for range upstreams {
u = <-ch
if u != nil {
// close remaining goroutines
go func(num int) {
defer close(ch)
for i := 0; i < num; i++ {
<-ch
}
}(len(upstreams) - 1 - i)
break
}
}
if u == nil {
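The earlier version spawned a helper goroutine purely to drain the unbuffered channel so that losing probes could complete their sends. Sizing the channel to len(upstreams) and cancelling a derived context achieves the same with less machinery: every probe can always finish its send into the buffer, and cancellation tells stragglers to stop early. The shape of the pattern in isolation (hypothetical names):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // firstFound returns the index of the first probe to succeed. The
    // buffered channel means no send ever blocks, so no goroutine leaks
    // even though we may stop reading as soon as a winner appears.
    func firstFound(ctx context.Context, n int, probe func(context.Context, int) bool) int {
        ch := make(chan int, n) // capacity n: sends never block
        ctx, cancel := context.WithCancel(ctx)
        defer cancel() // stop remaining probes once we return
        for i := 0; i < n; i++ {
            i := i
            go func() {
                if probe(ctx, i) {
                    ch <- i
                } else {
                    ch <- -1
                }
            }()
        }
        for j := 0; j < n; j++ {
            if i := <-ch; i >= 0 {
                return i
            }
        }
        return -1
    }

    func main() {
        found := firstFound(context.Background(), 3, func(ctx context.Context, i int) bool {
            select {
            case <-time.After(time.Duration(i+1) * 10 * time.Millisecond):
                return i == 1 // pretend only upstream 1 has the file
            case <-ctx.Done():
                return false
            }
        })
        fmt.Println("first found:", found) // first found: 1
    }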

View File

@@ -0,0 +1,67 @@
package union
import (
"bytes"
"context"
"testing"
"time"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func (f *Fs) TestInternalReadOnly(t *testing.T) {
if f.name != "TestUnionRO" {
t.Skip("Only on RO union")
}
dir := "TestInternalReadOnly"
ctx := context.Background()
rofs := f.upstreams[len(f.upstreams)-1]
assert.False(t, rofs.IsWritable())
// Put a file onto the read only fs
contents := random.String(50)
file1 := fstest.NewItem(dir+"/file.txt", contents, time.Now())
_, obj1 := fstests.PutTestContents(ctx, t, rofs, &file1, contents, true)
// Check read from readonly fs via union
o, err := f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(50), o.Size())
// Now call Update on the union Object with new data
contents2 := random.String(100)
file2 := fstest.NewItem(dir+"/file.txt", contents2, time.Now())
in := bytes.NewBufferString(contents2)
src := object.NewStaticObjectInfo(file2.Path, file2.ModTime, file2.Size, true, nil, nil)
err = o.Update(ctx, in, src)
require.NoError(t, err)
assert.Equal(t, int64(100), o.Size())
// Check we read the new object via the union
o, err = f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(100), o.Size())
// Remove the object
assert.NoError(t, o.Remove(ctx))
// Check we read the old object in the read only layer now
o, err = f.NewObject(ctx, file1.Path)
require.NoError(t, err)
assert.Equal(t, int64(50), o.Size())
// Remove file and dir from read only fs
assert.NoError(t, obj1.Remove(ctx))
assert.NoError(t, rofs.Rmdir(ctx, dir))
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("ReadOnly", f.TestInternalReadOnly)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -2,13 +2,15 @@
package union_test
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"testing"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -24,17 +26,28 @@ func TestIntegration(t *testing.T) {
})
}
func makeTestDirs(t *testing.T, n int) (dirs []string, clean func()) {
for i := 1; i <= n; i++ {
dir, err := ioutil.TempDir("", fmt.Sprintf("rclone-union-test-%d", n))
require.NoError(t, err)
dirs = append(dirs, dir)
}
clean = func() {
for _, dir := range dirs {
err := os.RemoveAll(dir)
assert.NoError(t, err)
}
}
return dirs, clean
}
func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-standard1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-standard2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-standard3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnion"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
@@ -54,13 +67,9 @@ func TestRO(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-ro1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-ro2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-ro3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":ro " + tempdir3 + ":ro"
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + ":ro " + dirs[2] + ":ro"
name := "TestUnionRO"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
@@ -80,13 +89,9 @@ func TestNC(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-nc1")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-nc2")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-nc3")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + ":nc " + tempdir3 + ":nc"
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + ":nc " + dirs[2] + ":nc"
name := "TestUnionNC"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
@@ -106,13 +111,9 @@ func TestPolicy1(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy11")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy12")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy13")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy1"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
@@ -132,13 +133,9 @@ func TestPolicy2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy21")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy22")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy23")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy2"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
@@ -158,13 +155,9 @@ func TestPolicy3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir1 := filepath.Join(os.TempDir(), "rclone-union-test-policy31")
tempdir2 := filepath.Join(os.TempDir(), "rclone-union-test-policy32")
tempdir3 := filepath.Join(os.TempDir(), "rclone-union-test-policy33")
require.NoError(t, os.MkdirAll(tempdir1, 0744))
require.NoError(t, os.MkdirAll(tempdir2, 0744))
require.NoError(t, os.MkdirAll(tempdir3, 0744))
upstreams := tempdir1 + " " + tempdir2 + " " + tempdir3
dirs, clean := makeTestDirs(t, 3)
defer clean()
upstreams := dirs[0] + " " + dirs[1] + " " + dirs[2]
name := "TestUnionPolicy3"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",

View File

@@ -14,6 +14,7 @@ import (
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/fspath"
)
var (
@@ -62,7 +63,7 @@ type Entry interface {
// New creates a new Fs based on the
// string formatted `type:root_path(:ro/:nc)`
func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs, error) {
_, configName, fsPath, err := fs.ParseRemote(remote)
configName, fsPath, err := fspath.SplitFs(remote)
if err != nil {
return nil, err
}
@@ -83,15 +84,13 @@ func New(ctx context.Context, remote, root string, cacheTime time.Duration) (*Fs
f.creatable = false
fsPath = fsPath[0 : len(fsPath)-3]
}
if configName != "local" {
fsPath = configName + ":" + fsPath
}
rFs, err := cache.Get(ctx, fsPath)
remote = configName + fsPath
rFs, err := cache.Get(ctx, remote)
if err != nil && err != fs.ErrorIsFile {
return nil, err
}
f.RootFs = rFs
rootString := path.Join(fsPath, filepath.ToSlash(root))
rootString := path.Join(remote, filepath.ToSlash(root))
myFs, err := cache.Get(ctx, rootString)
if err != nil && err != fs.ErrorIsFile {
return nil, err

View File

@@ -196,7 +196,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// If we have a bearer token command and it has expired then refresh it
if f.opt.BearerTokenCommand != "" && resp != nil && resp.StatusCode == 401 {
fs.Debugf(f, "Bearer token expired: %v", err)
@@ -270,7 +273,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// does not exist
@@ -628,7 +631,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok {
@@ -800,7 +803,7 @@ func (f *Fs) _dirExists(ctx context.Context, dirPath string) (exists bool) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &result)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
return err == nil
}
@@ -822,7 +825,7 @@ func (f *Fs) _mkdir(ctx context.Context, dirPath string) error {
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.Error); ok {
// Check if it already exists. The response code for this isn't
@@ -911,7 +914,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, nil)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "rmdir failed")
@@ -974,7 +977,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "Copy call failed")
@@ -1070,7 +1073,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "DirMove MOVE call failed")
@@ -1112,7 +1115,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallXML(ctx, &opts, nil, &q)
return f.shouldRetry(resp, err)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "about call failed")
@@ -1240,7 +1243,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1291,7 +1294,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
if err != nil {
// Give the WebDAV server a chance to get its internal state in order after the
@@ -1318,7 +1321,7 @@ func (o *Object) Remove(ctx context.Context) error {
}
return o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(resp, err)
return o.fs.shouldRetry(ctx, resp, err)
})
}

View File

@@ -153,7 +153,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@@ -226,7 +229,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, options *api.
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -468,7 +471,7 @@ func (f *Fs) CreateDir(ctx context.Context, path string) (err error) {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
// fmt.Printf("CreateDir %q Error: %s\n", path, err.Error())
@@ -543,6 +546,9 @@ func (f *Fs) waitForJob(ctx context.Context, location string) (err error) {
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -585,6 +591,9 @@ func (f *Fs) delete(ctx context.Context, path string, hardDelete bool) (err erro
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -658,6 +667,9 @@ func (f *Fs) copyOrMove(ctx context.Context, method, src, dst string, overwrite
var body []byte
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
if fserrors.ContextError(ctx, &err) {
return false, err
}
if err != nil {
return fserrors.ShouldRetry(err), err
}
@@ -810,7 +822,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if apiErr, ok := err.(*api.ErrorResponse); ok {
@@ -848,7 +860,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return err
}
@@ -865,7 +877,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -999,7 +1011,7 @@ func (o *Object) setCustomProperty(ctx context.Context, property string, value s
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &cpr, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return err
}
@@ -1032,7 +1044,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &dl)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -1047,7 +1059,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -1071,7 +1083,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &ur)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@@ -1089,7 +1101,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
return err

View File

@@ -96,6 +96,11 @@ func init() {
log.Fatalf("Failed to configure token: %v", err)
}
}
if fs.GetConfig(ctx).AutoConfirm {
return
}
if err = setupRoot(ctx, name, m); err != nil {
log.Fatalf("Failed to configure root directory: %v", err)
}
@@ -161,7 +166,7 @@ type Object struct {
func setupRegion(m configmap.Mapper) {
region, ok := m.Get("region")
if !ok {
if !ok || region == "" {
log.Fatalf("No region set\n")
}
rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)
@@ -257,7 +262,10 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func shouldRetry(resp *http.Response, err error) (bool, error) {
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
authRetry := false
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
@@ -354,7 +362,7 @@ func (f *Fs) readMetaDataForID(ctx context.Context, id string) (*api.Item, error
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
@@ -369,6 +377,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err := configstruct.Set(m, opt); err != nil {
return nil, err
}
setupRegion(m)
root = parsePath(root)
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
@@ -450,7 +459,7 @@ OUTER:
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return found, errors.Wrap(err, "couldn't list files")
@@ -555,7 +564,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
//fmt.Printf("...Error %v\n", err)
@@ -643,7 +652,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
params.Set("filename", name)
params.Set("parent_id", parent)
params.Set("override-name-exist", strconv.FormatBool(true))
formReader, contentType, overhead, err := rest.MultipartUpload(in, nil, "content", name)
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name)
if err != nil {
return nil, errors.Wrap(err, "failed to make multipart upload")
}
@@ -664,7 +673,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
var uploadResponse *api.UploadResponse
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &uploadResponse)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "upload error")
@@ -746,7 +755,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) (err error) {
}
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &delete, nil)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return errors.Wrap(err, "delete object failed")
@@ -816,7 +825,7 @@ func (f *Fs) rename(ctx context.Context, id, name string) (item *api.Item, err e
var result *api.ItemInfo
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &rename, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "rename failed")
@@ -869,7 +878,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *api.ItemList
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "couldn't copy file")
@@ -914,7 +923,7 @@ func (f *Fs) move(ctx context.Context, srcID, parentID string) (item *api.Item,
var result *api.ItemList
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &moveFile, &result)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, errors.Wrap(err, "move failed")
@@ -1181,7 +1190,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(resp, err)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err

View File

@@ -44,10 +44,10 @@ var commandDefinition = &cobra.Command{
Use: "about remote:",
Short: `Get quota information from the remote.`,
Long: `
` + "`rclone about`" + `prints quota information about a remote to standard
` + "`rclone about`" + ` prints quota information about a remote to standard
output. The output is typically used, free, quota and trash contents.
E.g. Typical output from` + "`rclone about remote:`" + `is:
E.g. Typical output from ` + "`rclone about remote:`" + ` is:
Total: 17G
Used: 7.444G
@@ -75,7 +75,7 @@ Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
Trashed: 104857602
Other: 8849156022
A ` + "`--json`" + `flag generates conveniently computer readable output, e.g.
A ` + "`--json`" + ` flag generates conveniently computer readable output, e.g.
{
"total": 18253611008,

View File

@@ -47,6 +47,7 @@ import (
_ "github.com/rclone/rclone/cmd/reveal"
_ "github.com/rclone/rclone/cmd/rmdir"
_ "github.com/rclone/rclone/cmd/rmdirs"
_ "github.com/rclone/rclone/cmd/selfupdate"
_ "github.com/rclone/rclone/cmd/serve"
_ "github.com/rclone/rclone/cmd/settier"
_ "github.com/rclone/rclone/cmd/sha1sum"

View File

@@ -29,8 +29,8 @@ rclone config.
Use the --auth-no-open-browser flag to prevent rclone from opening the
auth link in the default browser automatically.`,
Run: func(command *cobra.Command, args []string) {
RunE: func(command *cobra.Command, args []string) error {
cmd.CheckArgs(1, 3, command, args)
config.Authorize(context.Background(), args, noAutoBrowser)
return config.Authorize(context.Background(), args, noAutoBrowser)
},
}

View File

@@ -25,6 +25,7 @@ import (
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/filter"
@@ -35,6 +36,7 @@ import (
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/buildinfo"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/terminal"
"github.com/spf13/cobra"
@@ -73,9 +75,24 @@ const (
// ShowVersion prints the version to stdout
func ShowVersion() {
osVersion, osKernel := buildinfo.GetOSVersion()
if osVersion == "" {
osVersion = "unknown"
}
if osKernel == "" {
osKernel = "unknown"
}
linking, tagString := buildinfo.GetLinkingAndTags()
fmt.Printf("rclone %s\n", fs.Version)
fmt.Printf("- os/arch: %s/%s\n", runtime.GOOS, runtime.GOARCH)
fmt.Printf("- go version: %s\n", runtime.Version())
fmt.Printf("- os/version: %s\n", osVersion)
fmt.Printf("- os/kernel: %s\n", osKernel)
fmt.Printf("- os/type: %s\n", runtime.GOOS)
fmt.Printf("- os/arch: %s\n", runtime.GOARCH)
fmt.Printf("- go/version: %s\n", runtime.Version())
fmt.Printf("- go/linking: %s\n", linking)
fmt.Printf("- go/tags: %s\n", tagString)
}
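After this change the version banner separates OS and Go details onto their own lines, falling back to "unknown" when detection fails. On a Linux build the output looks roughly like this (values illustrative):

    rclone v1.56.0-DEV
    - os/version: ubuntu 20.04 (64 bit)
    - os/kernel: 5.8.0-48-generic (x86_64)
    - os/type: linux
    - os/arch: amd64
    - go/version: go1.16.3
    - go/linking: static
    - go/tags: cmount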
// NewFsFile creates an Fs from a name but may point to a file.
@@ -83,7 +100,7 @@ func ShowVersion() {
// It returns a string with the file name if points to a file
// otherwise "".
func NewFsFile(remote string) (fs.Fs, string) {
_, _, fsPath, err := fs.ParseRemote(remote)
_, fsPath, err := fspath.SplitFs(remote)
if err != nil {
err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err)
@@ -382,6 +399,12 @@ func initConfig() {
// Finish parsing any command line flags
configflags.SetFlags(ci)
// Load the config
configfile.LoadConfig(ctx)
// Start accounting
accounting.Start(ctx)
// Hide console window
if ci.NoConsole {
terminal.HideConsole()
@@ -541,6 +564,9 @@ func Main() {
setupRootCommand(Root)
AddBackendFlags()
if err := Root.Execute(); err != nil {
if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
}
log.Fatalf("Fatal error: %v", err)
}
}

cmd/cmount/arch.go Normal file
View File

@@ -0,0 +1,7 @@
package cmount
// ProvidedBy returns true if the rclone build for the given OS
// provides support for lib/cgo-fuse
func ProvidedBy(osName string) bool {
return osName == "windows" || osName == "darwin"
}

View File

@@ -21,12 +21,13 @@ import (
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/buildinfo"
"github.com/rclone/rclone/vfs"
)
func init() {
name := "cmount"
cmountOnly := runtime.GOOS == "windows" || runtime.GOOS == "darwin"
cmountOnly := ProvidedBy(runtime.GOOS)
if cmountOnly {
name = "mount"
}
@@ -35,6 +36,7 @@ func init() {
cmd.Aliases = append(cmd.Aliases, "cmount")
}
mountlib.AddRc("cmount", mount)
buildinfo.Tags = append(buildinfo.Tags, "cmount")
}
// Find the option string in the current options

View File

@@ -3,6 +3,7 @@ package copyurl
import (
"context"
"errors"
"fmt"
"os"
"github.com/rclone/rclone/cmd"
@@ -13,15 +14,17 @@ import (
)
var (
autoFilename = false
stdout = false
noClobber = false
autoFilename = false
printFilename = false
stdout = false
noClobber = false
)
func init() {
cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &autoFilename, "auto-filename", "a", autoFilename, "Get the file name from the URL and use it for destination file path")
flags.BoolVarP(cmdFlags, &printFilename, "print-filename", "p", printFilename, "Print the resulting name from --auto-filename")
flags.BoolVarP(cmdFlags, &noClobber, "no-clobber", "", noClobber, "Prevent overwriting file with same name")
flags.BoolVarP(cmdFlags, &stdout, "stdout", "", stdout, "Write the output to stdout rather than a file")
}
@@ -33,15 +36,16 @@ var commandDefinition = &cobra.Command{
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting --auto-filename will cause the file name to be retrieved from
Setting ` + "`--auto-filename`" + ` will cause the file name to be retrieved from
the URL (after any redirections) and used in the destination
path.
path. With ` + "`--print-filename`" + ` in addition, the resulting file name will
be printed.
Setting --no-clobber will prevent overwriting file on the
Setting ` + "`--no-clobber`" + ` will prevent overwriting file on the
destination if there is one with the same name.
Setting --stdout or making the output file name "-" will cause the
output to be written to standard output.
Setting ` + "`--stdout`" + ` or making the output file name ` + "`-`" + `
will cause the output to be written to standard output.
`,
RunE: func(command *cobra.Command, args []string) (err error) {
cmd.CheckArgs(1, 2, command, args)
@@ -61,10 +65,14 @@ output to be written to standard output.
}
}
cmd.Run(true, true, command, func() error {
var dst fs.Object
if stdout {
err = operations.CopyURLToWriter(context.Background(), args[0], os.Stdout)
} else {
_, err = operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename, noClobber)
dst, err = operations.CopyURL(context.Background(), fsdst, dstFileName, args[0], autoFilename, noClobber)
if printFilename && err == nil && dst != nil {
fmt.Println(dst.Remote())
}
}
return err
})

View File

@@ -32,8 +32,8 @@ By default ` + "`dedupe`" + ` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different. This is known as deduping by name.
Deduping by name is only useful with backends like Google Drive which
can have duplicate file names. It can be run on wrapping backends
Deduping by name is only useful with a small group of backends (e.g. Google Drive,
Opendrive) that can have duplicate file names. It can be run on wrapping backends
(e.g. crypt) if they wrap a backend which supports duplicate file
names.

View File

@@ -3,7 +3,6 @@ package link
import (
"context"
"fmt"
"time"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs"
@@ -13,7 +12,7 @@ import (
)
var (
expire = fs.Duration(time.Hour * 24 * 365 * 100)
expire = fs.DurationOff
unlink = false
)


@@ -41,7 +41,7 @@ When used with the -l flag it lists the types too.
}
for _, remote := range remotes {
if listLong {
remoteType := config.FileGet(remote, "type", "UNKNOWN")
remoteType := config.FileGet(remote, "type")
fmt.Printf("%-*s %s\n", maxlen+1, remote+":", remoteType)
} else {
fmt.Printf("%s:\n", remote)


@@ -181,6 +181,15 @@ func (d *Dir) Remove(ctx context.Context, req *fuse.RemoveRequest) (err error) {
return nil
}
// Invalidate a leaf in a directory
func (d *Dir) invalidateEntry(dirNode fusefs.Node, leaf string) {
fs.Debugf(dirNode, "Invalidating %q", leaf)
err := d.fsys.server.InvalidateEntry(dirNode, leaf)
if err != nil {
fs.Debugf(dirNode, "Failed to invalidate %q: %v", leaf, err)
}
}
// Check interface satisfied
var _ fusefs.NodeRenamer = (*Dir)(nil)
@@ -197,6 +206,13 @@ func (d *Dir) Rename(ctx context.Context, req *fuse.RenameRequest, newDir fusefs
return translateError(err)
}
// Invalidate the new directory entry so it gets re-read (in
// the background otherwise we cause a deadlock)
//
// See https://github.com/rclone/rclone/issues/4977 for why
go d.invalidateEntry(newDir, req.NewName)
//go d.invalidateEntry(d, req.OldName)
return nil
}


@@ -20,8 +20,9 @@ import (
// FS represents the top level filing system
type FS struct {
*vfs.VFS
f fs.Fs
opt *mountlib.Options
f fs.Fs
opt *mountlib.Options
server *fusefs.Server
}
// Check interface satisfied


@@ -91,12 +91,12 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
}
filesys := NewFS(VFS, opt)
server := fusefs.New(c, nil)
filesys.server = fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
errChan := make(chan error, 1)
go func() {
err := server.Serve(filesys)
err := filesys.server.Serve(filesys)
closeErr := c.Close()
if err == nil {
err = closeErr


@@ -334,7 +334,7 @@ metadata about files like in UNIX. One case that may arise is that other program
(incorrectly) interprets this as the file being accessible by everyone. For example
an SSH client may warn about "unprotected private key file".
WinFsp 2021 (version 1.9, still in beta) introduces a new FUSE option "FileSecurity",
WinFsp 2021 (version 1.9) introduces a new FUSE option "FileSecurity",
that allows the complete specification of file security descriptors using
[SDDL](https://docs.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format).
With this you can work around issues such as the mentioned "unprotected private key file"


@@ -13,6 +13,7 @@ import (
_ "github.com/rclone/rclone/cmd/cmount"
_ "github.com/rclone/rclone/cmd/mount"
_ "github.com/rclone/rclone/cmd/mount2"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/rc"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -20,6 +21,7 @@ import (
func TestRc(t *testing.T) {
ctx := context.Background()
configfile.LoadConfig(ctx)
mount := rc.Calls.Get("mount/mount")
assert.NotNil(t, mount)
unmount := rc.Calls.Get("mount/unmount")


@@ -3,10 +3,13 @@ package rcd
import (
"context"
"log"
"sync"
sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/fs/rc/rcserver"
"github.com/rclone/rclone/lib/atexit"
"github.com/spf13/cobra"
)
@@ -48,6 +51,22 @@ See the [rc documentation](/rc/) for more info on the rc flags.
log.Fatal("rc server not configured")
}
// Notify stopping on exit
var finaliseOnce sync.Once
finalise := func() {
finaliseOnce.Do(func() {
_ = sysdnotify.Stopping()
})
}
fnHandle := atexit.Register(finalise)
defer atexit.Unregister(fnHandle)
// Notify ready to systemd
if err := sysdnotify.Ready(); err != nil {
log.Fatalf("failed to notify ready to systemd: %v", err)
}
s.Wait()
finalise()
},
}
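
For context, the ready/stopping handshake above follows the standard sd_notify protocol. A minimal sketch using the same notify library (when `NOTIFY_SOCKET` is not set, i.e. outside a systemd `Type=notify` unit, the calls should be no-ops):

```go
package main

import (
	"time"

	sysdnotify "github.com/iguanesolutions/go-systemd/v5/notify"
)

func main() {
	// Tell systemd the service is ready (Type=notify units wait for this).
	if err := sysdnotify.Ready(); err != nil {
		panic(err)
	}
	// Stand-in for the real server's Wait().
	time.Sleep(time.Second)
	// Tell systemd we are shutting down.
	_ = sysdnotify.Stopping()
}
```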

cmd/selfupdate/help.go (new file)

@@ -0,0 +1,53 @@
// +build !noselfupdate
package selfupdate
// Note: "|" will be replaced by backticks in the help string below
var selfUpdateHelp string = `
This command downloads the latest release of rclone and replaces
the currently running binary. The download is verified with a hashsum
and cryptographically signed signature.
If used without flags (or with the implied |--stable| flag), this command
will install the latest stable release. However, some issues may be fixed
(or features added) only in the latest beta release. In such cases you should
run the command with the |--beta| flag, i.e. |rclone selfupdate --beta|.
You can check in advance what version would be installed by adding the
|--check| flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable
rclone release to troubleshoot your issue or try a bleeding edge feature.
The |--version VER| flag, if given, will update to that specific version
instead of the latest one. If you omit the micro version from |VER| (for
example |1.53|), the latest matching micro version will be used.
Upon successful update rclone will print a message containing the previous
version number. Keep a note of it in case you later decide to revert the
update, which you can do by running the following command:
|rclone selfupdate [--beta] OLDVER|.
If the old version contains only dots and digits (for example |v1.54.0|)
then it's a stable release, so you won't need the |--beta| flag. Beta releases
have additional version information similar to |v1.54.0-beta.5111.06f1c0c61|.
(If you are a developer using a locally built rclone, the version number
will end with |-DEV|; you will have to rebuild it yourself, as local builds
obviously can't be distributed.)
If you previously installed rclone via a package manager, the package may
include local documentation or configure services. You may wish to update
with the flag |--package deb| or |--package rpm| (whichever is correct for
your OS) to update these too. With the default |--package zip| this command
will update only the rclone executable, so the local manual may become
out of date after the update.
The |rclone mount| command (https://rclone.org/commands/rclone_mount/) may
or may not support extended FUSE options depending on the build and OS.
|selfupdate| will refuse to update if the capability would be discarded.
Note: Windows forbids deletion of a currently running executable so this
command will rename the old executable to 'rclone.old.exe' upon success.
Please note that this command was not available before rclone version 1.55.
If it fails for you with the message |unknown command "selfupdate"| then
you will need to update manually following the install instructions located
at https://rclone.org/install/
`


@@ -0,0 +1,11 @@
// +build noselfupdate
package selfupdate
import (
"github.com/rclone/rclone/lib/buildinfo"
)
func init() {
buildinfo.Tags = append(buildinfo.Tags, "noselfupdate")
}


@@ -0,0 +1,477 @@
// +build !noselfupdate
package selfupdate
import (
"archive/zip"
"bufio"
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"strings"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/cmount"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/buildinfo"
"github.com/rclone/rclone/lib/random"
"github.com/spf13/cobra"
versionCmd "github.com/rclone/rclone/cmd/version"
)
// Options contains options for the self-update command
type Options struct {
Check bool
Output string // output path
Beta bool // mutually exclusive with Stable (false means "stable")
Stable bool // mutually exclusive with Beta
Version string
Package string // package format: zip, deb, rpm (empty string means "zip")
}
// Opt is options set via command line
var Opt = Options{}
func init() {
cmd.Root.AddCommand(cmdSelfUpdate)
cmdFlags := cmdSelfUpdate.Flags()
flags.BoolVarP(cmdFlags, &Opt.Check, "check", "", Opt.Check, "Check for latest release, do not download.")
flags.StringVarP(cmdFlags, &Opt.Output, "output", "", Opt.Output, "Save the downloaded binary at a given path (default: replace running binary)")
flags.BoolVarP(cmdFlags, &Opt.Stable, "stable", "", Opt.Stable, "Install stable release (this is the default)")
flags.BoolVarP(cmdFlags, &Opt.Beta, "beta", "", Opt.Beta, "Install beta release.")
flags.StringVarP(cmdFlags, &Opt.Version, "version", "", Opt.Version, "Install the given rclone version (default: latest)")
flags.StringVarP(cmdFlags, &Opt.Package, "package", "", Opt.Package, "Package format: zip|deb|rpm (default: zip)")
}
var cmdSelfUpdate = &cobra.Command{
Use: "selfupdate",
Aliases: []string{"self-update"},
Short: `Update the rclone binary.`,
Long: strings.ReplaceAll(selfUpdateHelp, "|", "`"),
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
if Opt.Package == "" {
Opt.Package = "zip"
}
gotActionFlags := Opt.Stable || Opt.Beta || Opt.Output != "" || Opt.Version != "" || Opt.Package != "zip"
if Opt.Check && !gotActionFlags {
versionCmd.CheckVersion()
return
}
if Opt.Package != "zip" {
if Opt.Package != "deb" && Opt.Package != "rpm" {
log.Fatalf("--package should be one of zip|deb|rpm")
}
if runtime.GOOS != "linux" {
log.Fatalf(".deb and .rpm packages are supported only on Linux")
} else if os.Geteuid() != 0 && !Opt.Check {
log.Fatalf(".deb and .rpm must be installed by root")
}
if Opt.Output != "" && !Opt.Check {
fmt.Println("Warning: --output is ignored with --package deb|rpm")
}
}
if err := InstallUpdate(context.Background(), &Opt); err != nil {
log.Fatalf("Error: %v", err)
}
},
}
// GetVersion can get the latest release number from the download site
// or massage a stable release number - prepend semantic "v" prefix
// or find the latest micro release for a given major.minor release.
// Note: this will not be applied to beta releases.
func GetVersion(ctx context.Context, beta bool, version string) (newVersion, siteURL string, err error) {
siteURL = "https://downloads.rclone.org"
if beta {
siteURL = "https://beta.rclone.org"
}
if version == "" {
// Request the latest release number from the download site
_, newVersion, _, err = versionCmd.GetVersion(siteURL + "/version.txt")
return
}
newVersion = version
if version[0] != 'v' {
newVersion = "v" + version
}
if beta {
return
}
if valid, _ := regexp.MatchString(`^v\d+\.\d+(\.\d+)?$`, newVersion); !valid {
return "", siteURL, errors.New("invalid semantic version")
}
// Find the latest stable micro release
if strings.Count(newVersion, ".") == 1 {
html, err := downloadFile(ctx, siteURL)
if err != nil {
return "", siteURL, errors.Wrap(err, "failed to get list of releases")
}
reSubver := fmt.Sprintf(`href="\./%s\.\d+/"`, regexp.QuoteMeta(newVersion))
allSubvers := regexp.MustCompile(reSubver).FindAllString(string(html), -1)
if allSubvers == nil {
return "", siteURL, errors.New("could not find the minor release")
}
// Use the fact that releases in the index are sorted by date
lastSubver := allSubvers[len(allSubvers)-1]
newVersion = lastSubver[8 : len(lastSubver)-2]
}
return
}
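
A quick illustration of the micro-release lookup above, run against made-up index HTML (the real index at downloads.rclone.org is sorted by date, which is what the last-match trick relies on):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of the download index page.
	html := `<a href="./v1.53.1/">v1.53.1</a>
<a href="./v1.53.2/">v1.53.2</a>`
	newVersion := "v1.53"
	reSubver := fmt.Sprintf(`href="\./%s\.\d+/"`, regexp.QuoteMeta(newVersion))
	allSubvers := regexp.MustCompile(reSubver).FindAllString(html, -1)
	if allSubvers == nil {
		fmt.Println("no matching micro release")
		return
	}
	// Entries are sorted by date, so the last match is the newest.
	lastSubver := allSubvers[len(allSubvers)-1]
	fmt.Println(lastSubver[8 : len(lastSubver)-2]) // v1.53.2
}
```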
// InstallUpdate performs rclone self-update
func InstallUpdate(ctx context.Context, opt *Options) error {
// Find the latest release number
if opt.Stable && opt.Beta {
return errors.New("--stable and --beta are mutually exclusive")
}
// The `cmount` tag is added by cmd/cmount/mount.go only if build is static.
_, tags := buildinfo.GetLinkingAndTags()
if strings.Contains(" "+tags+" ", " cmount ") && !cmount.ProvidedBy(runtime.GOOS) {
return errors.New("updating would discard the mount FUSE capability, aborting")
}
newVersion, siteURL, err := GetVersion(ctx, opt.Beta, opt.Version)
if err != nil {
return errors.Wrap(err, "unable to detect new version")
}
oldVersion := fs.Version
if newVersion == oldVersion {
fmt.Println("rclone is up to date")
return nil
}
// Install .deb/.rpm package if requested by user
if opt.Package == "deb" || opt.Package == "rpm" {
if opt.Check {
fmt.Println("Warning: --package flag is ignored in --check mode")
} else {
err := installPackage(ctx, opt.Beta, newVersion, siteURL, opt.Package)
if err == nil {
fmt.Printf("Successfully updated rclone package from version %s to version %s\n", oldVersion, newVersion)
}
return err
}
}
// Get the current executable path
executable, err := os.Executable()
if err != nil {
return errors.Wrap(err, "unable to find executable")
}
targetFile := opt.Output
if targetFile == "" {
targetFile = executable
}
if opt.Check {
fmt.Printf("Without --check this would install rclone version %s at %s\n", newVersion, targetFile)
return nil
}
// Make temporary file names and check for possible access errors in advance
var newFile string
if newFile, err = makeRandomExeName(targetFile, "new"); err != nil {
return err
}
savedFile := ""
if runtime.GOOS == "windows" {
savedFile = targetFile
if strings.HasSuffix(savedFile, ".exe") {
savedFile = savedFile[:len(savedFile)-4]
}
savedFile += ".old.exe"
}
if savedFile == executable || newFile == executable {
return fmt.Errorf("%s: a temporary file would overwrite the executable, specify a different --output path", targetFile)
}
if err := verifyAccess(targetFile); err != nil {
return err
}
// Download the update as a temporary file
err = downloadUpdate(ctx, opt.Beta, newVersion, siteURL, newFile, "zip")
if err != nil {
return errors.Wrap(err, "failed to update rclone")
}
err = replaceExecutable(targetFile, newFile, savedFile)
if err == nil {
fmt.Printf("Successfully updated rclone from version %s to version %s\n", oldVersion, newVersion)
}
return err
}
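
Since `GetVersion` and `InstallUpdate` are exported, the updater can in principle be driven programmatically as well. A hedged sketch (this is not a documented API; the import path is assumed from the file's location in the repo):

```go
package main

import (
	"context"
	"log"

	"github.com/rclone/rclone/cmd/selfupdate"
)

func main() {
	// Roughly `rclone selfupdate --check --beta`: report what would be
	// installed without touching the running binary.
	opt := &selfupdate.Options{Beta: true, Check: true, Package: "zip"}
	if err := selfupdate.InstallUpdate(context.Background(), opt); err != nil {
		log.Fatal(err)
	}
}
```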
func installPackage(ctx context.Context, beta bool, version, siteURL, packageFormat string) error {
tempFile, err := ioutil.TempFile("", "rclone.*."+packageFormat)
if err != nil {
return errors.Wrap(err, "unable to write temporary package")
}
packageFile := tempFile.Name()
_ = tempFile.Close()
defer func() {
if rmErr := os.Remove(packageFile); rmErr != nil {
fs.Errorf(nil, "%s: could not remove temporary package: %v", packageFile, rmErr)
}
}()
if err := downloadUpdate(ctx, beta, version, siteURL, packageFile, packageFormat); err != nil {
return err
}
packageCommand := "dpkg"
if packageFormat == "rpm" {
packageCommand = "rpm"
}
cmd := exec.Command(packageCommand, "-i", packageFile)
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
if err := cmd.Run(); err != nil {
return fmt.Errorf("failed to run %s: %v", packageCommand, err)
}
return nil
}
func replaceExecutable(targetFile, newFile, savedFile string) error {
// Copy permission bits from the old executable
// (it was extracted with mode 0755)
fileInfo, err := os.Lstat(targetFile)
if err == nil {
if err = os.Chmod(newFile, fileInfo.Mode()); err != nil {
return errors.Wrap(err, "failed to set permission")
}
}
if err = os.Remove(targetFile); os.IsNotExist(err) {
err = nil
}
if err != nil && savedFile != "" {
// Windows forbids removal of a running executable, so we rename it instead.
// First, try to rename the original file to its name with ".old.exe" appended.
var saveErr error
if saveErr = os.Remove(savedFile); os.IsNotExist(saveErr) {
saveErr = nil
}
if saveErr == nil {
saveErr = os.Rename(targetFile, savedFile)
}
if saveErr != nil {
// The ".old" file cannot be removed or cannot be renamed to.
// This usually means that the running executable has a name with ".old".
// This can happen in very rare cases, but we ought to handle it.
// Try inserting a randomness in the name to mitigate it.
fs.Debugf(nil, "%s: cannot replace old file, randomizing name", savedFile)
savedFile, saveErr = makeRandomExeName(targetFile, "old")
if saveErr == nil {
if saveErr = os.Remove(savedFile); os.IsNotExist(saveErr) {
saveErr = nil
}
}
if saveErr == nil {
saveErr = os.Rename(targetFile, savedFile)
}
}
if saveErr == nil {
fmt.Printf("The old executable was saved as %s\n", savedFile)
err = nil
}
}
if err == nil {
err = os.Rename(newFile, targetFile)
}
if err != nil {
if rmErr := os.Remove(newFile); rmErr != nil {
fs.Errorf(nil, "%s: could not remove temporary file: %v", newFile, rmErr)
}
return err
}
return nil
}
func makeRandomExeName(baseName, extension string) (string, error) {
const maxAttempts = 5
if runtime.GOOS == "windows" {
if strings.HasSuffix(baseName, ".exe") {
baseName = baseName[:len(baseName)-4]
}
extension += ".exe"
}
for attempt := 0; attempt < maxAttempts; attempt++ {
filename := fmt.Sprintf("%s.%s.%s", baseName, random.String(4), extension)
if _, err := os.Stat(filename); os.IsNotExist(err) {
return filename, nil
}
}
return "", fmt.Errorf("cannot find a file name like %s.xxxx.%s", baseName, extension)
}
func downloadUpdate(ctx context.Context, beta bool, version, siteURL, newFile, packageFormat string) error {
osName := runtime.GOOS
arch := runtime.GOARCH
if osName == "darwin" {
osName = "osx"
}
archiveFilename := fmt.Sprintf("rclone-%s-%s-%s.%s", version, osName, arch, packageFormat)
archiveURL := fmt.Sprintf("%s/%s/%s", siteURL, version, archiveFilename)
archiveBuf, err := downloadFile(ctx, archiveURL)
if err != nil {
return err
}
gotHash := sha256.Sum256(archiveBuf)
strHash := hex.EncodeToString(gotHash[:])
fs.Debugf(nil, "downloaded release archive with hashsum %s from %s", strHash, archiveURL)
// CI/CD does not provide hashsums for beta releases
if !beta {
if err := verifyHashsum(ctx, siteURL, version, archiveFilename, gotHash[:]); err != nil {
return err
}
}
if packageFormat == "deb" || packageFormat == "rpm" {
if err := ioutil.WriteFile(newFile, archiveBuf, 0644); err != nil {
return errors.Wrap(err, "cannot write temporary ."+packageFormat)
}
return nil
}
entryName := fmt.Sprintf("rclone-%s-%s-%s/rclone", version, osName, arch)
if runtime.GOOS == "windows" {
entryName += ".exe"
}
// Extract executable to a temporary file, then replace it by an instant rename
err = extractZipToFile(archiveBuf, entryName, newFile)
if err != nil {
return err
}
fs.Debugf(nil, "extracted %s to %s", entryName, newFile)
return nil
}
func verifyAccess(file string) error {
admin := "root"
if runtime.GOOS == "windows" {
admin = "Administrator"
}
fileInfo, fileErr := os.Lstat(file)
if fileErr != nil {
dir := filepath.Dir(file)
dirInfo, dirErr := os.Lstat(dir)
if dirErr != nil {
return dirErr
}
if !dirInfo.Mode().IsDir() {
return fmt.Errorf("%s: parent path is not a directory, specify a different path using --output", dir)
}
if !writable(dir) {
return fmt.Errorf("%s: directory is not writable, please run self-update as %s", dir, admin)
}
}
if fileErr == nil && !fileInfo.Mode().IsRegular() {
return fmt.Errorf("%s: path is not a normal file, specify a different path using --output", file)
}
if fileErr == nil && !writable(file) {
return fmt.Errorf("%s: file is not writable, run self-update as %s", file, admin)
}
return nil
}
func findFileHash(buf []byte, filename string) (hash []byte, err error) {
lines := bufio.NewScanner(bytes.NewReader(buf))
for lines.Scan() {
tokens := strings.Split(lines.Text(), "  ") // sha256sum separates hash and name with two spaces
if len(tokens) == 2 && tokens[1] == filename {
if hash, err := hex.DecodeString(tokens[0]); err == nil {
return hash, nil
}
}
}
return nil, fmt.Errorf("%s: unable to find hash", filename)
}
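
For reference, `findFileHash` expects `sha256sum`-style lines: a hex digest and a file name separated by two spaces. A tiny sketch of that parsing with a made-up digest and file name:

```go
package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// One invented line in SHA256SUMS format: "<hex digest>  <file name>".
	sums := "aa11bb22  rclone-v1.55.0-linux-amd64.zip\n"
	lines := bufio.NewScanner(strings.NewReader(sums))
	for lines.Scan() {
		tokens := strings.Split(lines.Text(), "  ")
		if len(tokens) == 2 && tokens[1] == "rclone-v1.55.0-linux-amd64.zip" {
			hash, err := hex.DecodeString(tokens[0])
			fmt.Println(hash, err) // [170 17 187 34] <nil>
		}
	}
}
```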
func extractZipToFile(buf []byte, entryName, newFile string) error {
zipReader, err := zip.NewReader(bytes.NewReader(buf), int64(len(buf)))
if err != nil {
return err
}
var reader io.ReadCloser
for _, entry := range zipReader.File {
if entry.Name == entryName {
reader, err = entry.Open()
break
}
}
if reader == nil || err != nil {
return fmt.Errorf("%s: file not found in archive", entryName)
}
defer func() {
_ = reader.Close()
}()
err = os.Remove(newFile)
if err != nil && !os.IsNotExist(err) {
return fmt.Errorf("%s: unable to create new file: %v", newFile, err)
}
writer, err := os.OpenFile(newFile, os.O_CREATE|os.O_EXCL|os.O_WRONLY, os.FileMode(0755))
if err != nil {
return err
}
_, err = io.Copy(writer, reader)
_ = writer.Close()
if err != nil {
if rmErr := os.Remove(newFile); rmErr != nil {
fs.Errorf(nil, "%s: could not remove temporary file: %v", newFile, rmErr)
}
}
return err
}
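
The extract-then-rename pattern can be exercised end to end with an in-memory archive. A sketch (entry name and output path are invented; like the code above, it refuses to overwrite an existing file because of O_EXCL):

```go
package main

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io"
	"os"
)

func main() {
	// Build a small zip in memory with a single entry.
	var buf bytes.Buffer
	zw := zip.NewWriter(&buf)
	w, _ := zw.Create("rclone-v9.99.9-linux-amd64/rclone")
	_, _ = w.Write([]byte("#!/bin/sh\necho fake rclone\n"))
	_ = zw.Close()

	// Extract the entry the same way extractZipToFile does.
	zr, err := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()))
	if err != nil {
		panic(err)
	}
	for _, entry := range zr.File {
		if entry.Name == "rclone-v9.99.9-linux-amd64/rclone" {
			r, _ := entry.Open()
			out, _ := os.OpenFile("rclone.new", os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0755)
			_, _ = io.Copy(out, r)
			_ = out.Close()
			_ = r.Close()
			fmt.Println("extracted to rclone.new")
		}
	}
}
```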
func downloadFile(ctx context.Context, url string) ([]byte, error) {
resp, err := fshttp.NewClient(ctx).Get(url)
if err != nil {
return nil, err
}
defer fs.CheckClose(resp.Body, &err)
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("failed with %s downloading %s", resp.Status, url)
}
return ioutil.ReadAll(resp.Body)
}


@@ -0,0 +1,200 @@
// +build !noselfupdate
package selfupdate
import (
"context"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
)
func TestGetVersion(t *testing.T) {
testy.SkipUnreliable(t)
ctx := context.Background()
// a beta version can only have "v" prepended
resultVer, _, err := GetVersion(ctx, true, "1.2.3.4")
assert.NoError(t, err)
assert.Equal(t, "v1.2.3.4", resultVer)
// but a stable version syntax should be checked
_, _, err = GetVersion(ctx, false, "1")
assert.Error(t, err)
_, _, err = GetVersion(ctx, false, "1.")
assert.Error(t, err)
_, _, err = GetVersion(ctx, false, "1.2.")
assert.Error(t, err)
_, _, err = GetVersion(ctx, false, "1.2.3.4")
assert.Error(t, err)
// incomplete stable version should have micro release added
resultVer, _, err = GetVersion(ctx, false, "1.52")
assert.NoError(t, err)
assert.Equal(t, "v1.52.3", resultVer)
}
func makeTestDir() (testDir string, err error) {
const maxAttempts = 5
testDirBase := filepath.Join(os.TempDir(), "rclone-test-selfupdate.")
for attempt := 0; attempt < maxAttempts; attempt++ {
testDir = testDirBase + random.String(4)
err = os.MkdirAll(testDir, os.ModePerm)
if err == nil {
break
}
}
return
}
func TestInstallOnLinux(t *testing.T) {
testy.SkipUnreliable(t)
if runtime.GOOS != "linux" {
t.Skip("this is a Linux only test")
}
// Prepare for test
ctx := context.Background()
testDir, err := makeTestDir()
assert.NoError(t, err)
path := filepath.Join(testDir, "rclone")
defer func() {
_ = os.Chmod(path, 0644)
_ = os.RemoveAll(testDir)
}()
regexVer := regexp.MustCompile(`v[0-9]\S+`)
betaVer, _, err := GetVersion(ctx, true, "")
assert.NoError(t, err)
// Must do nothing if version isn't changing
assert.NoError(t, InstallUpdate(ctx, &Options{Beta: true, Output: path, Version: fs.Version}))
// Must fail on non-writable file
assert.NoError(t, ioutil.WriteFile(path, []byte("test"), 0644))
assert.NoError(t, os.Chmod(path, 0000))
err = (InstallUpdate(ctx, &Options{Beta: true, Output: path}))
assert.Error(t, err)
assert.Contains(t, err.Error(), "run self-update as root")
// Must keep non-standard permissions
assert.NoError(t, os.Chmod(path, 0644))
assert.NoError(t, InstallUpdate(ctx, &Options{Beta: true, Output: path}))
info, err := os.Stat(path)
assert.NoError(t, err)
assert.Equal(t, os.FileMode(0644), info.Mode().Perm())
// Must remove temporary files
files, err := ioutil.ReadDir(testDir)
assert.NoError(t, err)
assert.Equal(t, 1, len(files))
// Must contain valid executable
assert.NoError(t, os.Chmod(path, 0755))
cmd := exec.Command(path, "version")
output, err := cmd.CombinedOutput()
assert.NoError(t, err)
assert.True(t, cmd.ProcessState.Success())
assert.Equal(t, betaVer, regexVer.FindString(string(output)))
}
func TestRenameOnWindows(t *testing.T) {
testy.SkipUnreliable(t)
if runtime.GOOS != "windows" {
t.Skip("this is a Windows only test")
}
// Prepare for test
ctx := context.Background()
testDir, err := makeTestDir()
assert.NoError(t, err)
defer func() {
_ = os.RemoveAll(testDir)
}()
path := filepath.Join(testDir, "rclone.exe")
regexVer := regexp.MustCompile(`v[0-9]\S+`)
stableVer, _, err := GetVersion(ctx, false, "")
assert.NoError(t, err)
betaVer, _, err := GetVersion(ctx, true, "")
assert.NoError(t, err)
// Must not create temporary files when target doesn't exist
assert.NoError(t, InstallUpdate(ctx, &Options{Beta: true, Output: path}))
files, err := ioutil.ReadDir(testDir)
assert.NoError(t, err)
assert.Equal(t, 1, len(files))
// Must save running executable as the "old" file
cmdWait := exec.Command(path, "config")
stdinWait, err := cmdWait.StdinPipe() // Make it run waiting for input
assert.NoError(t, err)
assert.NoError(t, cmdWait.Start())
assert.NoError(t, InstallUpdate(ctx, &Options{Beta: false, Output: path}))
files, err = ioutil.ReadDir(testDir)
assert.NoError(t, err)
assert.Equal(t, 2, len(files))
pathOld := filepath.Join(testDir, "rclone.old.exe")
_, err = os.Stat(pathOld)
assert.NoError(t, err)
cmd := exec.Command(path, "version")
output, err := cmd.CombinedOutput()
assert.NoError(t, err)
assert.True(t, cmd.ProcessState.Success())
assert.Equal(t, stableVer, regexVer.FindString(string(output)))
cmdOld := exec.Command(pathOld, "version")
output, err = cmdOld.CombinedOutput()
assert.NoError(t, err)
assert.True(t, cmdOld.ProcessState.Success())
assert.Equal(t, betaVer, regexVer.FindString(string(output)))
// Stop previous waiting executable, run new and saved executables
_ = stdinWait.Close()
_ = cmdWait.Wait()
time.Sleep(100 * time.Millisecond)
cmdWait = exec.Command(path, "config")
stdinWait, err = cmdWait.StdinPipe()
assert.NoError(t, err)
assert.NoError(t, cmdWait.Start())
cmdWaitOld := exec.Command(pathOld, "config")
stdinWaitOld, err := cmdWaitOld.StdinPipe()
assert.NoError(t, err)
assert.NoError(t, cmdWaitOld.Start())
// Updating when the "old" executable is running must produce a random "old" file
assert.NoError(t, InstallUpdate(ctx, &Options{Beta: true, Output: path}))
files, err = ioutil.ReadDir(testDir)
assert.NoError(t, err)
assert.Equal(t, 3, len(files))
// Stop all waiting executables
_ = stdinWait.Close()
_ = cmdWait.Wait()
_ = stdinWaitOld.Close()
_ = cmdWaitOld.Wait()
time.Sleep(100 * time.Millisecond)
}

cmd/selfupdate/verify.go (new file)

@@ -0,0 +1,74 @@
// +build !noselfupdate
package selfupdate
import (
"bytes"
"context"
"fmt"
"strings"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"golang.org/x/crypto/openpgp"
"golang.org/x/crypto/openpgp/clearsign"
)
var ncwPublicKeyPGP = `-----BEGIN PGP PUBLIC KEY BLOCK-----
mQGiBDuy3V0RBADVQOAF5aFiCxD3t2h6iAF2WMiaMlgZ6kX2i/u7addNkzX71VU9
7NpI0SnsP5YWt+gEedST6OmFbtLfZWCR4KWn5XnNdjCMNhxaH6WccVqNm4ALPIqT
59uVjkgf8RISmmoNJ1d+2wMWjQTUfwOEmoIgH6n+2MYNUKuctBrwAACflwCg1I1Q
O/prv/5hczdpQCs+fL87DxsD/Rt7pIXvsIOZyQWbIhSvNpGalJuMkW5Jx92UjsE9
1Ipo3Xr6SGRPgW9+NxAZAsiZfCX/19knAyNrN9blwL0rcPDnkhdGwK69kfjF+wq+
QbogRGodbKhqY4v+cMNkKiemBuTQiWPkpKjifwNsD1fNjNKfDP3pJ64Yz7a4fuzV
X1YwBACpKVuEen34lmcX6ziY4jq8rKibKBs4JjQCRO24kYoHDULVe+RS9krQWY5b
e0foDhru4dsKccefK099G+WEzKVCKxupstWkTT/iJwajR8mIqd4AhD0wO9W3MCfV
Ov8ykMDZ7qBWk1DHc87Ep3W1o8t8wq74ifV+HjhhWg8QAylXg7QlTmljayBDcmFp
Zy1Xb29kIDxuaWNrQGNyYWlnLXdvb2QuY29tPohxBBMRCAAxBQsHCgMEAxUDAgMW
AgECF4AWIQT79zfs6firGGBL0qyTk14C/ztU+gUCXjg2UgIZAQAKCRCTk14C/ztU
+lmmAJ4jH5FyULzStjisuTvHLTVz6G44eQCfaR5QGZFPseenE5ic2WeQcBcmtoG5
Ag0EO7LdgRAIAI6QdFBg3/xa1gFKPYy1ihV9eSdGqwWZGJvokWsfCvHy5180tj/v
UNOLAJrdqglMSvevNTXe8bT65D6423AAsLhch9wq/aNqrHolTYABzxRigjcS1//T
yln5naGUzlVQXDVfrDk3Md/NrkdOFj7r/YyMF0+iWwpFz2qAjL95i5wfVZ1kWGrT
2AmivE1wD1sWT/Ja3FDI0NRkU0Nbz/a0TKe4ml8iLVtZXpTRbxxCCPdkHXXgSyu1
eZ4NrF/wTJuvwGn12TJ1EF95aVkHxAUw0+KmLGdcyBG+IKuHamrsjWIAXGXV///K
AxPgUthccQ03HMjltFsrdmen5Q034YM3eOsAAwUH/jAKiIAA8LpZmZPnt9GZ4+Ol
Zp22VAfyfDOFl4Ol+cWjkLAgjAFsm5gnOKcRSE/9XPxnQqkhw7+ZygYuUMgTDJ99
/5IM1UQL3ooS+oFrDaE99S8bLeOe17skcdXcA/K83VqD9m93rQRnbtD+75zqKkZn
9WNFyKCXg5P6PFPdNYRtlQKOcwFR9mHRLUmapQSAM8Y2pCgALZ7GViKQca8/TT1T
gZk9fJMZYGez+IlOPxTJxjn80+vywk4/wdIWSiQj+8u5RzT9sjmm77wbMVNGRqYd
W/EemW9Zz9vi0CIvJGgbPMqcuxw8e/5lnuQ6Mi3uDR0P2RNIAhFrdZpVSME8xQaI
RgQYEQIABgUCO7LdgQAKCRCTk14C/ztU+mLBAKC2cdFy7eLaQAvyzcE2VK6HVIjn
JACguA00bxLQuJ4+RCJrLFZP8ZlN2sc=
=TtR5
-----END PGP PUBLIC KEY BLOCK-----`
func verifyHashsum(ctx context.Context, siteURL, version, archive string, hash []byte) error {
sumsURL := fmt.Sprintf("%s/%s/SHA256SUMS", siteURL, version)
sumsBuf, err := downloadFile(ctx, sumsURL)
if err != nil {
return err
}
fs.Debugf(nil, "downloaded hashsum list: %s", sumsURL)
keyRing, err := openpgp.ReadArmoredKeyRing(strings.NewReader(ncwPublicKeyPGP))
if err != nil {
return errors.New("unsupported signing key")
}
block, rest := clearsign.Decode(sumsBuf)
// block.Bytes = block.Bytes[1:] // uncomment to test invalid signature
_, err = openpgp.CheckDetachedSignature(keyRing, bytes.NewReader(block.Bytes), block.ArmoredSignature.Body)
if err != nil || len(rest) > 0 {
return errors.New("invalid hashsum signature")
}
wantHash, err := findFileHash(sumsBuf, archive)
if err != nil {
return err
}
if !bytes.Equal(hash, wantHash) {
return fmt.Errorf("archive hash mismatch: want %02x vs got %02x", wantHash, hash)
}
return nil
}
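
The clearsign round trip used by `verifyHashsum` can be demonstrated with a throwaway key. Note the `len(rest) > 0` check above: trailing data outside the signed block would otherwise go unnoticed. A sketch with a generated key and a fake sums line:

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/openpgp"
	"golang.org/x/crypto/openpgp/clearsign"
)

func main() {
	// Generate a throwaway key pair (stand-in for the real signing key).
	entity, err := openpgp.NewEntity("Example", "", "example@example.com", nil)
	if err != nil {
		panic(err)
	}
	// Clear-sign a fake SHA256SUMS body.
	var signed bytes.Buffer
	w, err := clearsign.Encode(&signed, entity.PrivateKey, nil)
	if err != nil {
		panic(err)
	}
	_, _ = w.Write([]byte("aa11bb22  rclone-v9.99.9-linux-amd64.zip\n"))
	_ = w.Close()

	// Verify the same way verifyHashsum does.
	block, rest := clearsign.Decode(signed.Bytes())
	_, err = openpgp.CheckDetachedSignature(
		openpgp.EntityList{entity},
		bytes.NewReader(block.Bytes),
		block.ArmoredSignature.Body,
	)
	fmt.Println("leftover bytes:", len(rest), "verify err:", err)
}
```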


@@ -0,0 +1,12 @@
// +build !windows,!plan9,!js
// +build !noselfupdate
package selfupdate
import (
"golang.org/x/sys/unix"
)
func writable(path string) bool {
return unix.Access(path, unix.W_OK) == nil
}


@@ -0,0 +1,8 @@
// +build plan9 js
// +build !noselfupdate
package selfupdate
func writable(path string) bool {
return true
}


@@ -0,0 +1,17 @@
// +build windows
// +build !noselfupdate
package selfupdate
import (
"os"
)
func writable(path string) bool {
info, err := os.Stat(path)
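// 128 == 0o200, the owner-write bit of the Unix-style permission bits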
const UserWritableBit = 128
if err == nil {
return info.Mode().Perm()&UserWritableBit != 0
}
return false
}


@@ -0,0 +1,5 @@
// +build noselfupdate
package cmd
const selfupdateEnabled = false


@@ -0,0 +1,7 @@
// +build !noselfupdate
package cmd
// This constant must be in the `cmd` package rather than `cmd/selfupdate`
// to prevent build failure due to dependency loop.
const selfupdateEnabled = true


@@ -13,12 +13,12 @@ import (
"github.com/anacrolix/dms/soap"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/vfs"
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/cmd/serve/dlna/dlnaflags"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -41,7 +41,7 @@ func startServer(t *testing.T, f fs.Fs) {
}
func TestInit(t *testing.T) {
config.LoadConfig(context.Background())
configfile.LoadConfig(context.Background())
f, err := fs.NewFs(context.Background(), "testdata/files")
l, _ := f.List(context.Background(), "")


@@ -12,7 +12,7 @@ import (
_ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/cmd/serve/httplib"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/rclone/rclone/fs/filter"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -61,7 +61,7 @@ var (
func TestInit(t *testing.T) {
ctx := context.Background()
// Configure the remote
config.LoadConfig(context.Background())
configfile.LoadConfig(context.Background())
// fs.Config.LogLevel = fs.LogLevelDebug
// fs.Config.DumpHeaders = true
// fs.Config.DumpBodies = true


@@ -1,6 +1,7 @@
package restic
import (
"context"
"crypto/rand"
"encoding/hex"
"io"
@@ -12,6 +13,7 @@ import (
"github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve/httplib/httpflags"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/stretchr/testify/require"
)
@@ -63,6 +65,8 @@ func createOverwriteDeleteSeq(t testing.TB, path string) []TestRequest {
// TestResticHandler runs tests on the restic handler code, especially in append-only mode.
func TestResticHandler(t *testing.T) {
ctx := context.Background()
configfile.LoadConfig(ctx)
buf := make([]byte, 32)
_, err := io.ReadFull(rand.Reader, buf)
require.NoError(t, err)


@@ -27,7 +27,8 @@ var commandDefinition = &cobra.Command{
Sync the source to the destination, changing the destination
only. Doesn't transfer unchanged files, testing by size and
modification time or MD5SUM. Destination is updated to match
source, including deleting files if necessary.
source, including deleting files if necessary (except duplicate
objects, see below).
**Important**: Since this can cause data loss, test first with the
` + "`--dry-run` or the `--interactive`/`-i`" + ` flag.
@@ -35,7 +36,8 @@ source, including deleting files if necessary.
rclone sync -i SOURCE remote:DESTINATION
Note that files in the destination won't be deleted if there were any
errors at any point.
errors at any point. Duplicate objects (files with the same name, on
those providers that support it) are also not yet handled.
It is always the contents of the directory that is synced, not the
directory, so when source:path is a directory, it's the contents of
@@ -46,6 +48,9 @@ If dest:path doesn't exist, it is created and the source:path contents
go there.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
**Note**: Use the ` + "`rclone dedupe`" + ` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info.
`,
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(2, 2, command, args)


@@ -29,14 +29,24 @@ var commandDefinition = &cobra.Command{
Use: "version",
Short: `Show the version number.`,
Long: `
Show the version number, the go version and the architecture.
Show the rclone version number, the go version, the build target
OS and architecture, the runtime OS and kernel version and bitness,
build tags and the type of executable (static or dynamic).
Eg
For example:
$ rclone version
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
rclone v1.55.0
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-136-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16
- go/linking: static
- go/tags: none
Note: before rclone version 1.55 the os/type and os/arch lines were merged,
and the "go/version" line was tagged as "go version".
If you supply the --check flag, then it will do an online check to
compare your version with the latest release and the latest beta.
@@ -59,7 +69,7 @@ Or
Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(0, 0, command, args)
if check {
checkVersion()
CheckVersion()
} else {
cmd.ShowVersion()
}
@@ -74,8 +84,8 @@ func stripV(s string) string {
return s
}
// getVersion gets the version by checking the download repository passed in
func getVersion(url string) (v *semver.Version, vs string, date time.Time, err error) {
// GetVersion gets the version available for download
func GetVersion(url string) (v *semver.Version, vs string, date time.Time, err error) {
resp, err := http.Get(url)
if err != nil {
return v, vs, date, err
@@ -89,9 +99,7 @@ func getVersion(url string) (v *semver.Version, vs string, date time.Time, err e
return v, vs, date, err
}
vs = strings.TrimSpace(string(bodyBytes))
if strings.HasPrefix(vs, "rclone ") {
vs = vs[7:]
}
vs = strings.TrimPrefix(vs, "rclone ")
vs = strings.TrimRight(vs, "β")
date, err = http.ParseTime(resp.Header.Get("Last-Modified"))
if err != nil {
@@ -101,9 +109,8 @@ func getVersion(url string) (v *semver.Version, vs string, date time.Time, err e
return v, vs, date, err
}
// check the current version against available versions
func checkVersion() {
// Get Current version
// CheckVersion checks the installed version against available downloads
func CheckVersion() {
vCurrent, err := semver.NewVersion(stripV(fs.Version))
if err != nil {
fs.Errorf(nil, "Failed to parse version: %v", err)
@@ -111,7 +118,7 @@ func checkVersion() {
const timeFormat = "2006-01-02"
printVersion := func(what, url string) {
v, vs, t, err := getVersion(url + "version.txt")
v, vs, t, err := GetVersion(url + "version.txt")
if err != nil {
fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err)
return
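
To see what `GetVersion` does to the body of version.txt, here is a small sketch (the sample input is invented; it uses the same go-semver library, after stripping the leading "v" as `stripV` does):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/coreos/go-semver/semver"
)

func main() {
	vs := "rclone v1.55.0" // sample body of version.txt
	vs = strings.TrimSpace(vs)
	vs = strings.TrimPrefix(vs, "rclone ")
	vs = strings.TrimRight(vs, "β") // beta versions carry a trailing marker
	v, err := semver.NewVersion(strings.TrimPrefix(vs, "v")) // as stripV does
	fmt.Println(v, err) // 1.55.0 <nil>
}
```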


@@ -473,3 +473,9 @@ put them back in again.` >}}
* tYYGH <tYYGH@users.noreply.github.com>
* georne <77802995+georne@users.noreply.github.com>
* Maxwell Calman <mcalman@MacBook-Pro.local>
* Naveen Honest Raj <naveendurai19@gmail.com>
* Lucas Messenger <lmesseng@cisco.com>
* Manish Kumar <krmanish260@gmail.com>
* x0b <x0bdev@gmail.com>
* CERN through the CS3MESH4EOSC Project
* Nick Gaya <nicholasgaya+github@gmail.com>


@@ -392,6 +392,22 @@ See: the [encoding section in the overview](/overview/#encoding) for more info.
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
Public access level of a container: blob, container.
- Config: public_access
- Env Var: RCLONE_AZUREBLOB_PUBLIC_ACCESS
- Type: string
- Default: ""
- Examples:
- ""
- The container and its blobs can be accessed only with an authorized request. It's the default value.
- "blob"
- Blob data within this container can be read via anonymous request.
- "container"
- Allow full public read access for container and blob data.
{{< rem autogenerated options stop >}}
### Limitations ###


@@ -5,6 +5,160 @@ description: "Rclone Changelog"
# Changelog
## v1.55.0 - 2021-03-31
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.55.0)
* New commands
* [selfupdate](/commands/rclone_selfupdate/) (Ivan Andreev)
* Allows rclone to update itself in-place or via a package (using `--package` flag)
* Verifies cryptographically signed signatures for non-beta releases
* Works on all OSes.
* [test](/commands/rclone_test/) - these are test commands - use with care!
* `histogram` - Makes a histogram of file name characters.
* `info` - Discovers file name or other limitations for paths.
* `makefiles` - Make a random file hierarchy for testing.
* `memory` - Load all the objects at remote:path into memory and report memory stats.
* New Features
* [Connection strings](/docs/#connection-strings)
* Config parameters can now be passed as part of the remote name as a connection string.
* For example to do the equivalent of `--drive-shared-with-me` use `drive,shared_with_me:`
* Make sure we don't save on the fly remote config to the config file (Nick Craig-Wood)
* Make sure backends with additional config have a different name for caching (Nick Craig-Wood)
* This work was sponsored by CERN, through the [CS3MESH4EOSC Project](https://cs3mesh4eosc.eu/).
* CS3MESH4EOSC has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement no. 863353.
* build
* Update go build version to go1.16 and raise minimum go version to go1.13 (Nick Craig-Wood)
* Make a macOS ARM64 build to support Apple Silicon (Nick Craig-Wood)
* Install macfuse 4.x instead of osxfuse 3.x (Nick Craig-Wood)
* Use `GO386=softfloat` instead of deprecated `GO386=387` for 386 builds (Nick Craig-Wood)
* Disable IOS builds for the time being (Nick Craig-Wood)
* Android builds made with an up to date NDK (x0b)
* Add an rclone user to the Docker image but don't use it by default (cynthia kwok)
* dedupe: Make largest directory primary to minimize data moved (Saksham Khanna)
* config
* Wrap config library in an interface (Fionera)
* Make config file system pluggable (Nick Craig-Wood)
* `--config ""` or `"/notfound"` for in memory config only (Nick Craig-Wood)
* Clear fs cache of stale entries when altering config (Nick Craig-Wood)
* copyurl: Add option to print resulting auto-filename (albertony)
* delete: Make `--rmdirs` obey the filters (Nick Craig-Wood)
* docs - many fixes and reworks from edwardxml, albertony, pvalls, Ivan Andreev, Evan Harris, buengese, Alexey Tabakman
* encoder/filename - add SCSU as tables (Klaus Post)
* Add multiple paths support to `--compare-dest` and `--copy-dest` flag (K265)
* filter: Make `--exclude "dir/"` equivalent to `--exclude "dir/**"` (Nick Craig-Wood)
* fshttp: Add DSCP support with `--dscp` for QoS with differentiated services (Max Sum)
* lib/cache: Add Delete and DeletePrefix methods (Nick Craig-Wood)
* lib/file
* Make pre-allocate detect disk full errors and return them (Nick Craig-Wood)
* Don't run preallocate concurrently (Nick Craig-Wood)
* Retry preallocate on EINTR (Nick Craig-Wood)
* operations: Made copy and sync operations obey a RetryAfterError (Ankur Gupta)
* rc
* Add string alternatives for setting options over the rc (Nick Craig-Wood)
* Add `options/local` to see the options configured in the context (Nick Craig-Wood)
* Add `_config` parameter to set global config for just this rc call (Nick Craig-Wood)
* Implement passing filter config with `_filter` parameter (Nick Craig-Wood)
* Add `fscache/clear` and `fscache/entries` to control the fs cache (Nick Craig-Wood)
* Avoid +Inf value for speed in `core/stats` (albertony)
* Add a full set of stats to `core/stats` (Nick Craig-Wood)
* Allow `fs=` params to be a JSON blob (Nick Craig-Wood)
* rcd: Added systemd notification during the `rclone rcd` command (Naveen Honest Raj)
* rmdirs: Make `--rmdirs` obey the filters (Nick Craig-Wood)
* version: Show build tags and type of executable (Ivan Andreev)
* Bug Fixes
* install.sh: make it fail on download errors (Ivan Andreev)
* Fix excessive retries missing `--max-duration` timeout (Nick Craig-Wood)
* Fix crash when `--low-level-retries=0` (Nick Craig-Wood)
* Fix failed token refresh on mounts created via the rc (Nick Craig-Wood)
* fshttp: Fix bandwidth limiting after bad merge (Nick Craig-Wood)
* lib/atexit
* Unregister interrupt handler once it has fired so users can interrupt again (Nick Craig-Wood)
* Fix occasional failure to unmount with CTRL-C (Nick Craig-Wood)
* Fix deadlock calling Finalise while Run is running (Nick Craig-Wood)
* lib/rest: Fix multipart uploads not stopping on context cancel (Nick Craig-Wood)
* Mount
* Allow mounting to root directory on windows (albertony)
* Improved handling of relative paths on windows (albertony)
* Fix unicode issues with accented characters on macOS (Nick Craig-Wood)
* Docs: document the new FileSecurity option in WinFsp 2021 (albertony)
* Docs: add note about volume path syntax on windows (albertony)
* Fix caching of old directories after renaming them (Nick Craig-Wood)
* Update cgofuse to the latest version to bring in macfuse 4 fix (Nick Craig-Wood)
* VFS
* `--vfs-used-is-size` to report used space using recursive scan (tYYGH)
* Don't set modification time if it was already correct (Nick Craig-Wood)
* Fix Create causing windows explorer to truncate files on CTRL-C CTRL-V (Nick Craig-Wood)
* Fix modtimes not updating when writing via cache (Nick Craig-Wood)
* Fix modtimes changing by fractional seconds after upload (Nick Craig-Wood)
* Fix modtime set if `--vfs-cache-mode writes`/`full` and no write (Nick Craig-Wood)
* Rename files in cache and cancel uploads on directory rename (Nick Craig-Wood)
* Fix directory renaming by renaming dirs cached in memory (Nick Craig-Wood)
* Local
* Add flag `--local-no-preallocate` (David Sze)
* Make `nounc` an advanced option except on Windows (albertony)
* Don't ignore preallocate disk full errors (Nick Craig-Wood)
* Cache
* Add `--fs-cache-expire-duration` to control the fs cache (Nick Craig-Wood)
* Crypt
* Add option to not encrypt data (Vesnyx)
* Log hash ok on upload (albertony)
* Azure Blob
* Add container public access level support. (Manish Kumar)
* B2
* Fix HTML files downloaded via cloudflare (Nick Craig-Wood)
* Box
* Fix transfers getting stuck on token expiry after API change (Nick Craig-Wood)
* Chunker
* Partially implement no-rename transactions (Maxwell Calman)
* Drive
* Don't stop server side copy if couldn't read description (Nick Craig-Wood)
* Pass context on to drive SDK - to help with cancellation (Nick Craig-Wood)
* Dropbox
* Add polling for changes support (Robert Thomas)
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Raise priority of rate limited message to INFO to make it more noticeable (Nick Craig-Wood)
* Fichier
* Implement copy & move (buengese)
* Implement public link (buengese)
* FTP
* Implement Shutdown method (Nick Craig-Wood)
* Close idle connections after `--ftp-idle-timeout` (1m by default) (Nick Craig-Wood)
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Add `--ftp-close-timeout` flag for use with awkward ftp servers (Nick Craig-Wood)
* Retry connections and logins on 421 errors (Nick Craig-Wood)
* Hdfs
* Fix permissions for when directory is created (Lucas Messenger)
* Onedrive
* Make `--timeout 0` work properly (Nick Craig-Wood)
* S3
* Fix `--s3-profile` which wasn't working (Nick Craig-Wood)
* SFTP
* Close idle connections after `--sftp-idle-timeout` (1m by default) (Nick Craig-Wood)
* Fix "file not found" errors for read once servers (Nick Craig-Wood)
* Fix SetModTime stat failed: object not found with `--sftp-set-modtime=false` (Nick Craig-Wood)
* Swift
* Update github.com/ncw/swift to v2.0.0 (Nick Craig-Wood)
* Implement copying large objects (nguyenhuuluan434)
* Union
* Fix crash when using epff policy (Nick Craig-Wood)
* Fix union attempting to update files on a read only file system (Nick Craig-Wood)
* Refactor to use fspath.SplitFs instead of fs.ParseRemote (Nick Craig-Wood)
* Fix initialisation broken in refactor (Nick Craig-Wood)
* WebDAV
* Add support for sharepoint with NTLM authentication (Rauno Ots)
* Make sharepoint-ntlm docs more consistent (Alex Chen)
* Improve terminology in sharepoint-ntlm docs (Ivan Andreev)
* Disable HTTP/2 for NTLM authentication (georne)
* Fix sharepoint-ntlm error 401 for parallel actions (Ivan Andreev)
* Check that purged directory really exists (Ivan Andreev)
* Yandex
* Make `--timeout 0` work properly (Nick Craig-Wood)
* Zoho
* Replace client id - you will need to `rclone config reconnect` after this (buengese)
* Add forgotten setupRegion() to NewFs - this finally fixes regions other than EU (buengese)
## v1.54.1 - 2021-03-08
[See commits](https://github.com/rclone/rclone/compare/v1.54.0...v1.54.1)


@@ -416,4 +416,27 @@ Choose how chunker should handle files with missing or invalid chunks.
- "false"
- Warn user, skip incomplete file and proceed.
#### --chunker-transactions
Choose how chunker should handle temporary files during transactions.
- Config: transactions
- Env Var: RCLONE_CHUNKER_TRANSACTIONS
- Type: string
- Default: "rename"
- Examples:
- "rename"
- Rename temporary files after a successful transaction.
- "norename"
- Leave temporary file names and write transaction ID to metadata file.
- Metadata is required for no rename transactions (meta format cannot be "none").
- If you are using norename transactions you should be careful not to downgrade Rclone
- as older versions of Rclone don't support this transaction style and will misinterpret
- files manipulated by norename transactions.
- This method is EXPERIMENTAL, don't use on production systems.
- "auto"
- Rename or norename will be used depending on capabilities of the backend.
- If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems.
{{< rem autogenerated options stop >}}


@@ -72,11 +72,13 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone rcd](/commands/rclone_rcd/) - Run rclone listening to remote control commands only.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the empty directory at path.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
* [rclone selfupdate](/commands/rclone_selfupdate/) - Update the rclone binary.
* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone settier](/commands/rclone_settier/) - Changes storage class/tier of objects in remote.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces a sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone test](/commands/rclone_test/) - Run a test command
* [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.


@@ -15,15 +15,16 @@ Copy url content to dest.
Download a URL's content and copy it to the destination without saving
it in temporary storage.
Setting --auto-filename will cause the file name to be retrieved from
Setting `--auto-filename` will cause the file name to be retrieved from
the URL (after any redirections) and used in the destination
path.
path. With `--print-filename` in addition, the resulting file name will
be printed.
Setting --no-clobber will prevent overwriting file on the
Setting `--no-clobber` will prevent overwriting file on the
destination if there is one with the same name.
Setting --stdout or making the output file name "-" will cause the
output to be written to standard output.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
```
@@ -33,10 +34,11 @@ rclone copyurl https://example.com dest:path [flags]
## Options
```
-a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
--stdout Write the output to stdout rather than a file
-a, --auto-filename Get the file name from the URL and use it for destination file path
-h, --help help for copyurl
--no-clobber Prevent overwriting file with same name
-p, --print-filename Print the resulting name from --auto-filename
--stdout Write the output to stdout rather than a file
```
See the [global flags page](/flags/) for global options not listed here.


@@ -17,8 +17,8 @@ By default `dedupe` interactively finds files with duplicate
names and offers to delete all but one or rename them to be
different. This is known as deduping by name.
Deduping by name is only useful with backends like Google Drive which
can have duplicate file names. It can be run on wrapping backends
Deduping by name is only useful with a small group of backends (e.g. Google Drive,
Opendrive) that can have duplicate file names. It can be run on wrapping backends
(e.g. crypt) if they wrap a backend which supports duplicate file
names.

Some files were not shown because too many files have changed in this diff Show More