It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.
This turned out to be a race condition updating the block count which
caused blocks to be duplicated.
This bug was introduced in this commit in v1.64.0 and will be fixed in v1.65.2
0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056
This race seems to happen mainly if `--checksum` is used, but it can happen otherwise.
Unfortunately Azure blob does not check the MD5 that we send it, so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:
ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"
This adds a check to test the blocklist we upload is as we expected
which would have caught the problem had it been in place earlier.
Before this change the VFS cache could get into a state where when an
object was updated remotely, the fingerprint of the item was correct
for the new object but the data in the VFS cache was for the old
object.
This fixes the problem by updating the fingerprint of the item at the
point we remove the stale data. The empty cache item now represents
the new item even though it has no data in.
This stops the fallback code for an empty fingerprint running (used
when we are writing items to the cache instead of reading them) which
was causing the problem.
Fixes #6053
See: https://forum.rclone.org/t/cached-webdav-mount-fingerprints-get-nuked-on-ls/43974/
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
like it should do.
This change introduces a new primitive operations.MoveTransfers which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.
See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
Before this fix we were not counting transferred files or transferred
bytes for server side moves/copies.
If the server side move/copy has been marked as a transfer and not a
checker then this accounts transferred files and transferred bytes.
The transferred bytes are not accounted to the network though so this
should not affect the network stats.
Before this change, listing a subdirectory gave errors like this:
Entry doesn't belong in directory "" (contains subdir) - ignoring
It also did full recursive listings when it didn't need to.
This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.
Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.
Fixes #7500
Unexpectedly the team which runs the Go docker images has removed the
arm/v6 image which means that the rclone docker images no longer
build.
One of the recommended fixes is what we've done here - switch to the
alpine builder. This has the advantage that it actually builds arm/v6
architecture, unlike the previous builder which built arm/v5.
See: https://github.com/docker-library/golang/issues/502
The field `raw` of `oauth2.Token` may be an uncomparable type (often `map[string]interface{}`), causing the `*token != *ts.token` expression to panic (comparing uncomparable type ...).
The semantics of comparing whether two tokens are the same can be achieved by comparing accessToken, refreshToken and expiry, which avoids the panic.
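A minimal sketch of the field-wise comparison (helper name and package placement are hypothetical):
```
package oauthutil

import "golang.org/x/oauth2"

// tokensEqual compares the fields that define a token's identity rather
// than the whole struct, whose raw field may be an uncomparable map.
func tokensEqual(a, b *oauth2.Token) bool {
	if a == nil || b == nil {
		return a == b
	}
	return a.AccessToken == b.AccessToken &&
		a.RefreshToken == b.RefreshToken &&
		a.Expiry.Equal(b.Expiry)
}
```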
Before this change multi-thread copies using the FTP backend used to error with
551 Error reading file
This was caused by a spurious error being reported which this code silences.
Fixes #7532
See #3942
When f.opt.MaxAge == 0, f.db is never set; however, several methods later assume
it is set and attempt to access it, causing an invalid memory address error.
This change fixes the issue in a few spots (there may still be others I haven't
yet encountered.)
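A hypothetical sketch of the shape of the fix, with stand-in types:
```
package hasher

import "errors"

// db is a stand-in for the real key-value database type.
type db struct{}

// Fs is a trimmed-down stand-in for the backend struct; db stays nil
// when max_age is 0 because the database is never opened.
type Fs struct {
	db *db
}

// dump is a hypothetical method showing the shape of the fix: check
// f.db before use instead of dereferencing a nil pointer.
func (f *Fs) dump() error {
	if f.db == nil {
		return errors.New("hasher: database not open (is max_age 0?)")
	}
	// ... read entries from f.db ...
	return nil
}
```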
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.
Before this change it returned a root which included the file path.
Because Root() was wrong this caused the detection of the file being
moved over itself check to fail.
This adds an integration test to check it for all backends.
See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount
- improve error reporting
- fix option handling
This error was introduced in this commit when refactoring the list
routine.
b8591b230d onedrive: implement ListR method which gives --fast-list support
The error was caused by OneNote files not being skipped properly.
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library now can pass the hostname instead.
The update requires a small change in the dial method call.
Fixes rclone#6672
Before this change ListR was unconditionally enabled on onedrive.
This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.
Fixes #7362
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.
Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.
This problem was detected by the new TestMultithreadCopy integration
tests
Fixes #7411
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.
This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.
The new library passes the integration tests.
Fixes #6733
Before this change overwriting an existing file with a 0 length file
didn't update the file size.
This change corrects the issue and makes sure the file is truncated
properly.
This was discovered by the full integration tests.
Before this change serve s3 would return NoSuchKey errors when a non
existent prefix was listed.
This change fixes it to return an empty list like AWS does.
This was discovered by the full integration tests.
The following command will block for 60s (the default) when the network is slow or unavailable:
```
rclone --contimeout 10s --low-level-retries 0 lsd dropbox:
```
This change will make it timeout after the expected 10s.
Signed-off-by: rkonfj <rkonfj@gmail.com>
Before this change, streaming files an exact multiple of the chunk
size would cause rclone to attempt to stream a 0 sized chunk which was
rejected by the b2 servers.
This bug was noticed by the new integration tests for chunked streaming.
Before this change the b2 servers would complain as this was only a
single part transfer.
This was noticed by the new integration tests for server side chunked copy.
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.
This caused the partial file to be written which would overwrite any
existing files.
This was fixed by making sure we Abort the transfer before Close-ing
it.
This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.
This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).
Fixes #7071
This commit fixed the problem but made the integration tests fail.
33376bf399 dropbox: fix missing encoding for rclone purge
This fixes the problem properly by making sure we send the encoded or
non encoded root to the right places.
The free account has a very ungenerous 1000 API calls per day limit
and the full integration test suite breaches that so limit the
integration tests to just the backend.
For uploads which are coming from disk or going to disk or going to a
backend which doesn't need to seek except for retries this doesn't
buffer the input.
This dramatically reduces rclone's memory usage.
Fixes #7350
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.
The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.
This became obvious when using the sftp backend which used too many
connections.
Fixes #5824
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.
This patch changes it so the Metadata support is enabled dynamically
if it is set in the context.
This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.
This change skips btime metadata if we are updating an existing
object but allows it otherwise.
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work, controlled by flags
- `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism
Before this change if a backend can't upload 0 length files and
`--vfs-cache-mode writes` was in use then the writeback logic would
try to upload the 0 length file forever.
This change causes it to exit on the first failure to upload.
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size`
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
- About
- Copy
- DirMove
- Move
- OpenWriterAt
- PutStream
- finish documentation
- disable build on plan9 and js
Fixes #365
Fixes #7378
- Changes
- Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
- Enable `Content-MD5` integrity checks
- Remove locking after code audit
- Documentation
- Factor out documentation into separate file
- Add Quickstart to docs
- Add Bugs section to docs
- Add experimental tag to docs
- Add rclone provider to s3 backend docs
- Fixes
- Correct quirks in s3 backend
- Change fmt.Printlns into fs.Logs
- Make metadata storage per backend not global
- Log on startup if anonymous access is enabled
- Coding style fixes
- rename fs to vfs to save confusion with the rest of rclone code
- rename db to b for *s3Backend
Fixes #7062
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
In this security related issue the go1.21.4 stdlib changed the parsing
of volume names on Windows.
https://github.com/golang/go/issues/63713
This had the consequences of breaking the MkdirAll tests which were
looking for specific error messages which changed and using invalid
paths.
In particular under go1.21.3:
filepath.VolumeName(`\\?\C:`) == `\\?\C:`
But under go1.21.4 it is:
filepath.VolumeName(`\\?\C:`) == `\\?`
The path `\\?\C:` isn't actually a valid Windows path. I reported this
as a FYI bug upstream - I'm not expecting it to be fixed.
See: https://github.com/golang/go/issues/64101
Users can now input a comma separated list of namenodes when writing
config for hdfs remotes.
This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.
This was available before but wasn't documented and didn't use the
correct rclone interfaces.
This makes it easier to add resources with any build method, and also when
building librclone.dll.
Goversioninfo is now used as a library, instead of running it as a tool.
After the copy refactor:
179f978f75 operations: refactor Copy into methods on an temporary object
There was some confusion in the code about server side copies - should
they or shouldn't they use partials?
This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.
This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
The upstream library rclone uses for rclone mount no longer supports
freebsd. Not only is it broken, but it no longer compiles.
This patch disables rclone mount for freebsd.
However all is not lost for freebsd users - compiling rclone with the
`cmount` tag, so `go install -tags cmount` will install a working
`rclone mount` command which uses cgofuse and the libfuse C library
directly.
Note that the binaries from rclone.org will not have mount support as
we don't have a freebsd build machine in CI and it is very hard to
cross compile cmount.
See: https://github.com/bazil/fuse/issues/280
Fixes #5843
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:
Couldn't decode error response: EOF
This is confusing as it is expected in normal operations and isn't an
error.
This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.
This also upgrades failure to read the body or failure to decode the
JSON to ERROR messages as now we are certain that we should have
something to read and decode.
ncdu stores the position that it was in for each directory. However
doing a rescan can cause those positions to be out of range if the
number of files decreased in a directory. When re-entering the
directory, this causes an index out of range error.
This fixes the problem by detecting the index out of range and
flushing the saved directory position.
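A sketch of the guard, with hypothetical names:
```
// restorePos returns a safe cursor position for a directory, flushing
// any saved position that no longer fits after a rescan.
func restorePos(saved map[string]int, dir string, numEntries int) int {
	pos, ok := saved[dir]
	if !ok {
		return 0
	}
	if pos >= numEntries {
		delete(saved, dir) // stale position from before the rescan
		return 0
	}
	return pos
}
```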
See: https://forum.rclone.org/t/slice-bounds-out-of-range-during-ncdu/42492/
For some files the Windows Volume Shadow Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.
Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.
This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we now can copy it
successfully.
See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.
However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and then the
`change-usage` subcommand to remove, save, quit, then re-add, save,
quit the signing capabilities caused the key to work.
This also adds tests for the verification and adds the selfupdate
tests into the integration test harness as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.
Fixes #7373
This makes the following Go strings functions available to be used in custom templates: contains, hasPrefix, hasSuffix.
It also adds documentation for the exported funcs.
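For illustration, this is roughly how such helpers are registered with Go's template packages (the template text and names here are examples, not rclone's actual templates):
```
package main

import (
	"html/template"
	"os"
	"strings"
)

func main() {
	// Register the Go strings helpers under the names used in templates.
	tmpl := template.Must(template.New("page").Funcs(template.FuncMap{
		"contains":  strings.Contains,
		"hasPrefix": strings.HasPrefix,
		"hasSuffix": strings.HasSuffix,
	}).Parse(`{{ if hasSuffix .Name ".txt" }}text file{{ end }}`))
	_ = tmpl.Execute(os.Stdout, struct{ Name string }{Name: "notes.txt"})
}
```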
With automount the target mount drive appears twice in /proc/self/mountinfo.
379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...
Before this fix we only looked for the mount once in
/proc/self/mountinfo. It finds the automount line and since this
doesn't have fs type rclone it concludes the mount isn't ready yet.
This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.
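A simplified sketch of the check (package placement and error handling reduced to the essentials):
```
package mountlib

import (
	"os"
	"strings"
)

// mountReady reports whether mountPoint appears in /proc/self/mountinfo
// with filesystem type fuse.rclone, checking every line rather than
// stopping at the first match (which may be the autofs entry).
func mountReady(mountPoint string) (bool, error) {
	data, err := os.ReadFile("/proc/self/mountinfo")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.Contains(line, " "+mountPoint+" ") &&
			strings.Contains(line, "fuse.rclone") {
			return true, nil
		}
	}
	return false, nil
}
```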
See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.
This changes any unknown operations (including hardlink) to return an
unsupported error.
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.
After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing the first and the second parts up which
was causing corrupted uploads.
This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.
Fixing this revealed that the metadata wasn't being re-read for the
copied object either.
This fixes both of those issues and adds an integration tests so it
won't happen again.
Fixes #7367
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.
This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.
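A sketch of the corrected boundary arithmetic (names hypothetical): iterating by byte offset rather than by part number makes it impossible to emit a zero length trailing part.
```
// partRanges splits size bytes into chunkSize pieces; looping on the
// byte offset guarantees the final part is never zero length.
func partRanges(size, chunkSize int64) [][2]int64 {
	var parts [][2]int64
	for off := int64(0); off < size; off += chunkSize {
		end := off + chunkSize
		if end > size {
			end = size
		}
		parts = append(parts, [2]int64{off, end})
	}
	return parts
}
```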
Fixing this revealed that the metadata wasn't being re-read for the
copied object either.
This fixes both of those issues and adds an integration tests so it
won't happen again.
See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
Before this change if you tried to create a bucket that already
existed but was owned by someone else, rclone did not return an error.
This now will return an error on providers that return the
AlreadyOwnedByYou error code or no error on bucket creation of an
existing bucket owned by you.
This introduces a new provider quirk and this has been set or cleared
for as many providers as can be tested. This can be overridden by the
--s3-use-already-exists flag.
Fixes #7351
Before this change, attempting to server side copy a google form would
give this error
No export formats found for "application/vnd.google-apps.form"
Adding this flag allows the form to be server side copied but not
downloaded.
Fixes #6302
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it to the target dir.
The NFS server is started on localhost on a random port so it is reasonably secure.
Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```
Added mount tests:
```
go test ./cmd/nfsmount
```
Summary:
Adding a new command to serve any remote over NFS. This is only useful for new macOS versions where FUSE mounts are not available.
* Added willscott/go-nfs dependency and updated go.mod and go.sum
Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```
Test that it is serving correctly by mounting the NFS directory.
```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```
Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
Billy defines a common file system interface that is used in multiple Go packages.
vfs.Handle mostly implements billy.File; only two methods needed to be added to
make it compliant.
An interface check is added as well.
This is a preliminary work for adding serve nfs command.
This fixes a subtle bug where the dir modification time was not updated when the dir
already existed in the cache. It is only noticeable when some clients use dir
modification time to invalidate the cache.
The Name() method was originally left out and defaulted to the base
class, which always returns empty. This triggered incorrect behavior
in serve nfs, which relied on the Name() of the interface to figure
out what file it was modifying.
This method is copied from RWFileHandle struct.
Added extra assert in the tests.
This is almost 100% backwards compatible. The only difference being that
in the rc options/get output DumpMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference being that
in the rc options/get output CacheMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
This is almost 100% backwards compatible. The only difference being that
in the rc options/get output CutoffMode, LogLevel, TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. They still accept integer inputs though so
the fallout from this should be minimal.
Before this change backend types were printing incorrectly as the name
of the type, not what was defined by the Type() method.
This was not working due to not calling the Type() method. However
this needed to be defined on a non-pointer type due to the way the
options are handled.
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`
However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.
The default value for this was accidentally set quite high at 16 which
means by default rclone could use up to 6.4GB of memory!
The new default sets a more reasonable (but still high) max memory of 1.6GB.
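Assuming the defaults of `--transfers 4` and `--b2-chunk-size 96M`, that works out as 4 × 96 MiB × 16 ≈ 6.4 GB before, and 4 × 96 MiB × 4 ≈ 1.6 GB with an upload concurrency of 4.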
Before this change, the lock was held while the upload URL was being
fetched from the server.
This meant that any other threads were blocked from getting upload
URLs unnecessarily.
It also increased the potential for deadlock.
Before this change, the maximum number of connections was set to 10.
This means that b2 could deadlock while uploading multipart uploads
due to a lock being held longer than it should have been.
Most useful is the addition of the file created timestamp, but also a timestamp for
when the file was uploaded.
Currently supporting a rather minimalistic set of metadata items, see PR #6359 for
some thoughts about possible extensions.
In this commit:
5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers
We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:
panic: runtime error: slice bounds out of range [56:55]
The fix is to do the modification of the remote after removing the
prefix.
See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
This was always the intention, it was just implemented wrong.
This shortens the s3 docs by 1369 lines, bringing them down to just
about half the size.
Fixes #7325
Before this change the b2 backend listed all the buckets to turn a
single bucket name into an ID.
However on July 26, 2018 a parameter was added to the list buckets API
which makes listing all the buckets unnecessary.
This code sets the bucketName parameter so that only the results for
the desired bucket are returned.
The improved upload logic is active by default in uplink v1.12.0, so the
`testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)` is not
required anymore.
See https://github.com/rclone/rclone/pull/7198
Before this change uploaded files could return the error "replication
in progress".
This error is harmless though and means the Close should be retried
which is what this patch does.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the hdfs backend and this patch fixes the
problem.
Sometimes opendrive reports "403 Folder is already deleted" on
directories which should exist.
This might be a bug in opendrive or in rclone, however we work around
it here sufficiently to get the tests passing.
ChangeNotify has been broken on the compress backend for a long time!
Before this change it was wrapping the file names received rather than
unwrapping them to discover the original names.
It is likely though that ChangeNotify was working adequately for users,
as the VFS just uses the directories rather than the file names.
Before this change, bisync ignored the dryRun parameter (only when specified
via the rc.)
This change fixes the issue, so that the dryRun rc parameter is equivalent to
the --dry-run flag.
Before this change the concurrency used for an upload was rather
inconsistent.
- if size below `--backend-upload-cutoff` (default 200M) do single part upload.
- if size below `--multi-thread-cutoff` (default 256M) or using streaming
uploads (eg `rclone rcat`) do multipart upload using
`--backend-upload-concurrency` to set the concurrency used by the uploader.
- otherwise do multipart upload using `--multi-thread-streams` to set the
concurrency.
This change makes the default for the concurrency used be the
`--backend-upload-concurrency`. If `--multi-thread-streams` is set and larger
than the `--backend-upload-concurrency` then that will be used instead.
This means that if the user sets `--backend-upload-concurrency` then it will be
obeyed for all multipart/multi-thread transfers and the user can override them
all with `--multi-thread-streams`.
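A sketch of the selection logic (names hypothetical):
```
// uploadConcurrency picks the effective concurrency: the backend's
// upload concurrency, unless --multi-thread-streams is set larger.
func uploadConcurrency(backendConcurrency, multiThreadStreams int) int {
	if multiThreadStreams > backendConcurrency {
		return multiThreadStreams
	}
	return backendConcurrency
}
```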
See: #7056
Before this change, b2 would return an error when opening a link
generated by `rclone link`. The following error occurs when the object
path contains an ampersand that is not percent encoded:
{
"code": "bad_request",
"message": "Bad character in percent-encoded string: 38 (0x26)",
"status": 400
}
If the server returns the MIME type as application/octet-stream we
assume it doesn't really know what the MIME type is. This patch tries
matching the MIME type from the file extension instead in this case.
This enables the use of servers (like OneDrive for Business) which
don't allow the setting of MIME types on upload and have a poor
selection of mime types.
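A sketch of the fallback, using the standard library mime package (helper name and package placement hypothetical):
```
package operations

import (
	"mime"
	"path"
	"strings"
)

// fixMimeType falls back to the extension-derived MIME type when the
// server reports only the generic application/octet-stream.
func fixMimeType(remote, mimeType string) string {
	if strings.HasPrefix(mimeType, "application/octet-stream") {
		if byExt := mime.TypeByExtension(path.Ext(remote)); byExt != "" {
			return byExt
		}
	}
	return mimeType
}
```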
Fixes #7259
Before this change the box backend could make errors like
Error "not_found" (404): On-Behalf-Of User not found ([123 34 105 110
118 97 108 105 100 95 117 115 101 114 95 105 100 34 58 123 34 105 100
34 58 34 48 48 48 48 48 48 48 48 48 48 48 34 125 125])
This fixes it to produce this instead
Error "not_found" (404): On-Behalf-Of User not found ({"invalid_user_id":{"id":"00000000000"}})
In this commit:
75dfdbf211 ci: revert revive settings back to fix lint
We accidentally disabled all the revive linters. Unfortunately setting
the rules clears the default set of rules so it is necessary to
mention all rules that we need.
This implements the OpenChunkWriter interface for b2 which
enables multi-thread uploads.
This makes the memory controls of the b2 backend inoperative; they are
replaced with the global ones.
--b2-memory-pool-flush-time
--b2-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
This implements the OpenChunkWriter interface for azureblob which
enables multi-thread uploads.
This makes the memory controls of the azureblob backend inoperative; they are
replaced with the global ones.
--azureblob-memory-pool-flush-time
--azureblob-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
This makes the memory controls of the s3 backend inoperative; they are
replaced with the global ones.
--s3-memory-pool-flush-time
--s3-memory-pool-use-mmap
By using the buffered reader this fixes excessive memory use when
uploading large files as it will share memory pages between all
readers.
Fixes #7141
- fix docs and error messages for multithread
- use sync/errgroup built in concurrency limiting
- re-arrange multithread code
- don't continue multi-thread uploads if one part fails
In this commit we introduced a race condition when using the auth
proxy.
94a320f23c serve ftp: update to goftp.io/server v2.0.1
This was due to the re-organisation of the upstream library which made
the driver be a singleton rather than per session.
This means that when using the auth proxy we need to keep track of
which VFS to use based on which FTP user is connected.
This also adjusts the locking so that the methods will run
concurrently.
Before this change uploading files with rclone to:
rclone serve sftp --vfs-cache-mode full
Would return the error:
command "md5sum XXX" failed with error: unexpected non file
This patch detects that the file is still in the VFS cache and reads
the MD5SUM from there rather than from the remote.
Fixes #7241
Before this change, when using --cutoff-mode=soft and --max-duration
rclone deadlocked when the cutoff limit was reached.
This was because the sync objects Pipe became full and nothing was
emptying it because the cutoff was reached.
This changes the context for putting items into the pipe to be the one
that gets cancelled when the cutoff is reached.
See: https://forum.rclone.org/t/sync-command-hanging-using-cutoff-mode-soft-with-max-duration-time-flags/40866
As an extra security feature some FTP servers (eg FileZilla) require
that the data connection re-use the same TLS connection as the control
connection. This is a good thing for security.
The message "TLS session of data connection not resumed" means that it
was not done.
The problem turned out to be that rclone was re-using the TLS session
cache between concurrent connections so the resumed TLS data
connection could be from any of the control connections.
This patch makes each TLS connection have its own session cache which
should fix the problem.
This also reverts the ftp library to the upstream version which now
contains all of our patches.
Fixes #7234
I ( @boukendesho ) have volunteered to maintain the snap package so
this adds it back into the installation instructions.
It will set a `snap` tag visible in `rclone version` so we know where
it came from for support queries.
Currently, the average transfer speed will stop calculating 1 minute
after the last queued transfer completes. This causes the average to
stop calculating when checking is slow and the transfer queue becomes
empty.
This change will require all checks to complete before stopping the
average speed calculation.
In this commit:
432d5d1e20 operations: fix overlapping check on case insensitive file systems
We introduced a test that makes no sense. This happens to pass without --fast-list and fails with it.
This removes the test.
From the Go docs:
"A `nil` map is equivalent to an empty map. [1]
Therefore, an additional nil check for `opts.ExtraHeaders` before the loop is
unnecessary because `opts.ExtraHeaders` is a `map`.
[1]: https://go.dev/ref/spec#Map_types
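A minimal demonstration:
```
package main

import "fmt"

func main() {
	var extraHeaders map[string]string // nil, never initialised
	// Safe: ranging over a nil map executes the body zero times.
	for k, v := range extraHeaders {
		fmt.Println(k, v)
	}
}
```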
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Before this change we showed both server side moves and server side
copies as bytes transferred.
This made a nice easy to use stats display, but also caused confusion
for users who saw unrealistic transfer times. It also caused a problem
with --max-transfer and chunker which renames each chunk after
uploading which was counted as a transfer byte.
This patch instead accounts the server side move and copy statistics
as separate lines in the stats display which will only appear if
there are any server side moves / copies. This is also output in the
rc.
This gives users something to look at when transfers are running which
was the point of the original change but it now means that transfer
bytes represents data transfers through this rclone instance only.
Fixes #7183
storj.io/uplink v1.11.0 comes with an improved logic for uploading large
files where file segments are uploaded concurrently instead of serially.
This allows the network connection to be fully utilized during the
entire upload process.
This change enables the new upload logic.
The Swift backend does not always respect the flag telling it to skip
HEADing zero-length objects. This commit fixes that for ls/lsl/lsf.
Swift returns zero length for dynamic large object files when they're
included in a files lookup, which means that determining their size
requires HEADing each file that returns a size of zero. rclone's
--swift-no-large-objects instructs rclone that no large objects are
present and accordingly rclone should not HEAD files that return zero
length.
When rclone is performing an ls / lsf / lsl type lookup, however, it
continues to HEAD any zero length objects it encounters, even with
this flag set. Accordingly, this change causes rclone to respect the
flag in these situations.
NB: It is worth noting that this will cause rclone to incorrectly
report zero length for any dynamic large objects encountered with the
--swift-no-large-objects flag set.
This adds an additional parameter to the creation of each flag. This
specifies one or more flag groups. This **must** be set for global
flags and **must not** be set for local flags.
This causes flags.md to be built with sections to aid comprehension
and it causes the documentation pages for each command (and the
`--help`) to be built showing the flags groups as specified in the
`groups` annotation on the command.
See: https://forum.rclone.org/t/make-docs-for-mortals-not-only-rclone-gurus/39476/
Before this change, rclone always expected --sftp-path-override to be
the absolute SSH path to remote:path/subpath which effectively made it
unusable for wrapped remotes (for example, when used with a crypt
remote, the user would need to provide the full decrypted path.)
After this change, the old behavior remains the default, but dynamic
paths are now also supported, if the user adds '@' as the first
character of --sftp-path-override. Rclone will ignore the '@' and
treat the rest of the string as the path to the SFTP remote's root.
Rclone will then add any relative subpaths automatically (including
unwrapping/decrypting remotes as necessary).
In other words, the path_override config parameter can now be used to
specify the difference between the SSH and SFTP paths. Once specified
in the config, it is no longer necessary to re-specify for each
command.
See: https://forum.rclone.org/t/sftp-path-override-breaks-on-wrapped-remotes/40025
This allows using an external ssh binary instead of the built in ssh
library for making SFTP connections.
This makes another integration test target TestSFTPRcloneSSH.
Fixes #7012
Some changes about test cases:
Because MiddlewareCORS will return early on OPTIONS request,
this middleware should only be used once at NewServer function.
Test cases should pass AllowOrigin config instead of adding
this middleware again.
A new test case was added to test CORS preflight requests with
an authenticator. Preflight requests should always return 200 OK
regardless of authentication.
Co-authored-by: yuudi <yuudi@users.noreply.github.com>
Before this change we released the ssh connection back to the pool
before the upload was finished.
This meant that uploads were re-using the same ssh connection which
reduced throughput.
This releases the ssh connection back to the pool only after the
upload has finished, or on error state.
See: https://forum.rclone.org/t/sftp-backend-opens-less-connection-than-expected/40245
This changes hasVirtual to an atomic struct variable that's updated on
add or delete from the virtual map.
This keeps it up to date and avoids deadlocks.
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
smb2.File implements the WriterAtCloser interface defined in
fs/types.go. Expose it via an OpenWriterAt method on
the fs struct to support multi-threaded writes.
As many people are suddenly moving to Box (another "unlimited" cloud storage migration saga) there are frequent questions about which crypt file name encoding to use.
Box is base32768 friendly.
It has been tested with:
https://pub.rclone.org/base32768.zip
and:
rclone test info --check-length boxremote:
maxFileLength = 255 // for 1 byte unicode characters
maxFileLength = 255 // for 2 byte unicode characters
maxFileLength = 255 // for 3 byte unicode characters
maxFileLength = -1 // for 4 byte unicode characters
Before this change, the overlapping check could erroneously give this
error on case insensitive file systems:
Failed to sync: destination and parameter to --backup-dir mustn't overlap
The code was fixed and re-worked to be simpler and more reliable.
See: https://forum.rclone.org/t/backup-dir-cannot-be-in-root-even-when-excluded/39844/
The error is:
Error: failed to configure token with jwt authentication: jwtutil: failed making auth request: 400 Bad Request
With the following additional debug information:
jwtutil: Response Body: {"error":"invalid_grant","error_description":"Please check the 'aud' claim. Should be a string"}
Problem is that in jwt-go the RegisteredClaims type has an Audience field (aud claim) that
is a list, while box apparently expects it to be a singular string. In jwt-go v4, which we
currently use, there is an alternative type StandardClaims which matches what box wants.
Unfortunately StandardClaims is marked as deprecated, and is removed in the
newer v5 version, so this is a short-term fix only.
Fixes #7114
Before this change the new partial downloads code was causing symlinks
to be copied as regular files.
This was because the partial isn't named .rclonelink so the local
backend saves it as a normal file and renaming it to .rclonelink
doesn't cause it to become a symlink.
This fixes the problem by not copying .rclonelink files using the
partials mechanism but reverting to the previous --inplace behaviour.
This could potentially be fixed better in the future by changing the
local backend Move to change files to and from symlinks depending on
their name. However this was deemed too complicated for a point
release.
This also adds a test in the local backend. This test should ideally
be in operations but it isn't easy to put it there as operations knows
nothing of symlinks.
Fixes #7101
See: https://forum.rclone.org/t/reggression-in-v1-63-0-links-drops-the-rclonelink-extension/39483
Before this change if a directory entry could be listed but not
lstat-ed then rclone would give an error and abort the directory
listing with the error
failed to read directory entry: failed to read directory "XXX": lstat XXX
This change makes sure that the directory listing carries on even
after this kind of error.
The sync will fail but it will carry on.
This problem was caused by a programming error setting the err
variable in an outer scope when it should have been using a local err
variable.
See: https://forum.rclone.org/t/sync-aborts-if-even-one-single-unreadable-folder-is-encountered/39653
Before this change, if you mounted the root of the smb then it would
give an error on rclone about and periodically in the mount logs:
Statfs failed: bucket or container name is needed in remote
This fix makes the smb backend return empty usage in this case which
will stop the errors and show the default 1P of free space.
See: https://forum.rclone.org/t/error-statfs-failed-bucket-or-container-name-is-needed-in-remote/39631
This sentence was written at the time when the backend used an access token; nowadays, users need to generate and use an application password instead, see #6398.
This introduces a new fs.Option flag, Sensitive and uses this along
with IsPassword to redact the info in the config file for support
purposes.
It adds this flag into backends where appropriate. It was necessary to
add oauthutil.SharedOptions to some backends as they were missing
them.
Fixes #5209
Fixes https://github.com/rclone/rclone/issues/7103
Before this change the RegExp validating the endpoint URL was a bit
too strict allowing only /dav/files/USER due to chunking limitations.
This patch adds back support for /dav/files/USER/dir/subdir etc.
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The --progress flag overrides operations.SyncPrintf in order to do its
magic on stdout without interfering with other output.
Before this change the syncFprintf routine in operations (which is
used to print all output to stdout) was taking the
operations.StdoutMutex and the printProgress function in the
--progress routine was also attempting to take the same mutex causing
a deadlock.
This patch fixes the problem by moving the locking from the
syncFprintf function to SyncPrintf. It is then up to the function
overriding this to lock the StdoutMutex. This ensures the StdoutMutex
can never cause a deadlock.
Before this change if using --fast-list on a directory with more than
a few thousand directories in it DirTree.CheckParents became very slow
taking up to 24 hours for a directory with 1,000,000 directories in
it.
This is because it becomes an O(N²) operation as DirTree.Find has to
search each directory in a linear fashion as it is stored as a slice.
This patch fixes the problem by scanning the DirTree for directories
before starting the CheckParents process so it never has to call
DirTree.Find.
After the fix calling DirTree.CheckParents on a directory with
1,000,000 directories in it will take about 1 second.
Anything which calls DirTree.Find can potentially have bad performance
so in the future we should redesign the DirTree to use a different
underlying datastructure or have an index.
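A simplified stand-in for the fixed algorithm (names hypothetical): one O(N) pass builds a set, so each parent lookup is O(1) instead of a linear DirTree.Find.
```
package dirtree

import "path"

// checkParents builds a set of the known directories in one pass,
// then checks every parent against it.
func checkParents(dirs []string) (missing []string) {
	seen := make(map[string]struct{}, len(dirs))
	for _, d := range dirs {
		seen[d] = struct{}{}
	}
	for _, d := range dirs {
		parent := path.Dir(d)
		if parent == "." || parent == "/" {
			continue
		}
		if _, ok := seen[parent]; !ok {
			missing = append(missing, parent)
		}
	}
	return missing
}
```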
See: https://forum.rclone.org/t/almost-24-hours-cpu-compute-time-during-sync-between-two-large-s3-buckets/39375/
This reverts commit 9065e921c1.
It turns out the problem for the failing fs/sync tests was the
policies being different for search and create which meant that the
file was being created in one union branch but a diferent one was
found in another branch.
The API seems to have changed and the `totalFileCount` item no longer
tracks the number of files in the directory so is useless for seeing
if the directory is empty.
This patch fixes the problem by seeing whether there are any files or
directories in the folder instead.
This problem was detected by the integration tests.
For some unknown reason the API sometimes returns the name already
exists on a server side copy.
{
"error_id": null,
"error_message": "Name already exist",
"error_type": "NAME_ALREADY_EXIST",
"error_uri": "http://api.put.io/v2/docs",
"extra": {},
"status": "ERROR",
"status_code": 400
}
This patch uploads to a temporary name then renames it which works
around the problem.
This was spotted by the integration tests.
The integration tests spotted that modification times are no longer
being preserved by the putio API in server side move and copy.
This patch explicitly sets the modtime after the server side move or
copy.
In this commit we enabled PartialUploads for the union backend.
3faa84b47c combine,compress,crypt,hasher,union: support wrapping backends with PartialUploads
This turns out to cause test failures in fs/sync so this commit
disables them again pending further investigation.
At some point the sharefile API changed to require the size of the
file in the initial transaction which makes the streaming upload fail
with this error:
upload failed: file size does not match (-2)
This was discovered by the integration tests.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the putio backend and this patch fixes the
problem.
Before this patch the Update method had a 50/50 chance of returning
the old object rather than the new updated object.
This was discovered in the integration tests.
This patch fixes the problem by deleting the duplicate object before
we look for the new object.
In this commit we discovered a problem with objects being uploaded to
the incorrect object name. It added an integration test for the
problem.
65b2e378e0 drive: fix incorrect remote after Update on object
This test was tripped by the Storj backend and this patch fixes the
problem.
Storj has a rate limit of 1 per second when uploading to the same
file.
This was being tripped by the integration tests.
This patch fixes it by detecting the error and sleeping for 1 second
before retrying.
See: https://github.com/storj/uplink/issues/149
Before this change a server side copy did not preserve the modtime.
This used to work on nextcloud but at some point it started ignoring
the `X-Oc-Mtime` header.
This patch sets the modtime explicitly after a server side copy if the
`X-Oc-Mtime` wasn't accepted.
This problem was discovered in the integration tests.
When multi-thread downloading is enabled, rclone used
to send a write to disk after every read, resulting in a lot
of small writes to different locations of the file.
Depending on the underlying filesystem or device, it can be more
efficient to send bigger writes.
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
See #7038
Before this change we were incorrectly identifying the root directory
of the listing and adding it into the listing.
This caused higher layers of rclone to emit the error above.
Fixes #7038
This commit
3567a47258 fs: make ConfigString properly reverse suffixed file systems
made fs.ConfigString() return the full config of the backend. Because
mount was using this to make a volume name it started to make volume
names with illegal characters in them which couldn't be mounted by macOS.
This fixes the problem by making a separate fs.ConfigStringFull() and
using that where appropriate and leaving the original
fs.ConfigString() function untouched.
Fixes #7063
See: https://forum.rclone.org/t/1-63-beta-fails-to-mount-on-macos-with-on-the-fly-crypt-remote/39090
The SIGUSR2 signal handler for bandwidth limits currently only starts
if rclone is started at a time when a bandwidth limit applies. This
means that if rclone starts _outside_ such a time, i.e. with no
bandwidth limits, then enters a time where bandwidth limits do apply,
it will not be possible to use SIGUSR2 to toggle it.
This fixes that by always starting the signal handler, but only
toggling the limiter if there is a bandwidth limit configured.
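A sketch of the always-on handler (unix-only; names hypothetical):
```
package bwlimit

import (
	"os"
	"os/signal"
	"syscall"
)

// startSignalHandler installs the SIGUSR2 handler unconditionally and
// only decides whether to toggle when a signal actually arrives.
func startSignalHandler(limitConfigured func() bool, toggle func()) {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGUSR2)
	go func() {
		for range ch {
			if limitConfigured() {
				toggle()
			}
		}
	}()
}
```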
Zoho has started returning the results from Range: requests with a 200
response code rather than the technically correct 206 status code.
Before this change this triggered workaround code to deal with Zoho
not obeying Range: requests properly.
This fix tests the returned header for a Content-Range: header and if
it exists assumes it is a valid reply to the Range: request despite
the status being 200.
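A sketch of the new test (helper name hypothetical):
```
package zoho

import "net/http"

// isRangeReply treats a 200 with a Content-Range header as a valid
// reply to a Range: request, alongside the usual 206.
func isRangeReply(resp *http.Response) bool {
	return resp.StatusCode == http.StatusPartialContent ||
		(resp.StatusCode == http.StatusOK &&
			resp.Header.Get("Content-Range") != "")
}
```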
This problem was spotted by the integration tests.
In 04aa6969a4 we updated the displayed speed to be a rolling
average in core/stats and the progress output but we didn't update the
Prometheus metrics.
This patch updates the Prometheus metrics too.
Fixes #7053
Before this fix, if the upload failed for some reason the yandex
backend would attempt to retry it itself, which would fail immediately
with 400 Bad Request.
Normally we retry uploads at a higher level so they can be done with
new data and this patch does that.
See #7044
Before this change if doing a recursive directory listing with
`--files-from` if more than `--checkers` files errored (other than
file not found) then rclone would deadlock.
This fixes the problem by exiting on the first error.
Before this change partially uploaded files (when --inplace is not in
effect) would be left lying around in the file system if rclone was
killed in the middle of a transfer.
This adds an exit handler to remove the partial file, and removes the
handler when the file is complete.
Before this change, some parts of operations called the Open method on
objects directly, and some called NewReOpen to make an object which
can re-open itself on errors.
This adds a new function operations.Open which should be called
instead of fs.Object.Open to open a reliable stream of data and
changes all call sites to use that.
This means `rclone check --download` and `rclone cat` will re-open
files on failures.
See: https://forum.rclone.org/t/does-rclone-support-retries-for-check-when-using-download-flag/38641
Before this change we tested special errors for straight equality.
This works for all normal backends, but the union backend may return
wrapped errors which contain the special error types.
In particular if a pcloud backend was part of a union when attempting
to set modification times the fs.ErrorCantSetModTime return wasn't
understood because it was wrapped in a union.Error.
This fixes the problem by using errors.Is instead in all the
comparisons in operations.
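A sketch of the comparison change (helper name hypothetical; fs.ErrorCantSetModTime is the real sentinel):
```
package operations

import (
	"errors"

	"github.com/rclone/rclone/fs"
)

// cantSetModTime reports whether err is fs.ErrorCantSetModTime, even
// when it arrives wrapped inside something like union.Error.
func cantSetModTime(err error) bool {
	return errors.Is(err, fs.ErrorCantSetModTime)
}
```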
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
Before this change the Errors type in the union backend produced
errors which could not be Unwrapped to test their type.
This adds the (go1.20) Unwrap method to the Errors type which allows
errors.Is to work on these errors.
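A sketch of the shape of the change (the Error text here is illustrative):
```
package union

import "fmt"

// Errors holds the errors returned by each branch of the union.
type Errors []error

// Error makes Errors satisfy the error interface.
func (e Errors) Error() string {
	return fmt.Sprintf("%d errors", len(e))
}

// Unwrap exposes the wrapped errors in the multi-error form that
// errors.Is and errors.As understand from go1.20 onwards.
func (e Errors) Unwrap() []error {
	return e
}
```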
It also adds unit tests for the Errors type and fixes a couple of
minor bugs thrown up in the process.
See: https://forum.rclone.org/t/failed-to-set-modification-time-1-error-pcloud-cant-set-modified-time/38596
Before this fix we used the bin/get-github-release.go script to
install nfpm.
However this script fails scraping the downloads page when the target
has more than a few download options. The alternative would be using
the GitHub API but this needs authentication so as not to be rate
limited on GitHub actions.
This patch switches over to go install which is less efficient but
should work in all circumstances.
Before this change if the VFS took more than 5 to initialise (which
can happen if there are a lot of files or a lot of files which need
uploading) the backend was dropped out of the cache before the VFS was
fully created.
This was noticeable in the dropbox backend where the batcher was shut
down too soon and prevented further uploads.
This fixes the problem by Pinning backends before the VFS cache is
created.
https://forum.rclone.org/t/if-more-than-251-elements-in-the-que-to-upload-fails-with-batcher-is-shutting-down/38076/2
Set this automatically for any backend which implements UnWrap and
manually for combine and union which can't implement UnWrap but do
overlay other backends.
Before this change, the Pikpak backend would always download
the first media item whenever possible, regardless of whether
or not it was the original contents.
Now we check the validity of a media link using the `fid`
parameter in the link URL.
Fixes #6992
Before this change the pikpak backend changed the global
--multi-thread-streams flag which wasn't desirable.
Now the machinery is in place to use the NoMultiThreading feature flag
instead.
Fixes #6915
Some servers return a 501 error when using MLST on a non-existing
directory. This patch allows it.
I don't think this is correct usage according to the RFC, but the RFC
doesn't explicitly state which error code should be returned for
file/directory not found.
Before this fix a blank line in the MLST output from the FTP server
would cause the "unsupported LIST line" error.
This fixes the problem in the upstream fork.
Fixes #6879
When copying to a backend which has the PartialUploads feature flag
set and can Move files the file is copied into a temporary name first.
Once the copy is complete, the file is renamed to the real
destination.
This prevents other processes from seeing partially downloaded copies
of files being downloaded and prevents overwriting the old file until
the new one is complete.
This also adds --inplace flag that can be used to disable the partial
file copy/rename feature.
See #3770
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
Implement a PartialUploads feature flag to mark backends for which
uploads are not atomic.
This is set for the following backends
- local
- ftp
- sftp
See #3770
Before this change rclone used a normal SFTP rename if present to
implement Move.
However the normal SFTP rename won't overwrite existing files.
This fixes it to either use the POSIX rename extension
("posix-rename@openssh.com") or to delete the source first before
renaming using the normal SFTP rename.
This isn't normally a problem as rclone always removes any existing
objects first, however to implement non --inplace operations we do
require overwriting an existing file.
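A sketch of the logic, assuming the github.com/pkg/sftp client (helper name and package placement hypothetical):
```
package backend

import "github.com/pkg/sftp"

// moveOverwrite prefers the POSIX rename extension, which overwrites
// atomically; otherwise it deletes any existing destination before the
// plain SFTP rename, which refuses to overwrite.
func moveOverwrite(c *sftp.Client, src, dst string, havePosixRename bool) error {
	if havePosixRename {
		return c.PosixRename(src, dst)
	}
	// Ignore the error: dst may simply not exist, and if removal truly
	// failed the rename below will report a failure anyway.
	_ = c.Remove(dst)
	return c.Rename(src, dst)
}
```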
Before this patch, files or directories with unknown modtime would
appear as the current date.
When mounted some systems look at modification dates of directories to
see if they change and having them change whenever they drop out of
the directory cache is not optimal.
See #6986
This also produces a warning when rclone detects files have been
blocked because of virus content
server reports this file is infected with a virus - use --onedrive-av-override to download anyway
Fixes #557
Before this change, when Object.Update was called in the drive
backend, it overwrote the remote with that of the object info.
This is incorrect - the remote doesn't change on Update and this patch
fixes that and introduces a new test to make sure it is correct for
all backends.
This was noticed when doing Update of objects in a nested combine
backend.
See: https://forum.rclone.org/t/rclone-runtime-goroutine-stack-exceeds-1000000000-byte-limit/37912
Before this change if the storage class wasn't set on the object, we
didn't set the "tier" metadata.
This made it impossible to filter on tier using the metadata filters.
This returns the "tier" metadata as STANDARD if the storage class
isn't set on the object.
See: https://forum.rclone.org/t/copy-from-s3-to-another-s3-filter-by-storage-class/37861
- Report correct feature flag
- Fix test failures due to that
- don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
- Report correct feature flag
- Fix test failures due to that
- don't output the root directory marker
- Don't create the directory marker if it is the bucket or root
- Create directories when uploading files
Before this change we renamed file systems with overridden config with
{suffix}.
However this meant that ConfigString produced a value which wouldn't
re-create the file system.
This uses an internal hash to keep note of which config goes with
which {suffix} in order to remake the config properly.
Before this change, drive would mistakenly identify a folder with a
trailing slash as a file when passed to NewObject.
This was picked up by the integration tests
In an earlier patch
d5afcf9e34 crypt: try not to return "unexpected EOF" error
This introduced a bug for 0 length files, which this commit fixes. The
bug only manifests if the io.Reader returns data and EOF together,
which not all readers do.
This was failing in the integration tests.
When using `rclone cat` to print the contents of several files, the
user may want to inject some separator between the files, such as a
comma or a newline. This patch adds a `--separator` option to the `cat`
command to make that possible. The default value remains an empty
string, `""`, maintaining the prior behavior of `rclone cat`.
Closes #6968
Before this fix, when a write to a read only directory failed, rclone
would leave spurious directory entries in the directory.
This confuses `rclone serve webdav` into giving this error
http: superfluous response.WriteHeader
This fixes the VFS layer to remove any directory entries where the
file creation did not succeed.
Fixes #5702
This error happened on a restart of the VFS with files to upload into
a new directory on a bucket based backend. Rclone was assuming that
directories created before the restart would still exist, but this is
a bad assumption for bucket based backends which don't really have
directories.
This change creates the pretend directory and thus the directory cache
if the parent directory does not exist when adding a virtual on a
backend which can't have empty directories.
See: https://forum.rclone.org/t/that-pesky-failed-to-reload-error-message/34527
Before this change using /path/to/file.rclonelink would not find the
file when using -l/--links.
This fixes the problem by doing another stat call if the file wasn't
found without the suffix if -l/--links is in use.
It will also give an error if you refer to a symlink without its
suffix, which will not work because the single file filter will be
using the file name without the .rclonelink suffix
need ".rclonelink" suffix to refer to symlink when using -l/--links
Before this change it would use the symlink as a directory which then
would fail when listed.
See: #6855
Before this change we weren't outputting a debug log on the start of a
transfer for files which existed on the source but not in the
destination.
This was different to the single file copy routine.
On Linux systems rclone builds with cgo but uses the internal Go
resolver for DNS by default.
This updates the FAQ to suggest using GODEBUG=netdns=cgo if there are
name resolution problems on Linux/BSD (rebuilding from source with
CGO_ENABLED if necessary), or trying GODEBUG=netdns=go on
Windows/macOS.
See: #683
If a file has two (or more) extensions and the second (or subsequent)
extension is recognised as a valid mime type, then the suffix will go
before that extension. So `file.tar.gz` would be backed up to
`file-2019-01-01.tar.gz` whereas `file.badextension.gz` would be
backed up to `file.badextension-2019-01-01.gz`
Fixes#6892
golang.org/x/oauth2/jws is deprecated: this package is not intended for public use and
might be removed in the future. It exists for internal use only. Please switch to another
JWS package or copy this package into your own source tree.
github.com/golang-jwt/jwt/v4 seems to be a good alternative, and was already
an implicit dependency.
Before this fix, it was noticed that the rclone webdav client did not
re-use HTTP connections when it should have.
This turned out to be because rclone was not draining the HTTP bodies
when it was not expecting a response.
From the Go docs:
> If the returned error is nil, the Response will contain a non-nil
> Body which the user is expected to close. If the Body is not both
> read to EOF and closed, the Client's underlying RoundTripper
> (typically Transport) may not be able to re-use a persistent TCP
> connection to the server for a subsequent "keep-alive" request.
This fixes the problem by draining up to 10MB of data from an HTTP
response if the NoResponse flag is set, or at the end of a JSON or XML
response (which could have some whitespace on the end).
See: https://forum.rclone.org/t/webdav-with-persistent-connections/37024/
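A minimal sketch of the draining pattern described above (names are
illustrative, not rclone's actual helper):

    import (
        "io"
        "net/http"
    )

    // drainBody reads up to 10 MiB from resp.Body and closes it so
    // the Transport can re-use the underlying TCP connection.
    func drainBody(resp *http.Response) {
        const maxDrain = 10 * 1024 * 1024
        _, _ = io.CopyN(io.Discard, resp.Body, maxDrain)
        resp.Body.Close()
    }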
This changes crypt's use of sync.Pool: instead of storing slices
it now stores pointers to fixed size arrays.
This issue was reported by staticcheck:
SA6002 - Storing non-pointer values in sync.Pool allocates memory
A sync.Pool is used to avoid unnecessary allocations and reduce
the amount of work the garbage collector has to do.
When passing a value that is not a pointer to a function that accepts
an interface, the value needs to be placed on the heap, which means
an additional allocation. Slices are a common thing to put in sync.Pools,
and they're structs with 3 fields (length, capacity, and a pointer to
an array). In order to avoid the extra allocation, one should store
a pointer to the slice instead.
See: https://staticcheck.io/docs/checks#SA6002
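A short sketch of the pattern staticcheck recommends (buffer size
illustrative):

    import "sync"

    // pool hands out pointers to fixed size arrays, so Get and Put
    // avoid the extra allocation of boxing a slice header.
    var pool = sync.Pool{
        New: func() interface{} { return new([64 * 1024]byte) },
    }

    func withBuffer(f func(buf []byte)) {
        buf := pool.Get().(*[64 * 1024]byte)
        defer pool.Put(buf)
        f(buf[:]) // slice the array at the point of use
    }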
The three linters deadcode, structcheck and varcheck we have been using are as of
golangci-lint version 1.49.0 (24 Aug 2022) marked as deprecated, and replaced by unused.
The linters staticcheck, gosimple, stylecheck and unused should combined correspond to
the checks performed by the stand-alone staticcheck tool, which is by default used for
linting in Visual Studio Code with the Go extension. We previously enabled the first
three, but skipped unused due to many reported issues.
See #6387 for more information.
These combined should correspond to the checks performed by the stand-alone
staticcheck tool, which is by default used for linting in Visual Studio Code
with the Go extension. One exception is the unused check, which the stand-alone
tool performs, but which we chose not to enable here in rclone due to many
reported occurrences.
See #6387 for more information.
This change provides the ability to pass `env_auth` as a parameter to
the drive provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
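A minimal config sketch using this (remote name illustrative):

    [gdrive]
    type = drive
    env_auth = true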
In this commit we accidentally removed the global --rc flags.
0df7466d2b cmd/rcd: Fix command docs to include command specific prefix (#6675)
This re-instates them.
Before this change, change notify would pick up files which were
shared with us as well as files within the drive.
When using an encrypted mount this caused errors like:
ChangeNotify was unable to decrypt "Plain file name": illegal base32 data at input byte 5
The fix tells drive to restrict changes to the drive in use.
Fixes#6771
Before this change the code wasn't properly taking into account the
error io.ErrUnexpectedEOF that io.ReadFull can return. Sometimes
that error was being returned instead of a more specific and useful
error.
To fix this, io.ReadFull was replaced with the simpler
readers.ReadFill which is much easier to use correctly.
Before this change using operations/stat with a remote pointing to a
dir with a trailing / would return a null output rather than the
correct info.
This was because the directory was not found with a trailing slash in
the directory listing.
Fixes#6817
Before this change if both --progress and --interactive were set then
the screen display could become muddled.
This change makes --progress and --interactive use the same lock so
while rclone is asking for interactive questions, the progress will be
paused.
Fixes#6755
Before this change the cancelFunc could be called twice, once while
handling the interrupt (CTRL-C) and once while unwinding the stack if
the function happened to finish.
This change ensures the cancelFunc is only called once by wrapping it
in a sync.Once.
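A minimal sketch of the wrapping (names illustrative):

    import (
        "context"
        "sync"
    )

    // wrapCancel makes a cancel function safe to call from both the
    // signal handler and the normal unwind path.
    func wrapCancel(cancel context.CancelFunc) context.CancelFunc {
        var once sync.Once
        return func() { once.Do(cancel) }
    }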
Apparently the abort multipart upload call doesn't return while
multipart uploads are in progress on iDrive e2.
This means that if we CTRL-C a multipart upload, rclone hangs until
all the in-progress part uploads have completed. However since rclone
is uploading multiple parts at once this doesn't happen until after
the entire file is uploaded.
This was fixed by cancelling the upload context which causes all the
uploads to stop instantly.
This change addresses two issues with commands that re-used
flags from common packages:
1) cobra.Command definitions did not include the command specific
prefix in doc strings.
2) Command specific flag prefixes were added after generating
command doc strings.
The upstream revive repo changed the default settings for this linter.
We use this through golangci-lint.
This change meant lots of errors appearing all at once. We should
probably fix these in due course, but for the time being this disables
those settings.
See: https://github.com/mgechev/revive/pull/799
Before this change if logs were not redirected, logging would
corrupt the terminal screen.
This commit stores the logs (max ~100 lines) in an array and prints
them when the program exits.
When using ssh-agent to hold multiple keys, it is common practice to configure
openssh to use a specific key by setting the corresponding public key as
the `IdentityFile`. This change makes a similar behavior possible in rclone
by having it parse the `key_file` config as the public key when
`key_use_agent` is `true`.
rclone already attempted this behavior before this change, but it assumed that
`key_file` is the private key and that the public key is specified in
`${key_file}.pub`. So for parity with the openssh behavior, this change makes
rclone first attempt to read the public key from `${key_file}.pub` as before
(for the sake of backward compatibility), then fall back to reading it from
`key_file`.
Fixes#6791
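For example, a config along these lines (host and paths illustrative)
now works whether key_file points at the private key with a .pub file
next to it, or directly at the public key:

    [mysftp]
    type = sftp
    host = example.com
    key_file = ~/.ssh/id_ed25519.pub
    key_use_agent = true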
Before this change, we would upload files as single part uploads even
if the source MD5SUM was not available.
AWS won't let you upload a file to a locked bucket without some sort
of hash protection of the upload which we don't have without an MD5SUM.
So we switch to multipart upload when the source does not have an
MD5SUM.
This means that if --s3-disable-checksum is set or we are copying from
a source with no MD5SUMs we will copy with multipart uploads.
This patch changes all uploads, not just those to locked buckets
because having no MD5SUM protection on uploads is undesirable.
Fixes#6846
This provider:
- supports the `X-OC-Mtime` header to set the mtime
- calculates SHA1 checksum server side and returns it as a `ME:sha1hex` prop
To differentiate the new hasMESHA1 quirk, the existing hasMD5 and hasSHA1
quirks for Owncloud have been renamed to hasOCMD5 and hasOCSHA1.
Fixes#6837
Sometimes vsftpd returns a 426 error when closing the stream even when
all the data has been transferred successfully. This is some TLS
protocol mismatch.
Rclone has code to deal with this already, but the error returned from
Close was wrapped in a multierror so the detection didn't work.
This properly extracts `textproto.Error` from the errors returned by
`github.com/jlaffaye/ftp` in all cases.
See: https://forum.rclone.org/t/vsftpd-vs-rclone-part-2/36774
Before this change if a file was uploaded to a backend which didn't
support modtimes, the time of the file read after the upload had
completed would change to the time the file was uploaded on the
backend.
When using `--vfs-cache-mode writes` or `full` this time would be
different by the `--vfs-write-back` delay which would cause
applications to think the file had been modified.
This change uses the last modification time read by the OS as a
virtual modtime for backends which don't support setting modtimes. It
does not change the modtime to that actually uploaded.
This means that as long as the file remains in the directory cache it
will have the expected modtime.
See: https://forum.rclone.org/t/saving-files-causes-wrong-modified-time-to-be-set-for-a-few-seconds-on-webdav-mount-with-bitrix24/36451
The recent changes to remove race conditions from --max-delete have
made these tests fail on chunker with s3 because they do copy then
delete and the deletes are being counted in the --max-delete(-size)
counts.
This fixes the azureblob backend so it builds again after the SDK
changes.
This doesn't update bazil.org/fuse because it doesn't build on FreeBSD
https://github.com/bazil/fuse/issues/295
If using rclone move and --check-first and --order-by then rclone uses
the transfer routine to delete files to ensure perfect ordering.
This will cause the transfer stats to have a larger than expected
number of items in it so we don't enable this by default.
Fixes#6033
Before this fix, a dangling symlink was erroring the sync. It was
writing an ERROR log and causing rclone to exit with an error. The
List method wasn't returning an error though.
This fix makes sure that we don't log or report a global error on a
file/directory that has been excluded.
This feature was first implemented in:
a61d219bc local: fix -L/--copy-links with filters missing directories
Then fixed in:
8d1fff9a8 local: obey file filters in listing to fix errors on excluded files
This commit also adds test cases for the failure modes of those commits.
See #6376
Before this change, if you renamed a directory containing files yet
to be uploaded and then deleted the directory, the files would still
be uploaded.
This fixes the problem by changing the directory path in all the file
objects in a directory when it is renamed. This wasn't necessary until
we introduced virtual files and directories which lived beyond the
directory flush mechanism.
Fixes#6809
Before this change, if a "--drive-stop-on-upload-limit" was set,
rclone would not stop the upload if a "storageQuotaExceeded" error occurred.
This fix now checks for the "storageQuotaExceeded" error
and "--drive-stop-on-upload-limit", and fails fast.
This change provides the ability to pass `env_auth` as a parameter to
the google cloud storage provider. This enables the provider to pull IAM
credentials from the environment or instance metadata. Previously if no
auth method was given it would default to requesting oauth.
This ensures the virtual terminal processing mode is enabled on the rclone process
for Windows 10 consoles (by using Windows Console API functions GetConsoleMode/SetConsoleMode
and flag ENABLE_VIRTUAL_TERMINAL_PROCESSING), which adds native support for ANSI/VT100
escape sequences. This mode is the default in many cases, e.g. when using the Windows
Terminal application, but in other cases it is not, and the default can also be
controlled with a registry setting (see below), so configuring it on the process
seems to be the only reliable way of ensuring it is enabled when supported.
[HKEY_CURRENT_USER\Console]
"VirtualTerminalLevel"=dword:00000001
Since rclone version 1.61.0 the tree command uses ANSI color sequences in output by
default, but this led to issues in Windows terminals that were not handling these (#6668).
This commit ensures the tree command uses the terminal package for output. It relies on
go-colorable to properly handle ANSI color sequences: If stdout is connected to a terminal
the escape sequences are decoded and the text is written with color formatting using
Windows Console API. If stdout is not connected to a terminal, e.g. redirected to file,
the escape sequences are stripped off. The tree command has its own method for writing
directly to a file, specified with flag --output, and then the output is not passed
through the terminal package and must therefore be written without ANSI codes.
Before this change the hash used for Onedrive Personal was SHA1. From
July 2023 Microsoft is phasing out SHA1 hashes in favour of
QuickXorHash in Onedrive Personal. Onedrive Business and Sharepoint
remain using QuickXorHash as before.
This choice can be changed using the --onedrive-hash-type flag (and
config option) so that SHA1 can be selected while it is still
available in the transition period.
See: https://forum.rclone.org/t/microsoft-is-switching-onedrive-personal-to-quickxorhash-from-sha1/36296/
Before this change if an --s3-profile was set which used AWS STS (eg
to assume a role) and --s3-endpoint was set then rclone would use the
value from --s3-endpoint to contact the STS server which did not work.
This fix implements an endpoint resolver which only overrides the "s3"
service if --s3-endpoint is set. It sends the "sts" service (and any
other service) to the default resolver.
Fixes#6443
See: https://forum.rclone.org/t/s3-profile-failing-when-explicit-s3-endpoint-is-present/36063/
There were some places (e.g. deleting files) where we were using
--transfers instead of --checkers to control the concurrency when
files weren't being transferred.
These have been updated to use --checkers.
Before this change, all types of checkers showed "checking" after the
file name despite the fact that not all of them were checking.
After this change, they can show
- checking
- deleting
- hashing
- importing
- listing
- merging
- moving
- renaming
See: https://forum.rclone.org/t/what-is-rclone-checking-during-a-purge/35931/
The block size for crypt is 64k + a few bytes. The default block size
for sftp is 32k. This means that the blocks for crypt get split over 3
sftp packets two of 32k and one of a few bytes.
However due to a bug in pkg/sftp it was sending 32k instead of just a
few bytes, leading to the 65% slowdown.
This was fixed in the upstream library.
This bug probably affected transfers from over the network sources
also.
Fixes#6763
See: https://github.com/pkg/sftp/pull/537
Before this change if --s3-no-head was in use rclone didn't check the
multipart upload ETag at all. However the ETag is returned in the
final POST request when completing the object.
This change uses that ETag from the final POST if --s3-no-head is in
use, otherwise it uses the ETag from a fresh HEAD request.
See: https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095/
No need to report hours, minutes, and even seconds when the
ETA is several years, e.g. "292y24w3d23h47m16s". Now only
reports the 3 most significant units, sacrificing precision,
e.g. "292y24w3d", "24w3d23h", "3d23h47m", "23h47m16s".
Fixes#6381
Integer overflow would lead to ETA such as "-255y7w4h11m22s966ms",
as reported in #6381. Now the value will be clipped at the maximum
"292y24w3d23h47m16s", and it will be shown as infinity.
Explicitly set ARM version in GOARM build variable, to avoid relying
on some default value which differs when compiling natively and when
cross-compiling, and which is also incorrectly documented as being
6 when in reality it is 5.
Fix incorrect labelling of ARMv5 builds as ARMv6, and change
architecture of .rpm and .deb packages containing them to
match.
Add ARMv6 builds, to complement existing ARMv5 and ARMv7, and to
reduce disruption due to previous ARMv5 builds incorrectly being
identified as ARMv6, and to provide .rpm and .deb packages with the
same ARMv6 architectures as was previously also published
(then containing ARMv5 binaries).
See #6528
Background info:
https://github.com/golang/go/wiki/GoArm
https://go.dev/doc/install/source#environment
661e931dd1/src/cmd/dist/build.go (L140-L144)
661e931dd1/src/cmd/dist/util.go (L392-L422)
* Updates OAuth consent screen instructions to include adding scopes for backup purposes (create, edit and delete files).
* Updates instructions to keep app in testing mode (appropriate for most users). The previous instructions suggested this, but we don't need to "publish" the app at all in order to proceed with this step.
This commit ports a fast C-implementation from https://github.com/namazso/QuickXorHash
It uses new crypto/subtle code from go1.20 to avoid the use of unsafe.
Typical speedups are about 25x when using go1.20
goos: linux
goarch: amd64
cpu: Intel(R) Celeron(R) N5105 @ 2.00GHz
QuickXorHash-Before 2.49ms 422MB/s ±11% 100.00%
QuickXorHash-Subtle 87.9µs 11932MB/s ± 5% +2730.83% + 42.17%
Co-Author: @namazso
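To give a flavour of the unsafe-free approach: crypto/subtle in go1.20
gained XORBytes, which can fold whole blocks into the hash state
without pointer casts. This is only a sketch of that one step, not the
full algorithm (which also rotates bits between blocks):

    import "crypto/subtle"

    // xorBlock XORs one input block into the hash state in place.
    func xorBlock(state, block []byte) {
        subtle.XORBytes(state, state, block)
    }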
This commit documents my learnings after having encountered a failure
I reported in the rclone forum[0].
I may be a fool for having failed to understand the previous
documentation, but I am likely not the only fool to get snared by it.
This commit therefore adds details to clarify what the user must do in
order to allow `--check-access` to succeed.
While at it, I've also added some basic documentation for `--check-filename`.
[0]: https://forum.rclone.org/t/bisync-check-file-check-failed/35682
Uploading 100 files of 1 MB each took 20 seconds before. With the
above fix it takes around 2 seconds, a 10x improvement in line with
the pacer's sleep reduction from 100ms to 10ms.
Before this change when uploading files bigger than 1TiB, the chunk
calculator would work out that the chunk size needed to be bigger than
the default 100 MiB to fit within the 10,000 parts limit.
However the uploader was still using the memory pool for the old chunk
size and this caused errors like
panic: runtime error: slice bounds out of range [:122683392] with capacity 100663296
The fix for this is to make a temporary pool with the larger chunk
size and use it during the upload of the large file.
See: https://forum.rclone.org/t/rclone-cannot-complete-upload-to-b2-restarts-upload-frequently/35617/
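Roughly, the calculator must keep the part count under the 10,000
limit, so the chunk size grows with the file. A sketch of that
arithmetic (names and growth strategy illustrative, not rclone's
actual calculator):

    // chunkSizeFor grows the chunk size from the default until the
    // file fits in fewer than maxParts parts.
    func chunkSizeFor(size int64) int64 {
        const maxParts = 10000
        chunkSize := int64(100 * 1024 * 1024) // default 100 MiB
        for size/chunkSize >= maxParts {
            chunkSize *= 2
        }
        return chunkSize
    }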
Since version 3 of libfuse, fuse no longer does anything when given
the nonempty option and its default is to allow mounting over
non-empty directories like normal mount does.
Some versions of libfuse give an error when using `--allow-non-empty`
which is annoying for the user.
We now do this check ourselves so we no longer need to pass the option
to libfuse.
Fixes#3562
Before this change we were sending webdav requests to the go http
FileServer. In go1.20 these (rightly) started returning errors which
caused the tests to fail.
The test has been changed to properly mock up an About query and
response so an end to end test of adding headers is possible.
Passwords for encrypted libraries are kept in memory in the server
and flushed after an hour.
This MR fixes an issue when the library password expires after 1 hour.
Before this fix, we told cgofuse/WinFSP that the backend was case
insensitive but didn't implement the Getpath backend function to
return the normalised case of a file.
Recently cgofuse started implementing case insensitive files properly,
but since we hadn't implemented Getpath, the file names were taking
the default of all in UPPER CASE.
This patch implements Getpath for cgofuse which fixes the case
problems.
This problem came to light when we upgraded cgofuse and WinFSP (to
1.12) which had the code to implement Getpath.
Fixes#6682
rclone sync erroneously deleted folders renamed to a different case on
crypts where directory name encryption was disabled and the underlying
remote was case insensitive.
Example: Renaming the folder Test to tEST before a sync to a crypt having
remote=OneDrive:crypt and directory_name_encryption=false could result in
the folder and all its content being deleted. The following sync would
correctly create the tEST folder and upload all of the content.
Additional tests have revealed other potential issues when using
filename_encryption=off or directory_name_encryption=false on case
insensitive remotes. The documentation has been updated to warn about
potential problems when using these combinations.
Before this change only serve http was shutting down its server,
which was causing other servers such as serve restic to leave behind
their unix sockets.
This change moves the finalisation to lib/http so all servers have it
and removes it from serve http.
Fixes#6648
Before this change, we started the http listener even if --stdio was
supplied.
This also moves the log message so the user won't see the serving via
HTTP message unless they are really using that.
Fixes#6646
In the lib/http refactor
52443c2444 restic: refactor to use lib/http
We forgot to serve the data and wait for the server to finish. This is
not tested in the unit tests as it is part of the command line
handler.
Fixes#6644
Fixes#6647
The webdav library was confused by the Path manipulation done by
lib/http when stripping the prefix.
This patch adds the prefix back before calling it.
Fixes#6650
This error was caused by rclone supplying an empty
`x-ms-blob-public-access:` header when creating a container for
private access, rather than omitting it completely.
This is a valid way of specifying containers should be private, but if
the storage account has the flag "Blob public access" unset then it
gives "409 Public access is not permitted on this storage account".
This patch fixes the problem by only supplying the header if the
access is set.
Fixes#6645
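In terms of the raw request, the fix amounts to something like this
(illustrative net/http sketch, not the SDK call rclone actually makes):

    import "net/http"

    // setAccess only sends the public access header when an access
    // level is actually requested; an empty value triggers the 409
    // on accounts with "Blob public access" disabled.
    func setAccess(req *http.Request, access string) {
        if access != "" {
            req.Header.Set("x-ms-blob-public-access", access)
        }
    }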
Storj switched to a single global s3 endpoint backed by a BGP routing.
We want to stop advertising the former regional endpoints and have the
global one as the only option.
Before this change, when a new object was created s3 returned its
versionID (on a versioned bucket) and rclone recorded it in the
object.
This means that when rclone came to delete the object it would delete
it with the versionID.
However it is common to forbid actions with versionIDs on buckets so
as to preserve the historical record and these operations would fail
whereas they succeeded in pre-v1.60.0 versions.
This patch fixes the problem by not recording versions of objects
supplied by the S3 API on upload unless `--s3-versions` or
`--s3-version-at` is used. This makes rclone behave as it did before
v1.60.0 when version support was introduced.
See: https://forum.rclone.org/t/s3-and-intermittent-403-errors-with-file-renames-and-drag-and-drop-operations-in-windows-explorer/34773
This patch implements --use-server-modtime for the Azureblob backend.
It does this by not reading the time from the metadata if the global
flag is set.
- add support for unix sockets (which skip the auth).
- add support for multiple listeners
- collapse unnecessary internal structure of lib/http so it can all be
imported together
- moves files in sub directories of lib/http into the main lib/http
directory and reworks the code that uses them.
See: https://forum.rclone.org/t/wip-rc-rcd-over-unix-socket/33619
Fixes: #6605
A recent security fix in the Owncloud container now causes it to
disallow wildcards in the OWNCLOUD_TRUSTED_DOMAINS setting.
This patch works around the problem by using port forwarding from the
host so we can keep the domain name constant.
When the SDK was upgraded it started delivering metadata where the
keys were not in lower case as per the old SDK.
Rclone normalises the case of the keys for storage in the Object, but
the directory marker check was being done with the unnormalised keys
as it needs to be done before the Object is created.
This fixes the directory marker check to do a case insensitive compare
of the metadata keys.
Before this change, we were taking the version ID straight from the
XML blob returned by the SDK and thus pinning the XML into memory
which bulked up the average memory per object from about 400 bytes to
4k.
Copying the string fixes the excess memory usage.
This reverts commit 4f386a1ccd.
It turns out that Alibaba OSS does support list v2 and the detection
code was wrong.
This means that users of the gov version of Alibaba will have to add
`list_version 1` to their config files.
See #6600
In this commit
ab849b3613 s3: fix listing loop when using v2 listing on v1 server
The ContinuationToken was tested for existence, but it is the
NextContinuationToken that we are interested in.
See: #6600
Previously it was limited to plain ASCII (0-9, A-Z, a-z).
Implemented by adding \p{L}\p{N} alongside the \w in the regex,
even though these overlap it means we can be sure it is 100%
backwards compatible.
Fixes#6618
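A sketch of the widened character class (the surrounding pattern is
illustrative):

    import "regexp"

    // valid accepts Unicode letters and digits alongside the
    // original \w class, so all previously valid names still match.
    var valid = regexp.MustCompile(`^[\w\p{L}\p{N}]+$`)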
An attacker can cause excessive memory growth in a Go server accepting
HTTP/2 requests. HTTP/2 server connections contain a cache of HTTP
header keys sent by the client. While the total number of entries in
this cache is capped, an attacker sending very large keys can cause
the server to allocate approximately 64 MiB per open connection.
The config question "Use auto config?" confused many users and led to
recurring forum posts from users who were unaware that they were using
a remote or headless machine.
This commit makes the question and possible options more descriptive
and precise.
This commit also adds references to the guide on remote setup in the
documentation of backends using oauth as primary authentication.
This was caused by
a9bd0c8de6 s3: reduce memory consumption for s3 objects
Which assumed that the StorageClass would always be set, but it isn't
set for Versions.
The BSD-style license that Go uses requires the license to be included
with the source distribution; so add it as LICENSE.wasmexec (to avoid
confusion with the other licenses in rclone) and note the location of
the license in wasm_exec.js itself.
This updates the authentication to include
- Auth from the environment
1. Environment Variables
2. Managed Service Identity Credentials
3. Azure CLI credentials (as used by the az tool)
- Account and Shared Key
- SAS URL
- Service principal with client secret
- Service principal with certificate
- User with username and password
- Managed Service Identity Credentials
And rationalises the auth order.
Normally rclone will check the container exists before uploading if it
hasn't listed the container yet.
Often rclone will be running with a limited set of permissions which
means rclone can't create the container anyway, so this stops the
check.
This will save a transaction.
This commit switches from using the old Azure go modules
github.com/Azure/azure-pipeline-go/pipeline
github.com/Azure/azure-storage-blob-go/azblob
github.com/Azure/go-autorest/autorest/adal
To the new SDK
github.com/Azure/azure-sdk-for-go/
This stops rclone using deprecated code and enables the full range of
authentication with Azure.
See #6132 and #5284
Before this change, rclone would enter a listing loop if it used v2
listing on a v1 server and the list exceeded 1000 items.
This change detects the problem and gives the user a helpful message.
Fixes#6600
* fs: add TerminalColorMode type
* fs: add new config(flags) for TerminalColorMode
* lib/terminal: use TerminalColorMode to determine how to handle colors
* Add documentation for '--terminal-color-mode'
* tree: remove obsolete --color replaced by global --color
This changes the default behaviour of tree. It now displays colors by
default instead of only displaying them when the flag -C/--color was
active. Old behaviour (no color) can be achieved by setting --color to
'never'.
Fixes: #6604
Copying the storageClass string instead of using a pointer to the original string.
This prevents the Go garbage collector from keeping large amounts of
XMLNode structs and references in memory, created by xmlutil.XMLToStruct()
from the aws-sdk-go.
This commit uses the MLST command (where available) to get the status
for single files rather than listing the parent directory and looking
for the file. This makes actions such as using `--files-from` much quicker.
* use getEntry to lookup remote files when supported
* findItem now expects the full path directly
It makes the expected argument similar to the getInfo method, the
difference now is that one is returning a FileInfo whereas
the other is returning an ftp Entry.
Fixes#6225
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The previous version used values after the maximum Unicode code-point
to encode a key. This could lead to an overflow since a key is an int16,
a rune is an int32 and the maximum Unicode code-point is larger than an int16.
A better solution is to simply use negative runes for keys.
Before this fix, opening a file with `O_CREATE|O_RDONLY` caused an IO error to
be returned when using `--vfs-cache-mode off` or `--vfs-cache-mode writes`.
This was because the file was opened with read intent, but the `O_CREATE`
implies write intent to create the file even though the file is opened
`O_RDONLY`.
This fix sets write intent for the file if `O_CREATE` is passed in which fixes
the problem for all the VFS cache modes.
It also extends the exhaustive open flags testing to `--vfs-cache-mode writes`
as well as `--vfs-cache-mode full` which would have caught this problem.
See: https://forum.rclone.org/t/i-o-error-trashing-file-on-sftp-mount/34317/
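A minimal sketch of the intent check (function name illustrative):

    import "os"

    // hasWriteIntent reports whether open flags imply writing;
    // O_CREATE counts even for O_RDONLY opens because the file may
    // need to be created.
    func hasWriteIntent(flags int) bool {
        return flags&(os.O_WRONLY|os.O_RDWR|os.O_CREATE) != 0
    }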
Before this fix a chain compress -> crypt -> s3 was giving errors
BadDigest: The Content-MD5 you specified did not match what we received.
This was because the crypt backend was encrypting the underlying local
object to calculate the hash rather than the contents of the metadata
stream.
It did this because the crypt backend incorrectly identified the
object as a local object.
This fixes the problem by making sure the crypt backend does not
unwrap anything but fs.OverrideRemote objects.
See: https://forum.rclone.org/t/not-encrypting-or-compressing-before-upload/32261/10
Before this change we were putting connections into the connection
pool which had a local context in.
This meant that when the operation had finished the context was
cancelled and the connection became unusable.
See: https://forum.rclone.org/t/failed-to-sync-context-canceled/34017/
Before this patch a deadlock could occur if the cache cleaner was
running when an object upload finished.
This fixes the problem by delaying marking the object as clean until
we have notified the VFS layer. This means that the cache cleaner
won't consider the object until **after** the VFS layer has been
notified, thus avoiding the deadlock.
See: https://forum.rclone.org/t/rclone-mount-deadlock-when-dir-cache-time-strikes/33486/
Before this change, when using --server-side-across-configs rclone
would direct Move/Copy/DirMove to the destination server.
However this should be directed to the source server. This is a little
unclear in the RFC, but the name of the parameter "Destination:" seems
clear and this is how dCache and Rucio have implemented it.
See: https://forum.rclone.org/t/webdav-copy-request-implemented-incorrectly/34072/
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We introduced the concept of local backend filters.
Unfortunately the filters were being applied before we had resolved
the symlink to point to a directory. This meant that symlinks pointing
to directories were filtered out when they shouldn't have been.
This was fixed by moving the filter check until after the symlink had
been resolved.
See: https://forum.rclone.org/t/copy-links-not-following-symlinks-on-1-60-0/34073/7
Before this change we truncated files in the backing store regardless
of whether we needed to or not.
After, we check to see if the file is the right size and don't
truncate if it is.
Apparently Windows Defender likes to check executables each time they
are modified, and truncating a file to its existing size is enough to
trigger the Windows Defender scan. This was causing a big slowdown for
operations which opened and closed the file a lot, such as looking at
properties on an executable.
See: https://forum.rclone.org/t/for-mount-sftp-why-right-click-on-exe-file-is-so-slow-until-it-freezes/33830
Fish is different from POSIX-based Unix shells such as bash,
and a bracketed variable reference like we use for the
auto-detection echo command is not supported. The command
will return with zero exit code but produce no output on
stdout. There is a message on stderr, but we don't log it
due to the zero exit code:
fish: Variables cannot be bracketed. In fish, please use {$ShellId}.
Fixes#6552
Before this change if we copied files of unknown size, then they lost
their metadata.
This was particularly noticeable using --s3-decompress.
This change adds metadata to Rcat and RcatSized and changes Copy to
pass the metadata in when it calls Rcat for an unknown sized input.
Fixes#6546
Before this patch, when an alias backend was created it would be
renamed to be canonical and in the process Shutdown would be called on
it. This was particularly noticeable with the dropbox backend which
gave this error when uploading files after the backend was Shutdown.
Failed to copy: upload failed: batcher is shutting down
This patch fixes the cache Rename code not to finalize objects if the
object that is being overwritten is the same as the existing object.
See: https://forum.rclone.org/t/upload-failed-batcher-is-shutting-down/33900
Solves link error while running rclone's wasm version. Go's `walltime1` function was renamed to `walltime`. This commit updates wasm_exec.js with the new name.
Before this change, some files were giving this error when downloaded
from Cloudflare and other providers.
ERROR corrupted on transfer: sizes differ NNN vs MMM
This is because these providers auto gzip the object when rclone
isn't expecting them to. (AWS does not gzip objects unless they were
uploaded gzipped.)
This patch adds a quirk to fix the problem and a flag to control
it. The quirk `might_gzip` is set to `true` for all providers except
AWS.
See: https://forum.rclone.org/t/s3-error-corrupted-on-transfer-sizes-differ-nnn-vs-mmm/33694/
Fixes: #6533
Before this fix it was impossible to stop rclone generating an
X-Amz-Acl: header which is incompatible with GCS with uniform access
control and is generally deprecated at AWS.
The API endpoint GetBucketLocation requires
top level permission.
If we do an authenticated head request to a bucket, the bucket location will be returned in the HTTP headers.
Fixes#5066
This fixes vulnerability GO-2022-0969 reported by govulncheck:
HTTP/2 server connections can hang forever waiting for a clean
shutdown that was preempted by a fatal error. This condition can
be exploited by a malicious client to cause a denial of service.
Call stacks in your code:
Error: cmd/serve/restic/restic.go:150:22: github.com/rclone/rclone/cmd/serve/restic.init$1$1 calls golang.org/x/net/http2.Server.ServeConn
Found in: golang.org/x/net/http2@v0.0.0-20220805013720-a33c5aa5df48
Fixed in: golang.org/x/net/http2@v0.0.0-20220906165146-f3363e06e74c
More info: https://pkg.go.dev/vuln/GO-2022-0969
Before this change if --use-server-modtime was in use the ModTime
could change for an object as we receive it accurate to the nearest ms
in listings, but only accurate to the nearest second in HEAD and GET
requests.
Normally AWS returns the milliseconds as .000 in listings, but if
versions are in use it may not. Storj S3 also seems to return
milliseconds.
This patch tries to keep the maximum precision in the last modified
time, so it doesn't update a last modified time with a truncated
version if the times were the same to the nearest second.
See: https://forum.rclone.org/t/cache-fingerprint-miss-behavior-leading-to-false-positive-stalen-cache/33404/
Before this change rclone used statx() to read the metadata for files
from the local filesystem when `-M` was in use.
Unfortunately statx() was only introduced in kernel 4.11 which was
released in April 2017 so there are current systems (eg Centos 7)
still on kernel versions which don't support statx().
This patch checks to see if statx() is available and if it isn't, it
falls back to using fstatat() which was introduced in Linux 2.6.16
which is guaranteed for all Go versions.
See: https://forum.rclone.org/t/metadata-from-linux-local-s3-failed-to-copy-failed-to-read-metadata-from-source-object-function-not-implemented/33233/
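A sketch of the runtime fallback using golang.org/x/sys/unix
(simplified; rclone remembers the answer rather than probing on every
call):

    import "golang.org/x/sys/unix"

    // stat uses statx(2) where available and falls back to
    // fstatat(2) on kernels older than 4.11.
    func stat(path string) error {
        var stx unix.Statx_t
        err := unix.Statx(unix.AT_FDCWD, path, 0, unix.STATX_ALL, &stx)
        if err != unix.ENOSYS {
            return err
        }
        var st unix.Stat_t
        return unix.Fstatat(unix.AT_FDCWD, path, &st, 0)
    }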
The current default AnnounceInterval is too short, causing the
multicast domain to be flooded with NOTIFY announcements,
which may prevent other dlna devices from sleeping.
This change allows users to set the announcement interval, and its
default value has also been increased to 12 minutes.
Even within the interval, rclone can still passively respond to
M-SEARCH requests from other devices.
Verify the http service listening address and the SSDP server
announcement address to prevent accidental listening of IPv6 addresses
that do not support dlna yet and may be globally accessible.
Unlistened addresses on the interface will also be filtered out of the
SSDP announcement to avoid misleading other services in the multicast domain.
Before this change, if a mount was created via the rc but unmounted
externally with `fusermount -u` say, rclone would still believe the mount
was active when it wasn't.
According to [systemd.automount](https://www.freedesktop.org/software/systemd/man/systemd.automount.html) manual
> Note that automount units are separate from the mount itself, so you should
> not set After= or Requires= for mount dependencies here.
> For example, you should not set After=network-online.target or
> similar on network filesystems. Doing so may result in an ordering cycle.
In https://github.com/jlaffaye/ftp/commit/212daf295f the upstream FTP
library changed the way adding your own dialer works which meant that
connections when using explicit FTP were failing.
This patch reworks our connection code to bring it into the
expectations of the library.
Before this fix, if an error occurred reading the metadata, it could be
set as nil and then used, causing a crash.
This fix changes the readMetadata function so it returns an error, and
the error is always set if the metadata returned is nil.
If mkdir fails then before this change it would have thrown an
error.
After this change, if the error indicated that the directory
already exists then the error is not returned to the user.
This fixes a race condition when two rclone threads are trying to
create the same directory.
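The pattern is roughly:

    import "os"

    // mkdir tolerates another thread creating the directory first.
    func mkdir(dir string) error {
        if err := os.Mkdir(dir, 0777); err != nil && !os.IsExist(err) {
            return err
        }
        return nil
    }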
Fixes issue with spacing between icon and text in backend docs headers.
This reverts the changes from PR #5889 and #5701, which aligned menu/dropdown items when
icons have different sizes, and implements an alternative fix which gives slightly better
results, and also is more of a native Font Awesome solution:
Font Awesome icons are designed on a grid and share a consistent height. But they vary in
width depending on how wide or narrow each symbol is. If you prefer to work with icons
that have a consistent width, adding fa-fw will render each icon using the same width.
A very common mistake for new users of rclone is to use a remote name
without a colon. This can be on the command line or in the config when
setting up a crypt backend.
This change checks to see if the user uses a path which matches a
remote name and gives a NOTICE like this if they do
NOTICE: "remote" refers to a local folder, use "remote:" to refer to your remote or "./remote" to hide this warning
See: https://forum.rclone.org/t/sync-to-onedrive-personal-lands-file-in-localfilesystem-but-not-in-onedrive/32956
In this commit
8d1fff9a82 local: obey file filters in listing to fix errors on excluded files
We started using filters in the local backend so the user could short
circuit troublesome files/directories at a low level.
However this caused a number of integration tests to fail. This turned
out to be in backends wrapping the local backend. For example the
combine backend test failed because it changes the paths passed to the
local backend so they no longer match the paths in the current filter.
To fix this, a new feature flag `FilterAware` was added and the
UseFilter context flag is only passed to backends which support it. As
the wrapping backends don't support the flag, this fixes the problems
in the integration tests.
In future the wrapping backends could modify the active filters to
match the path modifications and then they could set the FilterAware
flag.
See #6376
Before this change we assumed that github.com/Unknwon/goconfig was
threadsafe as documented.
However it turns out it is not threadsafe and looking at the code it
appears that making it threadsafe might be quite hard.
So this change increases the lock coverage in configfile to cover the
goconfig uses also.
Fixes#6378
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size of the object to calculate the chunk
sizes.
This meant that uploading a replacement object which needed a new
chunk size would fail, using too many parts.
This fix fixes the calculator to take the size explicitly.
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.
This patch makes rclone detect the not supported errors and disable
xattrs from then on. It prints one ERROR level message about this.
See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
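A sketch of the detection, assuming the github.com/pkg/xattr package
(error matching simplified):

    import (
        "errors"
        "syscall"

        "github.com/pkg/xattr"
    )

    // xattrSupported reports whether the filesystem under path
    // supports extended attributes.
    func xattrSupported(path string) bool {
        _, err := xattr.LList(path)
        return !errors.Is(err, syscall.ENOTSUP)
    }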
Changes in github.com/anacrolix/dms changed upnp.ServiceURN to include a
namespace identifier. This identifier was previously hardcoded, but is
now parsed out of the URN. The old SOAP action header parsing logic was
duplicated in rclone and did not handle this field. Resulting responses
included a URN with an empty namespace identifier, breaking clients.
Before this change, rclone serve sftp - operating with a new rclone
after the md5sum/sha1sum detection was reworked to just run a plain
`md5sum`/`sha1sum` command in
3ea82032e7 sftp: support md5/sha1 with rsync.net #3254
- failed to signal to the remote that md5sum/sha1sum wasn't supported,
as provided for in
71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands
Instead we unconditionally returned good hashes even if the remote
being served doesn't support the hash type in question.
This fix checks the hash type is supported and returns an error
MD5 hash not supported
When the backend is first contacted this will cause the sftp backend
to detect that the hash type isn't available.
Unfortunately this may have cached the wrong state so editing or
remaking the config may be necessary to fix it.
Before this fix, the parsing code gave an error like this
parsing "2022-08-02 07:00:00" as fs.Time failed: expected newline
This was due to the Scan call failing to read all the data.
This patch fixes that, and redoes the tests
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur since the
go runtime automatically decompressed the object on download.
If --s3-decompress is set, this change erases the length and hash on
compressed objects so they can be downloaded successfully, at the cost
of not being able to check the length or the hash of the downloaded
object.
If --s3-decompress is not set the compressed files will be downloaded
as-is providing compressed objects with intact size and hash
information.
See #2658
Before this fix, the dropbox backend wasn't decoding the file names
received in changenotify events into rclone standard format.
This meant that changenotify events for filenames which had encoded
characters were failing to be decrypted properly if wrapped in crypt.
See: https://forum.rclone.org/t/rclone-vfs-cache-says-file-name-too-long/31535
Before this change the VFS cache cleaner would loop indefinitely while
the cache was above quota. This used up all the CPU.
This fix prevents the cache cleaner from looping. It will be kicked on
ENOSPACE and run in its scheduled time otherwise so this should be
sufficient.
See: https://forum.rclone.org/t/vfs-keeps-checking-same-files/32120
Before this patch backends could be shut down when they fell out of the
cache while they were in use with combine. This was particularly
noticeable with the dropbox backend which gave this error when
uploading files after the backend was Shutdown.
Failed to copy: upload failed: batcher is shutting down
This patch gets the combine remote to pin them until it is finished.
See: https://forum.rclone.org/t/rclone-combine-upload-failed-batcher-is-shutting-down/32168
Before this change --compare-dest and --copy-dest would check to see
if the compare/copy object existed first, before seeing if the
destination object was present.
This is inefficient, because in most --copy-dest syncs the destination
will be present and the compare/copy object need never be tested.
--compare-dest syncs may also be sped up if they are done to the
same directory repeatedly.
This fixes the problem by re-arranging the logic so if the transfer is
not needed then the compare/copy object is never tested.
See: https://forum.rclone.org/t/union-with-copy-dest-enabled-is-slower-than-expected/32172
Before this change the android build started failing with
gomobile: ANDROID_NDK_HOME specifies /usr/local/lib/android/sdk/ndk/25.0.8775105
which is unusable: unsupported API version 16 (not in 19..33)
This was caused by a change to github actions, but is ultimately due
to an issue in gomobile with the newest version of the SDK.
This change fixes the problem by declaring a minimum API version of 21
and using version 21 compilers to build everything and using the
default NDK in github actions.
See: https://github.com/actions/virtual-environments/issues/5930
See: https://github.com/lightningnetwork/lnd/issues/6651
Previously, with standard auth, the username would be stored in config - but only after
entering the non-standard device/mountpoint sequence during config (a feature introduced
with #5926). Regardless of that, rclone always requests the username from the api at
startup (NewFS).
In #6270 (commit 9dbed02329) this was changed to always
store username in config (consistency), and then also use it to avoid the repeated
customer info request in NewFs (performance). But, as reported in #6309, it did not work
with legacy auth, where user enters username manually, if user entered an email address
instead of the internal username required for api requests. This change was therefore
recently reverted.
The current commit takes another step back to not store the username in config during
the non-standard device/mountpoint config sequence (consistency). The username will
now only be stored in config when using legacy auth, where it is an input parameter.
Extend the shouldRetry function by also checking for the quotaExceeded
reason, and since this function appeared to be untested, add a test case
for the existing errors and this new one.
Fixes#615
In
22abd785eb s3: implement reading and writing of metadata #111
The reading of object information was refactored to use the
s3.HeadObjectOutput structure.
Unfortunately the code branch with `--s3-no-head` was not tested
otherwise this panic would have been discovered.
This shows that this code path is not integration tested, so this
adds a new integration test.
Fixes#6322
`FS.cacheExpiry` is accessed through sync/atomic.
According to the documentation, "On ARM, 386, and 32-bit MIPS, it is
the caller's responsibility to arrange for 64-bit alignment of 64-bit
words accessed atomically. The first word in a variable or in an
allocated struct, array, or slice can be relied upon to be 64-bit
aligned."
Before commit 1d2fe0d856 this field was
aligned, but then a new field was added to the structure, causing the
test suite to panic on linux/386.
No other field is used with sync/atomic, so `cacheExpiry` can just be
placed at the beginning of the struct to ensure it is always aligned.
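Illustrating the layout constraint (struct trimmed to the relevant
field):

    import "sync/atomic"

    // FS keeps cacheExpiry first so the int64 is 64-bit aligned on
    // 32-bit platforms (ARM, 386, MIPS), as sync/atomic requires.
    type FS struct {
        cacheExpiry int64
        // ... other fields go after the atomically accessed word
    }

    func (f *FS) expiry() int64 {
        return atomic.LoadInt64(&f.cacheExpiry)
    }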
By default these will be downloaded compressed.
This changes the default of the previous commit
2781f8e2f1 gcs: Fix download of "Content-Encoding: gzip" compressed objects
But will fit in better with the metadata framework when copying
gzip-encoded objects from backend to backend.
strings.Title has been deprecated since Go 1.18 and an alternative has been
available since Go 1.0. The rule Title uses for word boundaries does not handle
Unicode punctuation properly. Use golang.org/x/text/cases instead.
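The replacement looks roughly like this (language.Und stands in for
whatever tag is appropriate):

    import (
        "golang.org/x/text/cases"
        "golang.org/x/text/language"
    )

    // title is a Unicode aware replacement for the deprecated
    // strings.Title.
    func title(s string) string {
        return cases.Title(language.Und).String(s)
    }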
Most of the time this will make no difference to user logs, however
the difference may be visible in JSON logs and on the rare occasions
src and dst are pointing to different file names.
Existing version did save username in config, but only when entering the custom
device/mountpoint sequence in config. Regardless of that, it did always look up the
username at startup with an api request.
This commit improves it so that the username will always be stored in config,
and when using standard authentication it picks it from the login token instead of
requesting it from the remote api, and also in fs constructor it picks it from config
instead of requesting it from remote api (again).
This change ensures we call the Shutdown method on backends when
they drop out of the fs/cache and at program exit.
Some backends implement the optional fs.Shutdowner interface. Until now,
Shutdown is only checked and called, when a backend is wrapped (e.g.
crypt, compress, ...).
To have a general way to perform operations at the end of the backend
lifecycle with proper error handling, we can call Shutdown at cache
clear time.
We add a finalize hook to the cache which will be called when values
drop out of the cache.
Previous discussion: https://forum.rclone.org/t/31336
This was caused by nested calls to NewTransfer/Done.
This fixes the problem by only incrementing transfers if the remote is
present in the transferMap which means we only increment it once.
This message is a double panic and was actually caused by an assertion
panic in:
vfs/vfscache/downloaders/downloaders.go
This is triggered by the code added relatively recently to fix a bug
with renaming files:
ec72432cec vfs: fix failed to _ensure cache internal error: downloaders is nil error
So it appears that item.o may be nil at this point.
This patch detects item.o being nil and fetches it again with NewObject.
Fixes#6190
Fixes#6235
The https://github.com/nsf/termbox-go library is no longer maintained
so this change replaces it with the maintained
github.com/gdamore/tcell library which has a termbox backwards
compatibility layer.
There are a few minor changes from the termbox library:
- Using Clear with fg bg ColorDefault resulted in a white background
  for some reason, so Clear with fg ColorWhite bg ColorBlack was used
  instead.
- tcell's termbox wrapper doesn't support ColorLightYellow, so
  ColorYellow + 8 was used instead.
The SDK doesn't wrap errors in a Go standard way so they can't be
unwrapped and tested for - eg fatal error.
The code looks for a Serialization or RequestError and returns the
unwrapped underlying error if possible.
This fixes the fs/operations integration tests checking for fatal
errors being returned.
In this commit
e5974ac4b0 s3: use PutObject from the aws SDK to upload single part objects
rclone was made to upload objects to s3 using PUT requests rather than
using signed uploads.
However this change missed the fact that there is a supported way to
do this in the SDK using the SetStreamingBody method on the Request.
This therefore reverts a lot of the previous commit to do with making
an unsigned connection and other complication and uses the SDK
facility.
Before this change using --max-duration and --cutoff-mode soft would
work like --cutoff-mode hard.
This bug was introduced in this commit which made transfers be
cancelable - before that transfers couldn't be canceled.
122a47fba6 accounting: Allow transfers to be canceled with context #3257
This change adds the timeout to the input context for reading files
rather than the transfer context so the files transfers themselves
aren't canceled if --cutoff-mode soft is in action.
This also adds a test for cutoff mode soft and max duration which was
missing.
See: https://forum.rclone.org/t/max-duration-and-retries-not-working-correctly/27738
Before this fix GetFreeSpace returned math.MaxInt64 for remotes which
don't support reading free space, however this is used in various
comparison routines as a too large value, meaning that remotes of size
math.MaxInt64 were never being selected.
This fixes GetFreeSpace to return math.MaxInt64 - 1 so such remotes can be selected.
It also fixes GetUsedSpace the same way; however, as the default for not
supported was 0, this was very unlikely to have ever caused a problem.
Before this change we ran the tests and the mount in the same process.
This could cause deadlocks and often did, and made the mount tests
very unreliable.
This fixes the problem by running the mount in a separate process and
commanding it via a pipe over stdin/stdout.
By default, rclone always requests read and write permissions, no matter what
settings you configure in the AAD application. This option allows you to
explicitly request readonly permissions.
Migrated the read only option to an access scope option and set the
disable_site_permission option to hidden.
If the remote on the command line is "remote:subdir", when
deleting "filename", the confirmation message shows the path
"remote:subdirfilename".
Using fspath.JoinRootPath() fixes this. Also use this function
and fs.ConfigString() in other parts of the file, since they
are more appropriate.
In commit
3ccf222acb sync: overlap check is now filter-sensitive
The tests were attempting to write invalid objects on some backends
due to a leading / on the object name.
This fix also adds a few more test cases and makes sure the tests can
be run individually.
In commit
da404dc0f2 sync,copy: Fix --fast-list --create-empty-src-dirs and --exclude
The fix caused DirTree.AddDir to be called with the root directory.
This in turn caused a spurious directory entry in the DirTree which
caused tests with the -fast-list flag to fail with directory not found
errors.
Windows shells like cmd and powershell need to use different quoting/escaping
of strings and paths than the unix shell, and also absolute paths must be fixed
by removing leading slash that the POSIX formatted paths have
(e.g. /C:/Users does not work in shell, it must be converted to C:/Users).
Tries to autodetect shell type (cmd, powershell, unix) on first use.
Implemented default builtin powershell functions for hashsum and about when remote
shell is powershell.
See #5763
Fixes#5758
Differentiate output of 'config show remote' command from listing options as part
of interactive config process for consistency: 'config show remote' consistent with
'config show', while listing in interactive config consistent with other output.
See #6211
This adjusts
rclone backend drives -o config drive:
So that it also emits a config section called `AllDrives` which uses
the combine backend to make a backend which combines all the shared
drives into one.
It also makes sure that all the shared drive names are valid rclone
config names, deduplicating if necessary.
Fixes#4506
Previously, the overlap check was based on simple prefix checks of the source and destination paths. Now it actually checks whether the destination is excluded via any filter rule or a "--exclude-if-present"-file.
Before this change, if an object compressed with "Content-Encoding:
gzip" was downloaded, a length and hash mismatch would occur as the
go runtime automatically decompressed the object on download.
This change erases the length and hash on compressed objects so they
can be downloaded successfully, at the cost of not being able to check
the length or the hash of the downloaded object.
This also adds the --gcs-download-compressed flag to allow the
compressed files to be downloaded as-is providing compressed objects
with intact size and hash information.
Fixes#2658
Before this change, if --fast-list was in use while doing a sync or
copy with --create-empty-src-dirs and --exclude excluded all the files
from the directory (but not the directory), then the directory would
not be created.
This is also visible with `rclone tree` which uses the same tree
building approach as `rclone sync --fast-list` where the directories
would go missing from the tree view.
This was caused by not adding the parents of excluded files to the
directory tree.
See: https://forum.rclone.org/t/create-empty-src-dirs-issue-with-b2/30856
Before this fix, if uploading to a union consisting of all bucket
based remotes (eg s3), uploads failed with:
Failed to copy: object not found
This was because the union backend was relying on parent directories
being created to work out which files to upload. If all the upstreams
were bucket based backends which can't hold empty directories, no
directories were created and the upload failed.
This fixes the problem by returning the upstreams used when creating
the directory for the upload, rather than searching for them again
after they've been created.
This will also make the union backend a little more efficient.
Fixes#6170
strings.ReplaceAll(s, old, new) is a wrapper function for
strings.Replace(s, old, new, -1). But strings.ReplaceAll is more
readable and removes the hardcoded -1.
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
When using filepath.Dir, a difference to path.Dir is that it returns os PathSeparator
instead of slash when the path consists entirely of separators.
Also fixed casing of the function name, use OS in all caps instead of Os
as recommended here: https://github.com/golang/go/wiki/CodeReviewComments#initialisms
The "relative" argument was missing when Put'ing a file. This
sets an incorrect object entry in the cache, leading to the file being
unreadable when using mount functionality.
Fixes#6151
Cloudflare R2 doesn't support range options like `Range: bytes=21-`.
This patch makes FixRangeOption turn a SeekOption into an absolute
RangeOption like this `Range: bytes=21-25` to interoperate with R2.
See: #5642
Before this change FixRangeOption was leaving `Range: bytes=21-`
alone, thus not fulfilling its contract of making Range requests
absolute.
As it happens this form isn't supported by Cloudflare R2.
After this change the request is normalised to `Range: bytes=21-25`.
See: #5642
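For illustration, a minimal sketch of the normalisation described above, assuming the object size is known (the helper name is illustrative, not rclone's actual FixRangeOption):
```
package main

import "fmt"

// absoluteRange turns an open-ended offset into an absolute HTTP Range
// value given the total object size, e.g. an open-ended request at
// offset 21 on a 26 byte object becomes "bytes=21-25".
func absoluteRange(offset, size int64) string {
	return fmt.Sprintf("bytes=%d-%d", offset, size-1)
}

func main() {
	fmt.Println("Range: " + absoluteRange(21, 26)) // Range: bytes=21-25
}
```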
Before this change rclone used presigned requests to upload single
part objects. This was because of a limitation in the SDK which didn't
allow non seekable io.Readers to be passed in.
This is incompatible with some S3 backends, and rclone wasn't adding
the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` header which was
incompatible with other S3 backends.
The SDK now allows for this so rclone can use PutObject directly.
This sets the `X-Amz-Content-Sha256: UNSIGNED-PAYLOAD` flag on the PUT
request. However rclone will add a `Content-Md5` header if at all
possible so the body data is still protected.
Note that the old behaviour can still be configured if required with
the `use_presigned_request` config parameter.
Fixes#5422
Changes made specifically for macOS and that style of system.
Paths are established/defined in a single place and modes are set automatically
when created. (Platform specific.)
Before this running `go mod tidy` caused the build to break because it
removed the dependency on golang.org/x/mobile and a command line tool
from this package is needed for the build.
This adds an explicit dependency which will mean the tool is always present.
This builds all windows binaries without CGO but with cmount.
cgofuse has a compile mode which works without CGO on Windows for
amd64/x86/arm64 architectures so switch to using that.
Uses b2_list_file_versions to retrieve all file versions, and returns
the one that was active at the specified time
This is especially useful in combination with other backup tools, such
as restic, which may use rclone as a backend.
Similar to fs.Duration but parses into a timestamp instead
Supports parsing from:
* Any of the date formats in parseTimeDates
* A time.Duration offset from now
* parseDurationSuffixes offset from now
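As a rough sketch of that parsing order, with RFC3339 standing in for the parseTimeDates formats:
```
package main

import (
	"fmt"
	"time"
)

// parseTimeOrDuration accepts an absolute timestamp, or a duration
// offset back from now, in the spirit of the flag described above.
func parseTimeOrDuration(s string) (time.Time, error) {
	if t, err := time.Parse(time.RFC3339, s); err == nil {
		return t, nil
	}
	if d, err := time.ParseDuration(s); err == nil {
		return time.Now().Add(-d), nil
	}
	return time.Time{}, fmt.Errorf("invalid time or duration %q", s)
}

func main() {
	t, _ := parseTimeOrDuration("100h")
	fmt.Println(t) // about 100 hours ago
}
```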
Jottacloud have several different apis and endpoints using a mix of different
timestamp formats. In existing code the list operation (after the recent liststream
implementation) uses format from golang's time.RFC3339 constant. Uploads (using the
allocate api) uses format from a hard coded constant with value identical to golang's
time.RFC3339. And then we have the classic JFS time format, which is similar to RFC3339
but not identical, using a different constant. Also the naming is a bit confusing,
since the term api is used both as a generic term and also as a reference to the
newer format used in the api subdomain where the allocate endpoint is located.
This commit refactors these things a bit.
Now using the utility function for deduplication that was newly implemented to
fix an issue with server-side copy. This function uses the original, and generic,
"jfs" api (and its "cphash" feature), instead of the newer "allocate" api dedicated
for uploads. Both apis support similar deduplication functionality that we rely on for
the SetModTime operation. One advantage of using the jfs variant is that the allocate
api is specialized for uploads, an initial request performs modtime-only changes and
deduplication if possible but if not possible it creates an incomplete file revision
and returns a special url to be used with a following request to upload missing content.
In the SetModTime function we only sent the first request, using metadata from existing
remote file but different timestamps, which led to a modtime-only change. If, for some
reason, this should fail it would leave the incomplete revision behind. Probably not
a problem, but the jfs implementation used with this commit is simpler and
a more "standalone" request which either succeeds or fails without expecting additional
requests.
A strange feature (probably bug) in the api used by the server-side copy implementation
in Jottacloud backend is that if the destination file is in trash, the copy request
succeeds but the destination will still be in trash! When this situation occurs in
rclone, the copy command will fail with "Failed to copy: object not found" because
rclone verifies that the file info in the response from the copy request is valid,
and since it is marked as deleted it is treated as invalid.
This commit works around this problem by looking for this situation in the response
from the copy operation, and sending an additional request to a built-in deduplication
endpoint that will restore the file from trash.
Fixes#6112
Some backends may not provide size for all objects, and instead
return -1. Existing version included these in directory sums,
with strange results. With this commit rclone ncdu will consider
negative sizes as zero, but add a new prefix flag '~' with a
description that indicates the shown size is inaccurate.
Fixes#6084
In this commit
f4c40bf79d mount: add --devname to set the device name sent to FUSE for mount display
The --devname parameter was added. However it was soon noticed that
attempting to mount via the rc gave this error:
mount helper error: fusermount: unknown option 'fsname'
mount FAILED: fusermount: exit status 1
This was because the DeviceName (and VolumeName) parameter was never
being initialised when the mount was called via the rc.
The fix for this was to refactor the rc interface so it called the
same Mount method as the command line mount which initialised the
DeviceName and VolumeName parameters properly.
This also fixes the cmd/mount tests which were breaking in the same
way but since they aren't normally run on the CI we didn't notice.
Fixes#6044
The existing code in rclone set the value "offline_access+openid",
when encoded in body it will become "offline_access%2Bopenid". I think
this is wrong. Probably an artifact of "double urlencoding" mixup -
either in rclone or in the jottacloud cli tool version it was sniffed
from? It does work, though. The token received will have scopes "email
offline_access" in it, and the same is true if I change to only
sending "offline_access" as scope.
If a proper space delimited list of "offline_access openid"
is used in the request, the response also includes openid scope:
"openid email offline_access". I think this is more correct and this
patch implements this.
See: #6107
Adds a configuration option to the GCS backend to allow skipping the
check if a bucket exists before copying an object to it, much like
f406dbb added for S3.
When this code was originally implemented os.UserCacheDir wasn't
public so this used a copy of the code. This commit replaces that now
out of date copy with a call to the now public stdlib function.
Before this change the cache backend was passing -1 into
rate.NewLimiter to mean unlimited transactions per second.
In a recent update this immediately returns a rate limit error as
might be expected.
This patch uses rate.Inf as indicated by the docs to signal no limits
are required.
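The difference in miniature, using golang.org/x/time/rate:
```
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Before: a negative rate now fails immediately on Wait.
	bad := rate.NewLimiter(-1, 1)
	fmt.Println(bad.Wait(context.Background())) // rate limit error

	// After: rate.Inf is the documented way to say "no limit".
	good := rate.NewLimiter(rate.Inf, 0)
	fmt.Println(good.Wait(context.Background())) // <nil>
}
```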
Before this change the 206 responses from putio Range requests were being
returned as errors.
This change checks for 200 and 206 in the GET response now.
golang.org/x/crypto/ssh/terminal is deprecated in favor of
golang.org/x/term, see https://pkg.go.dev/golang.org/x/crypto/ssh/terminal
The latter also supports ReadPassword on solaris, so enable the
respective functionality in fs/config for solaris as well.
Before this change if the timezone was omitted in a
--min-age/--max-age time specifier then rclone defaulted to a UTC
timezone.
This is documented as using the local timezone if the time zone
specifier is omitted which is a much more useful default and this
patch corrects the implementation to agree with the documentation.
See: https://forum.rclone.org/t/problem-utc-windows-europe-1-summer-problem/29917
Before this fix, rclone retries chunks of multipart uploads. However
if they had been partially received dropbox would reply with an
incorrect_offset error which rclone was ignoring.
This patch parses the new offset from the error response and uses it
to adjust the data that rclone sends so it is the same as what dropbox
is expecting.
See: https://forum.rclone.org/t/dropbox-rate-limiting-for-upload/29779
This commit switches Google Cloud Storage from the drive pacer to the
s3 pacer. The main difference between them is that the s3 pacer does
not limit transactions in the non-error case. This is appropriate for
a cloud storage backend where you pay for each transaction.
Before this fix `NewObject` could return a wrapped `fs.Object(nil)`
which caused a crash. This was caused by `wrapObject` returning a
`nil` `*Object` which was cast into an `fs.Object`.
This changes the interface of `wrapObject` so it returns an
`fs.Object` instead of a `*Object` and an error which must be checked.
This forces the callers to return a `nil` object rather than an
`fs.Object(nil)`.
See: https://forum.rclone.org/t/panic-in-hasher-when-mounting-with-vfs-cache-and-not-synced-data-in-the-cache/29697/11
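The underlying Go pitfall, reduced to a sketch with illustrative types:
```
package main

import "fmt"

type Object struct{}

func (o *Object) String() string { return "object" }

// wrap returns a typed nil *Object, mimicking the buggy wrapObject.
func wrap() *Object { return nil }

func main() {
	var s fmt.Stringer = wrap()
	// The interface now holds a (*Object)(nil), so it is NOT equal to
	// nil - exactly the fs.Object(nil) crash described above.
	fmt.Println(s == nil) // false
}
```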
Having a replace directive in go.mod causes "go get
github.com/rclone/rclone" to fail as it discussed in this Go issue:
https://github.com/golang/go/issues/44840
This is apparently how the Go team want go.mod to work, so this commit
hard forks github.com/jlaffaye/ftp into github.com/rclone/ftp so we
can remove the `replace` directive from the go.mod file.
Fixes#5810
This error was caused by renaming an open file.
When the file was renamed in the cache, the downloaders were cleared,
however the downloaders were not re-opened when needed again, instead
this error was generated.
This fix re-opens the downloaders if they have been closed by renaming
the file.
Fixes#5984
Before this change rclone sent pre-1970 timestamps as negative
numbers. pCloud ignores these and sets them to today's date.
This change sends the timestamps as unsigned 64 bit integers (which is
how the binary protocol sends them) and pCloud accepts the (actually
negative) timestamp like this.
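The reinterpretation is just a bit-for-bit cast of the signed value; a sketch:
```
package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(1965, 1, 1, 0, 0, 0, 0, time.UTC)
	secs := t.Unix() // negative for pre-1970 times
	// Sending the same 64 bits as an unsigned integer is what the
	// binary protocol does, and pCloud accepts it.
	fmt.Println(secs, uint64(secs))
}
```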
There has been a desire from more advanced rclone users to have regexp
filtering as well as the glob filtering.
This patch adds regexp filtering using this syntax `{{ regexp }}`
which is currently a syntax error, so is backwards compatible.
This means regexps can be used everywhere globs can be used, and that
they also can be mixed with globs in the same pattern, eg `*.{{jpe?g}}`
Before this change, if the --max-duration limit was reached then
rclone would retry the sync as a fatal error wasn't raised.
This checks the deadline and raises a fatal error if necessary at the
end of the sync.
Fixes#6002
This stops the SFTP library issuing out of order writes which fixes
the problems uploading to `serve sftp` from the `sftp` backend.
This was fixed upstream in this pull request: https://github.com/pkg/sftp/pull/482 Fixes #5806
Before this change the new multipart upload ETag checking code was
failing in the integration tests with Alibaba OSS.
Apparently Alibaba calculate the ETag in a different way to AWS.
This introduces a new provider quirk with a flag to disable the
checking of the ETag for multipart uploads.
Multipart ETag checking has been enabled for all providers that we can
test and that work, and left disabled for the others.
Before this rclone ignored the ETag on multipart uploads which missed
an opportunity for a whole file integrity check.
This adds that check which means that we now check even harder that
multipart uploads have arrived properly.
See #5993
Before this change `rclone about swift:container` would show aggregate
info about all the containers, not just the one in use.
This causes a problem if container listing is disabled (for example in
the Blomp service).
This fix makes `rclone about swift:container` show only the info about
the given `container`. If aggregate info about all the containers is
required then use `rclone about swift:`.
See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/18
Before this change, rclone supported authorizing for remote systems by
going to a URL and cutting and pasting a token from Google. This is
known as the OAuth out-of-band (oob) flow.
This, while very convenient for users, has been shown to be insecure
and has been deprecated by Google.
https://developers.googleblog.com/2022/02/making-oauth-flows-safer.html#disallowed-oob
> OAuth out-of-band (OOB) is a legacy flow developed to support native
> clients which do not have a redirect URI like web apps to accept the
> credentials after a user approves an OAuth consent request. The OOB
> flow poses a remote phishing risk and clients must migrate to an
> alternative method to protect against this vulnerability. New
> clients will be unable to use this flow starting on Feb 28, 2022.
This change disables that flow, and forces the user to use the
redirect URL flow. (This is the flow used already for local configs.)
In practice this will mean that instead of cutting and pasting a token
for remote config, it will be necessary to run "rclone authorize"
instead. This is how all the other OAuth backends work so it is a well
tested code path.
Fixes#6000
This allows a backend to have multiple aliases. These aliases are
hidden from `rclone config` and the command line flags are hidden from
the user. However the flags, environment variables and config for the
alias will work just fine.
Before this change, the device name was always the remote:path rclone
was configured with. However this can contain sensitive information
and it appears in the `mount` output, so `--devname` allows the user
to configure it.
See: https://forum.rclone.org/t/rclone-mount-blomp-problem/29151/11
Making a client-id for Google Drive requires you to add two more fields
besides the already documented "Application name" field. This commit
documents what should be written for those two fields.
Fixes#5967
Before this change a multipart upload with the --no-head flag returned
the MD5SUM as a base64 string rather than a Hex string as the rest of
rclone was expecting.
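The conversion involved, sketched (the example sum is the MD5 of the empty string):
```
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

func main() {
	// An MD5 sum as some APIs return it (base64) versus how the rest
	// of rclone expects it (lowercase hex).
	b64 := "1B2M2Y8AsgTpgAmY7PhCfg=="
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		panic(err)
	}
	fmt.Println(hex.EncodeToString(raw)) // d41d8cd98f00b204e9800998ecf8427e
}
```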
Until Windows 10 version 2004 (May 2020) this can be found from registry entry
ReleaseID, after that we must use entry DisplayVersion (ReleaseId is stuck at 2009).
Source: https://ss64.com/nt/ver.html
Previously an empty input (just pressing enter) was only allowed for multiple-choice
options that did not have the Exclusive property set. With this change the existing
Required property is introduced into the multiple choice handling, so that one can have
Exclusive and Required options where only a value from the list is allowed, and one can
have Exclusive but not Required options where an empty value is accepted but any
non-empty value must still be matching an item from the list.
Fixes#5549
See #5551
This link appears to be broken, so here is another reference to (I think) the same file that provides a good example of cobra. We could also link to the current commit 8312004f41/cli/cobra.go although it might be better to maintain an up to date example.
MEGAcmd currently includes escaped HTML4 entities in its XML messages.
This behavior deviates from the XML standard, but currently it prevents
rclone from being able to use the remote.
Before this change attempting NewObject on a SAS URL's root would
crash the Azure SDK.
This change detects that using the code from this previous fix
f7404f52e7 azureblob: fix crash when listing outside a SAS URL's root - fixes#4851
And returns object not found instead.
It also prevents things being uploaded to the root of the SAS URL
which also crashes the Azure SDK.
Before this change the oauth webserver would crash if it received a
request to /robots.txt.
This patch makes it ignore (with 404 error) any paths it isn't
expecting.
Before this fix if a file was updated, but to the same length and
timestamp then the local backend would return the wrong (cached)
hashes for the object.
This happens regularly on a crypted local disk mount when the VFS
thinks files have been changed but actually their contents are
identical to that written previously. This is because when files are
uploaded their nonce changes so the contents of the file changes but
the timestamp and size remain the same because the file didn't
actually change.
This causes errors like this:
ERROR: file: Failed to copy: corrupted on transfer: md5 crypted
hash differ "X" vs "Y"
This turned out to be because the local backend wasn't clearing its
cache of hashes when the file was updated.
This fix clears the hash cache for Update and Remove.
It also puts a src and destination in the crypt message to make future
debugging easier.
Fixes#4031
Currently the B2 docs don't specify which format the download_url
setting should have, and if you input it wrong, there is nothing
in the verbose logs or anywhere else that can let you know that.
* Wasabi starts to provide AP Northeast 2 (Osaka) endpoint, so add it to the list
* Rename ap-northeast-1 as "AP Northeast 1 (Tokyo)" from "AP Northeast"
Signed-off-by: lindwurm <lindwurm.q@gmail.com>
After speed testing it was discovered that upload speed goes up pretty
much linearly with upload concurrency. This patch changes the default
from 4 to 16 which means that rclone will use 16 * 4M = 64M per
transfer which is OK even for low memory devices.
This adds a note that performance may be increased by increasing
upload concurrency.
See: https://forum.rclone.org/t/performance-of-rclone-vs-azcopy/27437/9
This patch adds rclone_http_status_code counter vector labeled by
* host,
* method,
* code.
It allows seeing HTTP errors, backoffs etc.
The Metrics struct is designed for extensibility.
Adding new metrics is a matter of adding them to Metrics struct and including them in the response handling.
This feature has been discussed in the forum [1].
[1] https://forum.rclone.org/t/prometheus-metrics/14484
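A minimal sketch of such a counter vector using the prometheus client library (metric and label names as described above; registration details may differ from rclone's):
```
package main

import "github.com/prometheus/client_golang/prometheus"

var httpStatusCodes = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "rclone_http_status_code",
		Help: "HTTP status codes seen in responses, by host, method and code.",
	},
	[]string{"host", "method", "code"},
)

func main() {
	prometheus.MustRegister(httpStatusCodes)
	// Incremented from the HTTP response handling:
	httpStatusCodes.WithLabelValues("example.com", "GET", "429").Inc()
}
```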
Whenever transfer.Account() is called, a new goroutine acc.averageLoop()
is started. This goroutine exits only when the channel acc.exit is closed.
acc.exit is closed when acc.Done() is called, which happens during tr.Done().
However, if tr.Reset is called during a copy low level retry, it replaces
the tr.acc, without calling acc.Done(), which results in the goroutine
mentioned above never exiting.
This commit calls acc.Done() during a tr.Reset()
This was necessary because go1.14 seems to have a modules related bug
which means it tries to build modules even though the uses of them are
all disabled with build constraints. This seems to be fixed in go1.15.
Previously only the fs being checked on gets passed to
GetModifyWindow(). However, in most tests, the test files are
generated in the local fs and transferred to the remote fs. So the
local fs time precision has to be taken into account.
This meant that on Windows the time tests failed because the
local fs has a time precision of 100ns. Checking remote items uploaded
from local fs on Windows also requires a modify window of 100ns.
This is possible now that we no longer support go1.12 and brings
rclone into line with standard practices in the Go world.
This also removes errors.New and errors.Errorf from lib/errors and
prefers the stdlib errors package over lib/errors.
This removes the checks against the provider throughout the code and
puts them into a single setQuirks function for easy maintenance when
adding a new provider.
It also updates the quirks with the results of testing against
backends we have access to.
This also adds a list_url_encode parameter so that quirk can be
manually set.
This implements a quirks system for providers and notes which
providers we have tested to support ListObjectsV2.
For those providers which don't support ListObjectsV2 we use the
original ListObjects call.
The test is not applicable to uptobox which can't upload empty files.
The test was not skipped as intended because the error was compared directly.
This fix compares the error Cause instead, because Sync wraps the error.
Some day in the past the Slash encode option was added to Onedrive
encoder so it began to encode slashes in file names rather than treat
them as path separators.
This patch adapts benchmark test cases accordingly.
Fixes#5659
Add an encoding test to make sure backends can deal with a URL encoded
path name. This is a fairly common failing in backends and has been an
intermittent problem with onedrive itself.
The API doesn't seem to accept a value of "0" any more for the root
directory ID, giving the error "Could not decode folder id".
However omitting it seems to work fine.
In this commit, released in 1.56.0 we started reading the size of the
object from the Content-Length header as returned by the GET request
to read the object.
4401d180aa s3: add --s3-no-head-object
However some object storage systems, notably Ceph, don't return a
Content-Length header.
The new code correctly calls the setMetaData function with a nil
pointer to the ContentLength.
However due to this commit from 2014, released in v1.18, the
setMetaData function was not ignoring the size as it should have done.
0da6f24221 s3: use official github.com/aws/aws-sdk-go including multipart upload #101
This commit correctly ignores the content length if not set.
Fixes#5732
Before this change the `shared_credentials_file` config option was
being ignored.
The correct value is passed into the SDK but it only sets the
credentials in the default provider. Unfortunately we wipe the default
provider in order to install our own chain if env_auth is true.
This patch restores the shared credentials file in the session
options, exactly the same as how we restore the profile.
Original fix:
1605f9e14d s3: Fix shared_credentials_file auth
This patch reverts this commit
1605f9e14d s3: Fix shared_credentials_file auth
It unfortunately had the side effect of making the s3 SDK ignore the
config in our custom chain and use the default provider. This means
that advanced auth was being ignored such as --s3-profile with
role_arn.
Fixes #5468 Fixes #5762
The API has changed in the directory move call JSON response from
returning a TaskID as a string to returning it as an integer. In other
places it is still returned as a string though.
This patch allows the TaskID to be an integer or a string in the JSON
response and keeps it internally as a string like before.
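One way to accept both forms is a custom unmarshaller; a sketch with an illustrative type name:
```
package main

import (
	"encoding/json"
	"fmt"
)

// TaskID accepts either a JSON string or a JSON integer and keeps
// the value internally as a string.
type TaskID string

func (t *TaskID) UnmarshalJSON(data []byte) error {
	var s string
	if err := json.Unmarshal(data, &s); err == nil {
		*t = TaskID(s)
		return nil
	}
	var n json.Number
	if err := json.Unmarshal(data, &n); err != nil {
		return err
	}
	*t = TaskID(n.String())
	return nil
}

func main() {
	var a, b TaskID
	json.Unmarshal([]byte(`"123"`), &a)
	json.Unmarshal([]byte(`456`), &b)
	fmt.Println(a, b) // 123 456
}
```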
The test fails because it expects that a copy with MaxTransfer and CutoffModeHard should
return a fatal error, because this is thrown from accounting (ErrorMaxTransferLimitReachedFatal),
but in case of Google Drive the external google API catches and replaces it with a
non-fatal error:
pw.CloseWithError(fmt.Errorf("googleapi: Copy failed: %v", err))
(7290f25351/internal/gensupport/media.go (L140))
The test TestIntegration/FsMkdir/FsPutFiles/FromRoot/ListR fails in
the integration test because there is a broken bucket in the test
account which support haven't been able to remove.
This tries to fix the integration tests by only allowing one
premiumizeme test to run at once, in the hope it will stop rclone
hitting the rate limits and breaking the tests.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard onedrive account, it returns
accessDenied: accountUpgradeRequired: Account Upgrade is required for this operation.
See: #5734
The TestIntegration/FsMkdir/FsPutFiles/PublicLink test doesn't work on
a standard dropbox account, only on an enterprise account because it
sets expiry dates.
See: #5734
After testing concurrent calling of `kv.Start` and `db.Stop` I had to restrict
more parts of these under the mutex to make results deterministic without sleeps
in the test body. It's safer but has the potential to lock Start for up to
2 seconds due to `db.open`.
Before this change we checked that features.ReadMimeTime was set if
and only if the Object.MimeType method was implemented.
However this test is overly general - we don't care if Objects
advertise MimeType when features.ReadMimeTime is set provided that
they always return an empty string (which is what a wrapping backend
might do).
This patch implements that logic.
Before this change the backoff for the error_background error was 6
seconds. This means that if it wasn't resolved in 60 seconds (with the
default 10 low level retries) then an error was reported.
This error was being reported frequently in the integration tests, so
is likely affecting real users too.
This patch changes the backoff into an exponential backoff
1,2,4,8...1024 seconds to make sure we wait long enough for the
background operation to complete.
See #5734
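The schedule, sketched:
```
package main

import (
	"fmt"
	"time"
)

// backoff returns the wait before retry n: 1, 2, 4, ... capped at 1024s.
func backoff(n uint) time.Duration {
	d := time.Second << n
	if d > 1024*time.Second {
		d = 1024 * time.Second
	}
	return d
}

func main() {
	for n := uint(0); n < 12; n++ {
		fmt.Print(backoff(n), " ") // 1s 2s 4s ... 17m4s 17m4s
	}
}
```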
- setup correct path encoding (fixes backend test FsEncoding)
- ignore range option if file is empty (fixes VFS test TestFileReadAtZeroLength)
- cleanup stray files left after failed upload (fixes test FsPutError)
- rebase code on master, adapt backend for rclone context passing
- translate Siad errors to rclone native FS errors in sia errorHandler
- TestSia: return proper backend options from the script
- TestSia: use uptodate AntFarm image, nebulouslabs/siaantfarm is stale
In
05f128868f azureblob: add --azureblob-no-head-object
we incorrectly parsed the size of the object as the Content-Length of
the returned header. This is incorrect in the presence of Range
requests.
This fixes the problem by parsing the Content-Range header if
available to read the correct length from if a Range request was
issued.
See: #5734
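A sketch of recovering the total length from a Content-Range header:
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// sizeFromContentRange extracts the total object size from a header
// such as "bytes 0-1023/146515".
func sizeFromContentRange(h string) (int64, error) {
	slash := strings.LastIndex(h, "/")
	if slash < 0 {
		return 0, fmt.Errorf("bad Content-Range %q", h)
	}
	return strconv.ParseInt(h[slash+1:], 10, 64)
}

func main() {
	size, err := sizeFromContentRange("bytes 0-1023/146515")
	fmt.Println(size, err) // 146515 <nil>
}
```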
This reverts commit
dc06973796 Revert "s3: use rclone's low level retries instead of AWS SDK to fix listing retries"
Which in turn reverted
5470d34740 "backend/s3: use low-level-retries as the number of SDK retries"
So we are back where we started.
It then modifies it to set the AWS SDK to `--low-level-retries`
retries, but set the rclone retries to 2 so that directory listings
can be retried.
Before this change the cleanup routine exited on the first deletion
error.
This change counts any errors on deletion and exits when the iteration
is complete with an error showing the number of deletion failures.
Deletion failures will be logged.
Before this change we uses limit/offset paging for directories in the
main directory listing routine and in the trash cleanup listing.
This switches to the new scheme of limit/marker which is more reliable
on a directory which is continuously changing. It has the disadvantage
that it doesn't tell us the total number of items available; however,
that wasn't information rclone used anyway.
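The shape of limit/marker listing, sketched (listPage stands in for the real API call):
```
package main

import "fmt"

// listPage is a stand-in for the backend's list call: it returns up to
// limit items starting after marker, plus the marker for the next page
// ("" when done).
func listPage(marker string, limit int) (items []string, next string) {
	// ... call the API here ...
	return nil, ""
}

func listAll() (all []string) {
	marker := ""
	for {
		items, next := listPage(marker, 1000)
		all = append(all, items...)
		if next == "" {
			return all
		}
		marker = next // resumes correctly even if the directory changed
	}
}

func main() {
	fmt.Println(len(listAll()))
}
```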
This was caused by
7a1cab57b6 cmd/hashsum: dont put ERROR or UNSUPPORTED in output
And was picked up in the integration tests.
This patch no longer calls the HashLister for unsupported hash types.
This changes the interface to NewObject so that if NewObject is called
on a directory then it should return fs.ErrorIsDir if possible without
doing any extra work, otherwise fs.ErrorObjectNotFound.
Tested on integration test server with:
go run integration-test.go -tests backend -run TestIntegration/FsMkdir/FsPutFiles/FsNewObjectDir -branch fix-stat -maxtries 1
Egress charges when using a CloudFront CDN URL are cheaper than when
accessing the file directly from S3. So this adds a download URL
advanced option which, when set, downloads the file using it.
Before this patch the md5all option would skip creating metadata with
hashsum if the base filesystem provided md5, in the hope of passing it through.
However, if base hash is slow (for example on local fs), chunker passed
slow md5 but never reported this fact in features.
This patch makes chunker snapshot base hashsum in metadata when md5all is
set and base hashsum is slow since chunker was intended to provide only
instant hashsums from the start.
Fixes#5508
Before this change, when uploading to a crypt, the ObjectInfo
accidentally used the encrypted size, not the unencrypted size when
--crypt-no-data-encryption was set.
Fixes#5498
In presence of no_data_encryption the Crypt's Put method used to over-optimize
and returned base object. This patch makes it return Crypt-wrapped object now.
Fixes#5498
Before this fix, rclone only generated an RSA server key when the user
didn't supply a key.
However the RSA server key is being deprecated as it is now insecure.
This patch generates an ECDSA server key too which will be used in
preference over the RSA key, but the RSA key will carry on working.
Fixes#5671
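Generating such a key and turning it into an SSH signer, sketched with golang.org/x/crypto/ssh:
```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// ECDSA P-256 key, served in preference to the legacy RSA key.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.NewSignerFromKey(key)
	if err != nil {
		panic(err)
	}
	fmt.Println(signer.PublicKey().Type()) // ecdsa-sha2-nistp256
}
```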
I discovered that `rclone` always upload in chunks of 16MiB whenever
uploading a file smaller than `--drive-upload-cutoff`. This is
undesirable since the purpose of the flag `--drive-upload-cutoff` is
to *prevent* chunking below a certain file size.
I realized that it wasn't `rclone` forcing the 16MiB chunks. The
`google-api-go-client` forces a chunk size default of
[`googleapi.DefaultUploadChunkSize`](32bf29c2e1/googleapi/googleapi.go (L55-L57))
bytes for resumable type uploads. This means that all requests that
use `*drive.Service` directly for upload without specifying a
`googleapi.ChunkSize` will be forced to use a *`resumable`*
`uploadType` (rather than `multipart`) for files less than
`googleapi.DefaultUploadChunkSize`. This is also noted directly in the
Drive API client documentation [here](https://pkg.go.dev/google.golang.org/api/drive/v3@v0.44.0#FilesUpdateCall.Media).
This fixes the problem by passing `googleapi.ChunkSize(0)` to
`Media()` method calls, which is the only way to disable chunking
completely. This is mentioned in the API docs
[here](https://pkg.go.dev/google.golang.org/api/googleapi@v0.44.0#ChunkSize).
The other alternative would be to pass
`googleapi.ChunkSize(f.opt.ChunkSize)` -- however, I'm *strongly* in
favor of *not* doing this for performance reasons. By not explicitly
passing a `googleapi.ChunkSize(0)`, we effectively allow
[`PrepareUpload()`](https://pkg.go.dev/google.golang.org/api/internal/gensupport@v0.44.0#PrepareUpload)
to create a
[`NewMediaBuffer`](https://pkg.go.dev/google.golang.org/api/internal/gensupport@v0.44.0#NewMediaBuffer)
that copies the original `io.Reader` passed to `Media()` in order to
check that its size is less than `ChunkSize`, which will unnecessarily
consume time and memory.
`minChunkSize` is also changed to be `googleapi.MinUploadChunkSize`,
as it is an externally specified value we have no control over.
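The fix in miniature, assuming a *drive.Service is in hand (a sketch, not rclone's actual upload code):
```
package main

import (
	"io"

	drive "google.golang.org/api/drive/v3"
	"google.golang.org/api/googleapi"
)

// upload sketches a single-shot upload: googleapi.ChunkSize(0) disables
// chunking entirely, so small files go up as one multipart request
// instead of a forced resumable upload in 16 MiB chunks.
func upload(svc *drive.Service, name string, in io.Reader) (*drive.File, error) {
	return svc.Files.
		Create(&drive.File{Name: name}).
		Media(in, googleapi.ChunkSize(0)).
		Do()
}

func main() {}
```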
Google Drive API allows for clauses like "modifiedTime > '2012-06-04T12:00:00'"
in the query param, so the filter flags --max-age and --min-age can be applied
directly at the directory listing phase rather than in a filter.
This is extremely helpful when we want to do an incremental backup of a remote
drive with many files but the number of recently changed file is small.
Co-authored-by: fotile96 <fotile96@users.noreply.github.com>
This patch will:
- add --daemon-wait flag to control the time to wait for background mount
- remove dependency on sevlyar/go-daemon and implement backgrounding directly
- avoid setsid during backgrounding as it can result in race under Automount
- provide a fallback PATH to correctly run `fusermount` under systemd as it
runs mount units without standard environment variables
- correctly handle ^C pressed while the background process is being set up
Current way of checking whether mountpoint has been already mounted (directory
list) can result in race if rclone runs under Automount (classic or systemd).
This patch adopts Linux ProcFS for the check. Note that mountpoint is considered
empty if it's tagged as "mounted" by autofs. Also ProcFS is used to check whether
rclone mount was successful (ie. tagged by a string containing "rclone").
On macOS/BSD where ProcFS is unavailable the old method is still used.
This patch also moves a few utility functions unchanged to utils.go:
CheckOverlap, CheckAllowings, SetVolumeName.
This replaces built-in os.MkdirAll with a patched version that stops the recursion
when reaching the volume part of the path. The original version would continue recursion,
and for extended length paths end up with \\? as the top-level directory, and the error
message would then be something like:
mkdir \\?: The filename, directory name, or volume label syntax is incorrect.
Before this change the union's feature flags were a strict AND of the
underlying remotes. This means that a union of a local disk (which can
Move but not Copy) and a bucket based remote (which can Copy but not
Move) could neither Move nor Copy.
This fix advertises Move in the union if all the remotes can Move or
Copy. It also implements Move as Copy+Delete (like rclone does
normally) if the underlying union does not support Move.
This enables renames to work with unions of local disk and bucket
based remotes as expected.
Fixes#5632
Before this change file handles could get closed while the truncate
the file handles loop was running.
This would mean that occasionally an ECLOSED (which is translated into
EBADF by cmd/mount) would spuriously be returned if Release happened
to happen in the middle of a Truncate call (Setattr called with
size=0).
This change ignores the ECLOSED while truncating file handles.
See: https://forum.rclone.org/t/writes-to-wasabi-mount-failing-with-bad-file-descriptor-intermittently/26321
This was started in
3626f10f26 pcloud: add sha256 support - fixes#5496
But this support turned out to be incomplete and caused the
integration tests to fail.
After updating rclone's dependencies these tests started failing on
windows/386
- TestInternalDoubleWrittenContentMatches
- TestInternalMaxChunkSizeRespected
The failures look like this. The root cause is unknown. The `Wait(n=1)
would exceed context deadline` errors come from golang.org/x/time/rate
but it isn't clear what is calling them.
2021/08/20 21:57:16 ERROR : worker-0 <one>: object open failed 0: rate: Wait(n=1) would exceed context deadline
[snip ~10 duplicates]
2021/08/20 21:57:56 ERROR : tidwcm1629496636/one: (0/26) error (chunk not found 0) response
2021/08/20 21:58:02 ERROR : worker-0 <one>: object open failed 0: rate: Wait(n=1) would exceed context deadline
--- FAIL: TestInternalDoubleWrittenContentMatches (45.77s)
cache_internal_test.go:310:
Error Trace: cache_internal_test.go:310
Error: Not equal:
expected: "one content updated double"
actual : ""
Diff:
--- Expected
+++ Actual
@@ -1 +1 @@
-one content updated double
+
Test: TestInternalDoubleWrittenContentMatches
2021/08/20 21:58:03 original size: 23592960
2021/08/20 21:58:03 updated size: 12
Switched from talking about "unchanged" files to "identical" files.
I found out the hard way that the rclone copy will overwrite newer files.
Looking at posts in the rclone forum, this is a common experience.
The docs for copy have referred to "unchanged" files.
This is ambiguous because it intuitively introduces a sense
of chronology, but chronology is irrelevant.
Rclone only "cares" about difference, not change.
After this patch the version command will be
- fully supported on openbsd/amd64
- stay stub on openbsd/i386 until we deprecate go 1.17
Remaining os/arch combinations stay as is.
In this commit the config system was re-arranged
94dbfa4ea fs: change Config callback into state based callback #3455
This passed the password as a temporary config parameter but forgot to
reveal it in the API call.
Before this fix, on Windows, the --bwlimit would max out at 2.5Gbps
even when set to 10 Gbps.
This turned out to be because of the maximum token bucket size.
This fix scales up the token bucket size linearly above a bwlimit of
2Gbps.
Fixes#5507
Before this change, if there was an existing file being uploaded when
a file was renamed on top of it, then both would be uploaded. This
causes a duplicate in Google Drive as both files get uploaded at the
same time. This was triggered reliably by LibreOffice saving doc
files.
This fix removes any duplicates in the upload queue on rename.
Before this fix, saving a :backend config gave the error
Can't save config "token" = "XXX" for on the fly backend ":backend"
Even when using the in-memory config `--config ""`
This fixes the problem by
- always using the in memory config if it is configured
- moving the check for a :backend config save to the file config backend
It also removes the contents of the config items being saved from the
log which saves confidential tokens being logged.
Fixes#5451
At some point some google docs files started having sizes returned in
their listing information.
This then caused rclone to treat the docs as files which caused
downloads to fail.
The API docs now state that google docs may have sizes (whereas I'm
pretty sure it didn't earlier).
This fix removes the check for size, so google docs are identified
solely by not having an MD5 checksum.
When rclone received a SIGINT (Ctrl+C) or SIGTERM signal while an atexit
function is registered it always terminated with status code 0. Unix
convention is to exit with a non-zero status code. Often it's
`128 + int(signum)`, but at least not zero.
With this change fatal signals handled by the `atexit` package cause
a non-zero exit code. On Unix systems it's `128 + int(signum)` while
on other systems, such as Windows, it's always 2 ("error not otherwise
categorised").
Resolves#5437.
Signed-off-by: Michael Hanselmann <public@hansmi.ch>
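The Unix-side convention, sketched (a real implementation would run the atexit handlers first; Windows would just exit with 2):
```
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
	sig := <-ch
	// run atexit handlers here, then:
	if s, ok := sig.(syscall.Signal); ok {
		fmt.Fprintln(os.Stderr, "exiting on", sig)
		os.Exit(128 + int(s))
	}
	os.Exit(2) // "error not otherwise categorised"
}
```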
Signal handling by the `atexit` package needs access to
`exitCodeUncategorizedError`. With this change all exit status values
are moved to a dedicated package so that they can be reused.
Signed-off-by: Michael Hanselmann <public@hansmi.ch>
Currently rclone check supports matching two file trees by sizes and hashes.
This change adds support for SUM files produced by GNU utilities like sha1sum.
Fixes#1005
Note: checksum by default checks, hashsum by default prints sums.
New flag is named "--checkfile" but carries hash name.
Summary of introduced command forms:
```
rclone check sums.sha1 remote:path --checkfile sha1
rclone checksum sha1 sums.sha1 remote:path
rclone hashsum sha1 remote:path --checkfile sums.sha1
rclone sha1sum remote:path --checkfile sums.sha1
rclone md5sum remote:path --checkfile sums.md5
```
This change fixes the bug described below:
if a file is removed while the local backend List() runs,
the call will flag an accounting error.
The bug manifests itself if local backend is the Sync target
due to intrinsic concurrency.
The odds to hit this bug depend on --checkers and --transfers.
Chunker over local backend is affected even more because
updating a composite object with a smaller size content
translates into removing chunks on the underlying file system
and involves a number of List() calls.
There was no easy way to automatically test the end-to-end functionality
of commands, flags, environment variables etc.
The need for end-to-end testing was highlighted by the issues fixed
in #5341. There was no automated test to continually verify current
behaviour, nor a framework to quickly test the correctness of the fixes.
This change adds an end-to-end testing framework in the cmdtest folder.
It has some simple examples in func TestCmdTest in cmdtest_test.go. The
tests should be readable by anybody familiar with rclone and look like
this:
// Test the rclone version command with debug logging (-vv)
out, err = rclone("version", "-vv")
if assert.NoError(t, err) {
    assert.Contains(t, out, "rclone v")
    assert.Contains(t, out, "os/version:")
    assert.Contains(t, out, " DEBUG : ")
}
The end-to-end tests are executed just like the Go unit tests, that is:
go test ./cmdtest -v
The change also contains a thorough test of environment variables in
environment_test.go.
Thanks to @ncw for encouragement and introduction to the TestMain trick.
Some environment variables didn’t behave like their corresponding
command line flags. The affected flags were --stats, --log-level,
--separator, --multi-thread-streams, --rc-addr, --rc-user and --rc-pass.
Example:
RCLONE_STATS='10s'
rclone check remote: remote: --progress
# Expected: rclone check remote: remote: --progress --stats=10s
# Actual: rclone check remote: remote: --progress
Remote specific options set by environment variables were overruled by
less specific backend options set by environment variables. Example:
RCLONE_DRIVE_USE_TRASH='false'
RCLONE_CONFIG_MYDRIVE_USE_TRASH='true'
rclone deletefile myDrive:my-test-file
# Expected: my-test-file is recoverable in the trash folder
# Actual: my-test-file is permanently deleted (not recoverable)
Backend specific options set by environment variables were overruled by
general backend options set by environment variables. Example:
RCLONE_SKIP_LINKS='true'
RCLONE_LOCAL_SKIP_LINKS='false'
rclone lsd local:
# Expected result: Warnings when symlinks are skipped
# Actual result: No warnings when symlinks are skipped
# That is RCLONE_SKIP_LINKS takes precedence
The above issues have been fixed.
The debug logging (-vv) has been enhanced to show when flags are set by
environment variables.
The documentation has been enhanced with details on the precedence of
configuration options.
See pull request #5341 for more information.
Improved/added steps to:
* Install Git with basic setup
* Use both SSH and HTTPS for the git origin
* Install Go and verify the GOPATH
* Update the forked master
* Find a popular editor for Go
Nothing is added or removed and no package is renamed by this change.
Just rearrange definitions between source files in the fs directory.
New source files:
- types.go Filesystem types and interfaces
- features.go Features and optional interfaces
- registry.go Filesystem registry and backend options
- newfs.go NewFs and its helpers
- configmap.go Getters and Setters for ConfigMap
- pacer.go Pacer with logging and calculator
The final fs.go contains what is left.
Also rename options.go to open_options.go
to dissociate from registry options.
- Unify all hash names as lowercase alphanumerics without punctuation.
- Legacy names continue to work but disappear from docs; they can be deprecated or dropped later.
- Make rclone hashsum print supported hash list in case of wrong spelling.
- Update documentation.
Fixes #5071 Fixes #4841
Before this change, rclone would always check the root to see if it
was an object.
This change doesn't check to see if the root is an object if the path
ends with a /
This avoids a transaction where rclone HEADs the path to see if it
exists.
See #4990
macOS stores files in NFD form and transferring them like this to some
systems causes the Korean language to display incorrectly.
This adds the flag --local-unicode-normalization to optionally
normalize the file names to NFC.
This also removes the (long deprecated) --local-no-unicode-normalization flag
See: https://forum.rclone.org/t/support-for-korean-jaso-conversion/19435
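The normalisation itself, via golang.org/x/text (the example is Korean jamo in NFD):
```
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfd := "\u1112\u1161\u11ab" // "한" as decomposed jamo (NFD)
	nfc := norm.NFC.String(nfd) // recomposed to a single syllable
	fmt.Println(nfd == nfc, len(nfd), len(nfc)) // false 9 3
}
```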
This also factors the config questions into a state based mechanism so
a backend can be configured using the same dialog as rclone config but
remotely.
This is a very large change which turns the post Config function in
backends into a state based call and response system so that
alternative user interfaces can be added.
The existing config logic has been converted, but it is quite
complicated and follow-up commits will likely be needed to fix it!
Follow up commits will add a command line and API based way of using
this configuration system.
It was discovered on some Android systems, the stat size of a symlink
is different to the size that readlink returns.
This was giving errors like this
transport connection broken: http: ContentLength=30 with Body length 28
There are enough exceptions to the size of readlink being different to
the size of stat that this patch now always does readlink to work out
the size of a symlink.
Since symlinks are relatively uncommon this shouldn't affect
performance too much and will mean that the size is always correct.
This deprecates the --local-zero-size-links flag which is now
effectively always enabled.
See: https://forum.rclone.org/t/problem-with-symlinks-and-links/23840/
This changes the interface for the C code to return a struct on the
stack that is defined in the code rather than one which is defined by
the cgo compiler. This is more future proof and inline with the
gomobile interface.
This also adds a gomobile interface RcloneMobileRPC which uses generic
go types conforming to the gobind restrictions.
It also fixes up initialisation errors.
- rename C exports to be namespaced with Rclone prefix
- fix error handling in RcloneRPC
- add more examples
- add more docs
- add README
- simplify ctest Makefile
This is so that we see the text of the bug/issue first rather than the
how to use GitHub issue which is very useful when posting bug reports
to the forum or social media.
Before this change but after:
aea8776a43 vfs: fix modtimes not updating when writing via cache #4763
When a file was opened read-only the modtime was read from the cached
file. However this modtime wasn't correct leading to an incorrect
result.
This change fixes the definition of `item.IsDirty` to be true only
when the data is dirty. This fixes the problem as a read only file
isn't considered dirty.
Includes adding support for the additional size input suffixes Mi and MiB, treated as equivalent to M.
Extends binary suffix output with letter i, e.g. Ki and Mi.
Centralizes creation of bit/byte unit strings.
v1.4.6 of uplink allows us to do a negative offset from the end of the
file. This removes a round trip when requesting the last N bytes of a
file.
Previous to v1.4.6 of uplink it wasn't possible to do a negative offset
on download. This meant that to fulfill the semantics of http range
headers it was necessary to first fetch the size of the object via a
stat call and compute absolute offset and length.
Restructuring of config code in v1.55 resulted in config
file being loaded early at process startup. If configuration
file is encrypted this means user will need to supply the password,
even when running commands that do not use the config.
This also led to an issue where mount with --daemon failed to
decrypt the config file when it had to prompt the user for a password.
Fixes #5236 Fixes #5228
The vfs-cache-max-size parameter is probably confusing to many users.
The cache cleaner checks cache size periodically at the --vfs-cache-poll-interval
(default 60 seconds) interval and removes cache items in the following order:
(1) cache items that are not in use and with age > vfs-cache-max-age
(2) if the cache space used at this time still is larger than
vfs-cache-max-size, the cleaner continues to remove cache items that are
not in use.
The cache cleaning process does not remove cache items that are currently in use.
If the total space consumed by in-use cache items exceeds vfs-cache-max-size, the
periodical cache cleaner thread does not do anything further and leaves the in-use
cache items alone with a total space larger than vfs-cache-max-size.
A cache reset feature was introduced in 1.53 which resets in-use (but not dirty,
i.e., not being updated) cache items when additional cache data incurs an ENOSPC
error. But this code was not activated in the periodical cache cleaning thread.
This patch adds the cache reset step in the cache cleaner thread during cache
poll to reset cache items until the total size of the remaining cache items is
below vfs-cache-max-size.
This updates the actions to run event-based workflow scripts
only under the rclone repository and not forks. It also adds the
ability to manually trigger a build from a branch in rclone repository
and forks.
Fixes#5272
We understand you are having a problem with rclone; we want to help you with that!
**STOP and READ**
**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
Please show the effort you've put into solving the problem and please be specific.
People are volunteering their time to help! Low effort posts are not likely to get good answers!
If you think you might have found a bug, try to replicate it with the latest beta (or stable).
#### The associated forum post URL from `https://forum.rclone.org`
#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
<!--- Please keep the note below for others who read your bug report. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
#### The associated forum post URL from `https://forum.rclone.org`
#### How do you think rclone should be changed to solve that?
<!--- Please keep the note below for others who read your feature request. -->
#### How to use GitHub
* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
* Subscribe to receive notifications on status change and new comments.
This is a short guide on how to contribute things to rclone.
## Reporting a bug
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
  - if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
git clone https://github.com/rclone/rclone.git
cd rclone
git remote rename origin upstream
# if you have SSH keys setup in your GitHub account:
git remote add origin git@github.com:YOURUSER/rclone.git
Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation:
go version
Great, you can now compile and execute your own version of rclone:
go build
./rclone version
(Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature
git checkout -b my-new-feature
And get hacking.
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the code you changed
cd folder/with/changed/files
go test -v
Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
Make sure you
- Add [unit tests](#testing) for a new feature.
- Add [documentation](#writing-documentation) for a new feature.
- [Commit your changes](#committing-your-changes) using the [commit message guidelines](#commit-messages).
When you are done with that push your changes to GitHub:
git push -u origin my-new-feature
Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
## Using Git and GitHub
### Committing your changes
Follow the guideline for [commit messages](#commit-messages) and then:
git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit
git status # To verify the changes to be committed
git commit # To do the commit
git log # To verify the commit. Use q to quit the log
You can modify the message or changes in the latest commit using:
git commit --amend
If you amend commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
Your previously pushed commits are replaced by:
git push --force origin my-new-feature
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
git checkout master
git fetch upstream
git merge --ff-only
git push origin --follow-tags # optional update of your fork in GitHub
git checkout my-new-feature
git rebase master
If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits
To combine your commits into one commit:
git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected
If everything is fine, then make the new combined commit:
git commit # To commit the undone commits as one
otherwise, you may roll back using:
git reflog # To check that HEAD@{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
## Testing
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests that are run in the CI, which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding standards. These tests also check for easy-to-make mistakes (like forgetting to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
go test -v ./...
You can also use `make`, if supported by your platform
make quicktest
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
### Backend testing
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
storage systems by mocking all their interfaces, rclone unit tests can
run against any of the remotes. If you want to use the integration test framework to run these tests, then from the project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
make check
make test
The commands may require some extra go packages which you can install with
make build_dep
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation
Rclone code is organised into a small number of top level directories
with modules beneath.
- backend - the rclone backends for interfacing to cloud providers -
    - all - import this to load all the cloud providers
    - ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
    - all - import this to load all the commands
    - ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
    - content - adjust these docs only - everything else is autogenerated
        - command - these are auto-generated - edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
    - accounting - bandwidth limiting and statistics
    - asyncreader - an io.Reader which reads ahead
    - config - manage the config file and flags
    - driveletter - detect if a name is a drive letter
    - filter - implements include/exclude filtering
    - fserrors - rclone specific error handling
    - fshttp - http handling for rclone
    - fspath - path handling for rclone
    - hash - defines rclone's hash types and functions
    - list - list a remote
    - log - logging facilities
    - march - iterates directories in lock step
    - object - in memory Fs objects
    - operations - primitives for sync, e.g. Copy, Move
    - sync - sync directories
    - walk - walk a directory
- fstest - provides integration test framework
    - fstests - integration tests for the backends
    - mockdir - mocks an fs.Directory
    - mockobject - mocks an fs.Object
    - test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
    - atexit - register functions to run when rclone exits
    - dircache - directory ID to name caching
    - oauthutil - helpers for using oauth
    - pacer - retries with backoff and paces operations
    - readers - a selection of useful io.Readers
    - rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation
If you are adding a new feature then please update the documentation.
If you add a new general flag (not for a backend), then document it in `docs/content/docs.md` - the flags there are supposed to be in alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
- Start with the most important information about the option,
as a single sentence on a single line.
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
- Each example value has its own `Help:` field, but it is treated
a bit differently than the main option help text. Example helps will
be shown as an unordered list, therefore a single line break is
enough to create a new list item. Also, for enumeration texts like
names of countries, it looks better without an ending period/full stop character.
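To make this concrete, here is a minimal sketch of a backend option definition following these guidelines. The option name and help texts are hypothetical, not a real rclone option:
```
{
    Name: "example_speed",
    Help: `Limit the transfer speed of the example backend.

This first line becomes the command-line flag help. Further detail,
like this paragraph, is only shown in "rclone config" and in the
generated docs.`,
    Default:  "off",
    Advanced: true,
},
```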
The only documentation you need to edit are the `docs/content/*.md`
files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
from those during the release process. See the `make doc` and `make
website` targets in the Makefile if you are interested in how. You
don't need to run these when adding a feature.
Documentation for rclone sub commands is with their code, e.g.
`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
line, without a period/full stop character at the end, as it will be
combined unmodified with other information (such as any default value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
## Making a release
There are separate instructions for making a release in the RELEASE.md
file.
## Commit messages
Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
prefix it with the directory or backend affected.

And here is an example of a longer one:
```
mount: fix hang on errored upload
In certain circumstances, if an upload failed then the mount could hang
indefinitely. This was fixed by closing the read pipe after the Put
completed. This will cause the write side to return a pipe closed
error fixing the hang.

Fixes #1498
```

## Adding a dependency

To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
go get github.com/ncw/new_dependency
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.
## Updating a dependency
If you need to update a dependency then run
go get golang.org/x/crypto
Check in a single commit as above.
## Updating all the dependencies
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
## Updating a backend
If you update a backend then please run the unit tests and the
integration tests for that backend.
The next section goes into more detail about the tests.
## Writing a new backend
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
### Research
- Look at the interfaces defined in `fs/types.go`
- Study one or more of the existing remotes
### Getting going
- Create `backend/remote/remote.go` (copy this from a similar remote)
    - box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
    - b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
    - `rclone purge -v TestRemote:rclone-info`
    - `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
    - `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
    - open `remote.csv` in a spreadsheet and examine
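For orientation, a new backend normally announces itself to rclone by registering an `fs.RegInfo` in an `init` function. The following is a minimal sketch only, with a hypothetical package name and option; a real backend copied from box or b2 will contain much more:
```
// backend/remote/remote.go - minimal sketch of a new backend skeleton.
// The package name "remote" and the "endpoint" option are placeholders.
package remote

import (
    "context"
    "errors"

    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/config/configmap"
)

func init() {
    fs.Register(&fs.RegInfo{
        Name:        "remote",
        Description: "Example Remote",
        NewFs:       NewFs,
        Options: []fs.Option{{
            Name: "endpoint",
            Help: "Endpoint for the service.\n\nLeave blank normally.",
        }},
    })
}

// NewFs constructs an Fs from the path, remote:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
    // A real backend parses its options here and returns its Fs implementation.
    return nil, errors.New("not implemented yet")
}
```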
### Guidelines for a speedy merge
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **Don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go` (see the sketch after this list)
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
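Regarding the separate `api/types.go` file, a minimal sketch might look like the following; the types and field names are hypothetical examples of a provider's JSON wire format:
```
// backend/remote/api/types.go - keep API wire types out of remote.go.
package api

// Item describes a file or folder as returned by the provider's API.
// The fields shown here are hypothetical.
type Item struct {
    ID       string `json:"id"`
    Name     string `json:"name"`
    Size     int64  `json:"size"`
    IsFolder bool   `json:"is_folder"`
}
```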
### Unit tests
- Create a config entry called `TestRemote` for the unit tests to use
- Create a `backend/remote/remote_test.go` - copy and adjust your example remote
- Make sure all tests pass with `go test -v`
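The test file itself is usually just a thin shim around the fstests framework. A minimal sketch, assuming the hypothetical `remote` backend from above:
```
// backend/remote/remote_test.go - run the standard suite against TestRemote:.
package remote_test

import (
    "testing"

    "github.com/rclone/rclone/backend/remote"
    "github.com/rclone/rclone/fstest/fstests"
)

// TestIntegration runs the standard backend integration tests
// against the "TestRemote" config entry created above.
func TestIntegration(t *testing.T) {
    fstests.Run(t, &fstests.Opt{
        RemoteName: "TestRemote:",
        NilObject:  (*remote.Object)(nil),
    })
}
```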
### Integration tests
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
    - `go install ./...`
    - `test_all -backends remote`
Or if you want to run the integration tests manually:
- Make sure integration tests pass with
    - `cd fs/operations`
    - `go test -v -remote TestRemote:`
    - `cd fs/sync`
    - `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
    - `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
### Backend documentation
Add your backend to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
    - make sure this has the `autogenerated options` comments in (see your reference backend docs)
    - update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
- `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
## Adding a new s3 provider
It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files

- `backend/s3/s3.go`
    - Add the provider to `providerOption` at the top of the file
    - Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
    - Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
    - Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
    - Add the provider at the top of the page.
    - Add a section about the provider linked from there.
    - Add a transcript of a trial `rclone config` session
        - Edit the transcript to remove things which might change in subsequent versions
    - **Do not** alter or add to the autogenerated parts of `s3.md`
    - **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in github
    - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
    - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
`Other` last.
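For illustration, the entry you add to `providerOption` is just another `fs.OptionExample`; a minimal sketch with a hypothetical provider name:
```
// Added to providerOption's Examples in backend/s3/s3.go, keeping
// alphabetical order. "NewS3Provider" is a hypothetical name.
{
    Value: "NewS3Provider",
    Help:  "NewS3Provider Object Storage",
},
```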
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
Once you've written the code, test `rclone config` works to your
satisfaction, and check the integration tests work `go test -v -remote
NewS3Provider:`. You may need to adjust the quirks to get them to
pass. Some providers just can't pass the tests with control characters
in the names so if these fail and the provider doesn't support
`urlEncodeListings` in the quirks then ignore them. Note that the
`SetTier` test may also fail on non-AWS providers.
For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone/rclone/commit/eb3082a1ebdb76d5625f14cedec3f5154a5e7b10).
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
### Usage
- Naming
- Plugins names must have the pattern `librcloneplugin_KIND_NAME.so`.
- Plugins must be compiled against the exact version of rclone to work.
(The rclone used during building the plugin must be the same as the source of rclone)
### Building
To turn your existing additions into a Go plugin, move them to an external repository
This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
## Triaging Tickets ##
When a ticket comes in it should be triaged. This means it should be classified into a bug or an enhancement or a request for support.
Rclone uses the labels like this:
* `bug` - a definitely verified bug
* `can't reproduce` - a problem which we can't reproduce
* `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
* `duplicate` - normally close these and ask the user to subscribe to the original
* `enhancement: new remote` - a new rclone backend
* `enhancement` - a new feature
* `FUSE` - to do with `rclone mount` command
* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
* `help wanted` - mark these if you find a self-contained issue - these get shown to new visitors to the project
* `IMPORTANT` - note to maintainers not to forget to fix this for the release
* `maintenance` - internal enhancement, code re-organisation, etc.
* `Needs Go 1.XX` - waiting for that version of Go to be released
The milestones have these meanings:
* v1.XX - stuff we would like to fit into this release
* v1.XX+1 - stuff we are leaving until the next release
* Soon - stuff we think is a good idea - waiting to be scheduled for a release
* Help wanted - blue sky stuff that might get moved up, or someone could help with
* Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
Close tickets as soon as you can - make sure they are tagged with a release.
Try to process pull requests promptly!
Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer.
High impact regressions should be fixed before the next release.
Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
Towards the end of the release cycle try not to merge anything too big so let things settle down.
Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming, often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
## Mailing list ##
There is now an invite-only mailing list for rclone developers `rclone-dev` on Google Groups.
// Package b2 provides an interface to the Backblaze B2 object storage system.
package b2
// FIXME should we remove sha1 checks from here as rclone now supports
import (
    "bytes"
    "context"
    "crypto/sha1"
    "encoding/json"
    "errors"
    "fmt"
    gohash "hash"
    "io"
    "sync"
    "time"

    "github.com/rclone/rclone/backend/b2/api"
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/accounting"
    "github.com/rclone/rclone/fs/walk"
    "github.com/rclone/rclone/lib/bucket"
    "github.com/rclone/rclone/lib/encoder"
    "github.com/rclone/rclone/lib/multipart"
    "github.com/rclone/rclone/lib/pacer"
    "github.com/rclone/rclone/lib/pool"
    "github.com/rclone/rclone/lib/rest"
)
const (
    decayConstant       = 1 // bigger for slower decay, exponential
    maxParts            = 10000
    maxVersions         = 100 // maximum number of versions we search in --b2-versions mode
    minChunkSize        = 5 * fs.Mebi
    defaultChunkSize    = 96 * fs.Mebi
    defaultUploadCutoff = 200 * fs.Mebi
    largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
)
// Globals
var (
    errNotWithVersions  = errors.New("can't modify or delete files in --b2-versions mode")
    errNotWithVersionAt = errors.New("can't modify or delete files in --b2-version-at mode")
)
// Register with Fs
func init() {
    fs.Register(&fs.RegInfo{
Name:"b2",
Description:"Backblaze B2",
NewFs:NewFs,
CommandHelp:commandHelp,
Options:[]fs.Option{{
Name:"account",
Help:"Account ID or Application Key ID",
Required:true,
Name:"account",
Help:"Account ID or Application Key ID.",
Required:true,
Sensitive:true,
},{
Name:"key",
Help:"Application Key",
Required:true,
Name:"key",
Help:"Application Key.",
Required:true,
Sensitive:true,
},{
Name:"endpoint",
Help:"Endpoint for the service.\nLeave blank normally.",
Help:"Endpoint for the service.\n\nLeave blank normally.",
Advanced:true,
},{
Name:"test_mode",
            Advanced: true,
        }, {
            Name:     "versions",
            Help:     "Include old versions in directory listings.\n\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.",
            Default:  false,
            Advanced: true,
        }, {
            Name:     "version_at",
            Help:     "Show file versions as they were at the specified time.\n\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.",
            Default:  fs.Time{},
            Advanced: true,
        }, {
            Name: "hard_delete",
            Help: "Permanently delete files on remote removal, otherwise hide files.",
Files above this size will be uploaded in chunks of "--b2-chunk-size".

This value should be set no larger than 4.657GiB (== 5GB).`,
            Default:  defaultUploadCutoff,
            Advanced: true,
        }, {
            Name: "copy_cutoff",
            Help: `Cutoff for switching to multipart copy.

Any files larger than this that need to be server-side copied will be
copied in chunks of this size.

The minimum is 0 and the maximum is 4.6 GiB.`,
            Default:  largeFileCopyCutoff,
            Advanced: true,
        }, {
            Name: "chunk_size",
            Help: `Upload chunk size.

When uploading large files, chunk the file into this size.

Must fit in memory. These chunks are buffered in memory and there
might be a maximum of "--transfers" chunks in progress at once.

5,000,000 Bytes is the minimum size.`,
            Default:  defaultChunkSize,
            Advanced: true,
        }, {
            Name: "upload_concurrency",
            Help: `Concurrency for multipart uploads.

This is the number of chunks of the same file that are uploaded
concurrently.

Note that chunks are stored in memory and there may be up to
"--transfers" * "--b2-upload-concurrency" chunks stored at once
in memory.`,
            Default:  4,
            Advanced: true,
        }, {
            Name: "disable_checksum",
            Help: `Disable checksums for large (> upload cutoff) files.

Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great
Rclone works with private buckets by sending an "Authorization" header.
If the custom endpoint rewrites the requests for authentication,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.

The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".

Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")`,
            Advanced: true,
        }, {
            Name: "download_auth_duration",
            Help: `Time before the authorization token will expire in s or suffix ms|s|m|h|d.

The minimum value is 1 second. The maximum value is one week.`,
            Advanced: true,
        }, {
            Name:     "memory_pool_flush_time",
            Default:  fs.Duration(time.Minute),
            Advanced: true,
            Hide:     fs.OptionHideBoth,
            Help:     `How often internal memory buffer pools will be flushed. (no longer used)`,
        }, {
            Name:     "memory_pool_use_mmap",
            Default:  false,
            Advanced: true,
            Hide:     fs.OptionHideBoth,
            Help:     `Whether to use mmap buffers in internal memory pool. (no longer used)`,
        }, {
            Name: "lifecycle",
            Help: `Set the number of days deleted files should be kept when creating a bucket.

On bucket creation, this parameter is used to create a lifecycle rule
for the entire bucket.

If lifecycle is 0 (the default) it does not create a lifecycle rule so
the default B2 behaviour applies. This is to create versions of files
on delete and overwrite and to keep them indefinitely.

If lifecycle is >0 then it creates a single rule setting the number of
days before a file that is deleted or overwritten is deleted
permanently. This is known as daysFromHidingToDeleting in the b2 docs.

The minimum value for this parameter is 1 day.

You can also enable hard_delete in the config, which will mean
deletions won't cause versions but overwrites will still cause
versions to be made.

See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
`,
            Default:  0,
            Advanced: true,
fs.Debugf(o,"Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.",f.opt.ChunkSize,maxParts*f.opt.ChunkSize)
        return nil, fmt.Errorf("box: failed to generate random string for jti: %w", err)
    }
    claims = &boxCustomClaims{
        //lint:ignore SA1019 since we need to use jwt.StandardClaims even if deprecated in jwt-go v4 until a more permanent solution is ready in time before jwt-go v5 where it is removed entirely
        //nolint:staticcheck // Don't include staticcheck when running golangci-lint to avoid SA1019
fs.Debugf(o,"Multipart upload session started for %d parts of size %v",session.TotalParts,fs.SizeSuffix(chunkSize))
        // Read the chunk
        _, err = io.ReadFull(in, buf)
        if err != nil {
            err = fmt.Errorf("multipart upload failed to read source: %w", err)
            break outer
        }
fs.Debugf(o,"Uploading part %d/%d offset %v/%v part size %v",part+1,session.TotalParts,fs.SizeSuffix(position),fs.SizeSuffix(size),fs.SizeSuffix(chunkSize))
// Package cache implements a virtual provider to cache existing remotes.
package cache
import (
    "context"
    "errors"
    "fmt"
    "io"
    "math"
    "syscall"
    "time"

    "github.com/rclone/rclone/backend/crypt"
    "github.com/rclone/rclone/fs"
    "github.com/rclone/rclone/fs/cache"
        CommandHelp: commandHelp,
        Options: []fs.Option{{
            Name:     "remote",
            Help:     "Remote to cache.\n\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
            Required: true,
        }, {
            Name: "plex_url",
            Help: "The URL of the Plex server.",
        }, {
            Name:      "plex_username",
            Help:      "The username of the Plex user.",
            Sensitive: true,
        }, {
            Name:       "plex_password",
            Help:       "The password of the Plex user.",
            IsPassword: true,
        }, {
            Name:      "plex_token",
            Help:      "The plex token for authentication - auto set normally.",
            Hide:      fs.OptionHideBoth,
            Advanced:  true,
            Sensitive: true,
        }, {
            Name:     "plex_insecure",
            Help:     "Skip all certificate verification when connecting to the Plex server.",
            Advanced: true,
        }, {
            Name: "chunk_size",
changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.`,
            Default: DefCacheChunkSize,
            Examples: []fs.OptionExample{{
                Value: "1M",
                Help:  "1 MiB",
            }, {
                Value: "5M",
                Help:  "5 MiB",
            }, {
                Value: "10M",
                Help:  "10 MiB",
            }},
        }, {
            Name: "info_age",
            Help: `Any metadata supported by the underlying remote is read and written.`,
        },
        Options: []fs.Option{{
            Name:     "remote",
            Help:     "Remote to encrypt/decrypt.\n\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
            Required: true,
        }, {
            Name: "filename_encryption",
            Examples: []fs.OptionExample{
                {
                    Value: "standard",
                    Help:  "Encrypt the filenames.\nSee the docs for the details.",
                }, {
                    Value: "obfuscate",
                    Help:  "Very simple filename obfuscation.",
                }, {
                    Value: "off",
                    Help:  "Don't encrypt the file names.\nAdds a \".bin\", or \"suffix\" extension only.",
                },
            },
        }, {
            Required: true,
        }, {
            Name:       "password2",
            Help:       "Password or pass phrase for salt.\n\nOptional but recommended.\nShould be different to the previous password.",
            IsPassword: true,
        }, {
            Name:    "server_side_across_configs",
            Default: false,
            Help: `Deprecated: use --server-side-across-configs instead.

Allow server-side operations (e.g. copy) to work across different crypt configs.

Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.
Help:"Encrypt file data.",
},
},
},{
Name:"pass_bad_blocks",
Help:`If set this will pass bad blocks through as all 0.
This should not be set in normal operation, it should only be set if
trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default:false,
Advanced:true,
},{
Name:"filename_encoding",
Help:`How to encode the encrypted filename to text string.
This option could help with shortening the encrypted filename. The
suitable option would depend on the way your remote count the filename
length and if it's case sensitive.`,
Default:"base32",
Examples:[]fs.OptionExample{
{
Value:"base32",
Help:"Encode using base32. Suitable for all remote.",
},
{
Value:"base64",
Help:"Encode using base64. Suitable for case sensitive remote.",
},
{
Value:"base32768",
Help:"Encode using base32768. Suitable if your remote counts UTF-16 or\nUnicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox)",
},
},
Advanced:true,
},{
Name:"suffix",
Help:`If this is set it will override the default suffix of ".bin".
Setting suffix to "none" will result in an empty suffix. This may be useful
Help:"Time of last modification with mS accuracy.",
Type:"RFC 3339",
Example:"2006-01-02T15:04:05.999Z07:00",
},
"btime":{
Help:"Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates.",
Type:"RFC 3339",
Example:"2006-01-02T15:04:05.999Z07:00",
},
"copy-requires-writer-permission":{
Help:"Whether the options to copy, print, or download this file, should be disabled for readers and commenters.",
Type:"boolean",
Example:"true",
},
"writers-can-share":{
Help:"Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives.",
Type:"boolean",
Example:"false",
},
"viewed-by-me":{
Help:"Whether the file has been viewed by this user.",
Type:"boolean",
Example:"true",
ReadOnly:true,
},
"owner":{
Help:"The owner of the file. Usually an email address. Enable with --drive-metadata-owner.",
Type:"string",
Example:"user@example.com",
},
"permissions":{
Help:"Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions.",
Type:"JSON",
Example:"{}",
},
"folder-color-rgb":{
Help:"The color for a folder or a shortcut to a folder as an RGB hex string.",
Type:"string",
Example:"881133",
},
"description":{
Help:"A short description of the file.",
Type:"string",
Example:"Contract for signing",
},
"starred":{
Help:"Whether the user has starred the file.",
Type:"boolean",
Example:"false",
},
"labels":{
Help:"Labels attached to this file in a JSON dump of Googled drive format. Enable with --drive-metadata-labels.",
Type:"JSON",
Example:"[]",
},
}
// Extra fields we need to fetch to implement the system metadata above
Help:"Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
Name:"project_number",
Help:"Project number.\n\nOptional - needed only for list/create/delete buckets - see your developer console.",
Sensitive:true,
},{
Name:"user_project",
Help:"User project.\n\nOptional - needed only for requester pays.",
Sensitive:true,
},{
Name:"service_account_file",
Help:"Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login."+env.ShellExpandHelp,
Help:"Service Account Credentials JSON file path.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login."+env.ShellExpandHelp,
},{
Name:"service_account_credentials",
Help:"Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide:fs.OptionHideBoth,
Name:"service_account_credentials",
Help:"Service Account Credentials JSON blob.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide:fs.OptionHideBoth,
Sensitive:true,
},{
Name:"anonymous",
Help:"Access public buckets and objects without credentials\nSet to 'true' if you just want to download files and don't configure credentials.",
Help:"Access public buckets and objects without credentials.\n\nSet to 'true' if you just want to download files and don't configure credentials.",
Default:false,
        }, {
            Name: "object_acl",
            Help: "Access Control List for new objects.",
            Examples: []fs.OptionExample{{
                Value: "authenticatedRead",
                Help:  "Object owner gets OWNER access.\nAll Authenticated Users get READER access.",
            }, {
                Value: "bucketOwnerFullControl",
                Help:  "Object owner gets OWNER access.\nProject team owners get OWNER access.",
            }, {
                Value: "bucketOwnerRead",
                Help:  "Object owner gets OWNER access.\nProject team owners get READER access.",
            }, {
                Value: "private",
                Help:  "Object owner gets OWNER access.\nDefault if left blank.",
            }, {
                Value: "projectPrivate",
                Help:  "Object owner gets OWNER access.\nProject team members get access according to their roles.",
            }, {
                Value: "publicRead",
                Help:  "Object owner gets OWNER access.\nAll Users get READER access.",
            }},
        }, {
            Name: "bucket_acl",
            Help: "Access Control List for new buckets.",
            Examples: []fs.OptionExample{{
                Value: "authenticatedRead",
                Help:  "Project team owners get OWNER access.\nAll Authenticated Users get READER access.",
            }, {
                Value: "private",
                Help:  "Project team owners get OWNER access.\nDefault if left blank.",
            }, {
                Value: "projectPrivate",
                Help:  "Project team members get access according to their roles.",
            }, {
                Value: "publicRead",
                Help:  "Project team owners get OWNER access.\nAll Users get READER access.",
            }, {
                Value: "publicReadWrite",
                Help:  "Project team owners get OWNER access.\nAll Users get WRITER access.",
            }},
Help:"Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars).\n\nOnly applies if service_account_file and service_account_credentials is blank.",
Default:false,
Examples:[]fs.OptionExample{{
Value:"false",
Help:"Enter credentials in the next step.",
},{
Value:"true",
Help:"Get GCP IAM credentials from the environment (env vars or IAM).",