mirror of https://github.com/rclone/rclone.git synced 2026-01-02 08:33:50 +00:00

Compare commits


124 Commits

Author SHA1 Message Date
Nick Craig-Wood
aefe18fc41 mega: fix key decoding - FIXME VENDOR PATCH DO NOT MERGE
See: https://forum.rclone.org/t/problem-to-login-with-mega/12276
2020-01-11 11:57:42 +00:00
Nick Craig-Wood
ae340cf7d9 log: factor flags into logflags package - fixes #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
11f501bd44 operations: move interface assertion to tests to remove pflag dependency #3792 2020-01-09 13:25:37 +00:00
Nick Craig-Wood
a4bc4daf30 mounttest: fix unreliable tests on Windows CI
The failure is this, which is not reproducible locally, only on the CI
servers.

    --- FAIL: TestMount/CacheMode=minimal/TestWriteFileOverwrite (1.01s)
        fs.go:351:
            Error Trace:    fs.go:351
                            write.go:65
            Error:          Received unexpected error:
                            open E:testwrite: The request could not be performed because of an I/O device error.
            Test:           TestMount/CacheMode=minimal/TestWriteFileOverwrite

The corresponding ERROR from the log is this:

    ERROR : IO error: truncate C:\Users\runneradmin\AppData\Local\rclone\vfs\local\C\Users\RUNNER~1\AppData\Local\Temp\rclone298719627\testwrite: Access is denied.

Instead of using ioutil.WriteFile this fix uses an equivalent based on
rclone's lib/file which doesn't set the exclusive flag on
Windows. This allows files to be deleted that are open.  It also
deletes existing files if an error is received and retries.
2020-01-09 11:11:49 +00:00
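
As an illustration of the approach described above, here is a minimal sketch of such a helper, assuming rclone's lib/file package exposes an os-style OpenFile (the retry-after-delete behaviour and helper name are illustrative, not the exact test code):

```go
package main

import (
	"fmt"
	"os"

	"github.com/rclone/rclone/lib/file"
)

// writeFile is an ioutil.WriteFile replacement built on lib/file, so the
// file is not opened with the exclusive sharing flag on Windows. If the
// first open fails it removes any existing file and retries once.
func writeFile(name string, data []byte, perm os.FileMode) error {
	f, err := file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
	if err != nil {
		_ = os.Remove(name)
		f, err = file.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
		if err != nil {
			return err
		}
	}
	_, err = f.Write(data)
	if closeErr := f.Close(); err == nil {
		err = closeErr
	}
	return err
}

func main() {
	err := writeFile("testwrite", []byte("data"), 0600)
	fmt.Println(err)
	_ = os.Remove("testwrite")
}
```
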
Nick Craig-Wood
51dca8c8d4 bin: update windows test paths for new setup 2020-01-09 10:55:18 +00:00
Nick Craig-Wood
6b3021209a Add Ole Schütt to contributors 2020-01-09 10:36:44 +00:00
Ole Schütt
f263828edc operations: write debug message when hashes could not be checked 2020-01-09 10:35:31 +00:00
Maciej Zimnoch
b7019a91c2 fs/operations: Clear accounting before low level retry
Statistics of transfers which were interrupted were not cleared before
the retry iteration, so these transfers completed at over 100 percent.

This change clears transfer accounting before the next retry iteration
is done in order to keep the numbers on track.

Fixes #3861
2020-01-09 10:32:49 +00:00
Alex Chen
27c3481ea4 build: fix CI for forks and related docs (#3847) 2020-01-09 01:27:44 +08:00
Nick Craig-Wood
706da80d88 mount: don't build on go1.10 as bazil/fuse no longer supports it 2020-01-08 08:44:02 +00:00
Nick Craig-Wood
b6e86b2c7f s3: fix missing x-amz-meta-md5chksum headers for multipart uploads
This reverts "s3: fix DisableChecksum condition" which introduced the
problem.

This reverts commit c05bb63f96.

The code was correct as it stands - the comment was incorrect and this
commit updates it.

See: https://forum.rclone.org/t/s3-upload-md5-check-sum/13706
2020-01-07 19:39:39 +00:00
Nick Craig-Wood
4453fa4ba6 drive: fix --fast-list when using appDataFolder
In listings, if the ID `appDataFolder` is used to list a directory, the
parents of the items returned have the actual ID instead of the alias
`appDataFolder`.  This confused the ListR routine into ignoring all
these items.

This change makes the listing routine accept all parent IDs returned
if there was only one ID in the query.  This fixes the `appDataFolder`
problem. This means we are relying on Google Drive to only return the
items we asked for which is probably OK.

Fixes #3851
2020-01-05 19:57:13 +00:00
Nick Craig-Wood
540fd3f173 local: fix update of hidden files on Windows - fixes #3839 2020-01-05 19:52:22 +00:00
Nick Craig-Wood
1af4bb0c84 Add Tennix to contributors 2020-01-05 19:50:00 +00:00
Tennix
15d19131bd s3: use aws web identity role provider 2020-01-05 19:49:31 +00:00
Nick Craig-Wood
9d993e584b s3: force path style bucket access to off for AWS deprecation
AWS are deprecating path style bucket access so rclone should stop
using it by default for this provider.

This change shouldn't break any workflows as all AWS endpoints support
virtual hosted style lookups of buckets.  It may even improve
performance.

See: https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2020-01-05 17:53:45 +00:00
Nick Craig-Wood
21b17b14a9 vendor: update bazil.org/fuse to fix FreeBSD 12.1 - fixes #3697 2020-01-05 16:35:30 +00:00
Nick Craig-Wood
1b89b38a4c vfs: skip rename tests on remotes which can't rename 2020-01-05 12:34:47 +00:00
Nick Craig-Wood
7242c7ce95 s3: fix multipart upload uploading 0 length files
This regression was introduced by the recent re-write of the s3
multipart upload code.
2020-01-05 12:32:55 +00:00
Nick Craig-Wood
ad2bb86d8c fstests: add test for 0 sized streaming upload 2020-01-05 12:32:55 +00:00
Michał Matczuk
eb10ac346f fs/accounting: Added StatsInfo locking in statsGroups sum function (#3844)
Without the fix we can have a race, example:

```
Write at 0x00c000432039 by goroutine 187:
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error()
      fs/accounting/stats.go:495 +0x3f1
  github.com/rclone/rclone/fs/accounting.(*StatsInfo).Error-fm()
      fs/accounting/stats.go:477 +0x55
  github.com/rclone/rclone/fs/walk.listRwalk.func1()
      fs/walk/walk.go:162 +0xd2
  github.com/rclone/rclone/fs/walk.walk.func2()
      fs/walk/walk.go:402 +0x30f

Previous read at 0x00c000432039 by goroutine 184:
  github.com/rclone/rclone/fs/accounting.(*statsGroups).sum()
      fs/accounting/stats_groups.go:351 +0xcae
  github.com/rclone/rclone/fs/accounting.rcTransferredStats()
      fs/accounting/stats_groups.go:132 +0x1f4
```

Fixes #3844
2020-01-04 16:45:24 +00:00
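
The shape of the fix is to hold the stats lock while summing so a concurrent writer such as Error() cannot race with the reader. A minimal sketch with illustrative stand-in types (not the actual accounting structs):

```go
package main

import (
	"fmt"
	"sync"
)

// statsSnapshot is an illustrative stand-in for accounting.StatsInfo;
// it is not the real type, just a minimal model of the race.
type statsSnapshot struct {
	mu     sync.RWMutex
	errors int64
	bytes  int64
}

// Error records an error, as a writer would during a walk.
func (s *statsSnapshot) Error() {
	s.mu.Lock()
	s.errors++
	s.mu.Unlock()
}

// addTo folds s into total while holding s's lock for the read, so a
// concurrent Error() cannot race with the summing reader.
func (s *statsSnapshot) addTo(total *statsSnapshot) {
	s.mu.RLock()
	errors, bytes := s.errors, s.bytes
	s.mu.RUnlock()

	total.errors += errors
	total.bytes += bytes
}

func main() {
	var total statsSnapshot
	groups := []*statsSnapshot{{}, {}}
	groups[0].Error()
	for _, g := range groups {
		g.addTo(&total)
	}
	fmt.Println(total.errors) // 1
}
```
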
Nick Craig-Wood
7e6fac8b1e s3: re-implement multipart upload to fix memory issues
There have been quite a few reports of problems with the multipart
uploader using too much memory and not retrying possible errors.

Before this change the multipart uploader used the s3manager
abstraction in the AWS SDK.  There are numerous bug reports of this
using up too much memory.

This change re-implements a much simplified version of the s3manager
code specialized for rclone's purposes.

This should use much less memory and retry chunks properly.

See: https://forum.rclone.org/t/memory-usage-s3-alike-to-glacier-without-big-directories/13563
See: https://forum.rclone.org/t/copy-from-local-to-s3-has-high-memory-usage/13405
See: https://forum.rclone.org/t/big-file-upload-to-s3-fails/13575
2020-01-03 22:19:28 +00:00
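
A rough sketch of what such a simplified uploader can look like: one reused buffer bounds memory and each chunk is retried individually. The uploadPartFunc callback and retry count are hypothetical placeholders, not rclone's actual API:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"strings"
)

// uploadPartFunc is a hypothetical stand-in for the backend call that
// uploads a single part and returns its ETag.
type uploadPartFunc func(ctx context.Context, partNum int, data []byte) (string, error)

// multipartUpload reads the source in chunkSize pieces and uploads each
// part with a fixed number of retries, reusing one buffer to bound memory.
func multipartUpload(ctx context.Context, in io.Reader, chunkSize int, upload uploadPartFunc) ([]string, error) {
	const maxRetries = 3
	buf := make([]byte, chunkSize) // single buffer reused for every part
	var etags []string
	for partNum := 1; ; partNum++ {
		n, err := io.ReadFull(in, buf)
		if err == io.EOF {
			// no more data; a real uploader must still send one empty
			// part here for 0 length files
			break
		}
		if err != nil && err != io.ErrUnexpectedEOF {
			return nil, err
		}
		var etag string
		for try := 1; ; try++ {
			etag, err = upload(ctx, partNum, buf[:n])
			if err == nil || try >= maxRetries {
				break
			}
		}
		if err != nil {
			return nil, fmt.Errorf("part %d failed: %w", partNum, err)
		}
		etags = append(etags, etag)
		if n < chunkSize {
			break // short read: that was the last part
		}
	}
	return etags, nil
}

func main() {
	upload := func(ctx context.Context, partNum int, data []byte) (string, error) {
		return fmt.Sprintf("etag-%d", partNum), nil
	}
	etags, err := multipartUpload(context.Background(), strings.NewReader("some test data"), 5, upload)
	fmt.Println(etags, err)
}
```
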
Nick Craig-Wood
2e0774f3cf Add Thomas Kriechbaumer to contributors 2020-01-03 22:18:23 +00:00
Aleksandar Jankovic
b9fb313f71 fs/accounting: add option to delete stats
Removed PruneAllTransfers because it had no use. startedTransfers are
set to nil in ResetCounters.
2020-01-03 17:44:05 +00:00
Aleksandar Jankovic
0e64df4b4c fs/accounting: consistency cleanup 2020-01-03 17:44:05 +00:00
buengese
69ac04fec9 docs: add GetSky to list of supported providers 2020-01-02 15:37:33 +01:00
buengese
8a2d1dbe24 jottacloud: add support for whitelabel versions 2020-01-02 15:37:33 +01:00
Thomas Kriechbaumer
584e705c0c s3: introduce list_chunk option for bucket listing
The S3 ListObject API returns paginated bucket listings, with
"MaxKeys" items for each GET call.

The default value is 1000 entries, but for buckets with millions of
objects it might make sense to request more elements per request, if
the backend supports it. This commit adds a "list_chunk" option for
the user to specify a lower or higher value.

This commit does not add safeguards around this value - if a user
decides to request too large a list, it might result in connection
timeouts (on the server or client).

In AWS S3, there is a fixed limit of 1000; some other services might
have one too.  In Ceph, this can be configured in RadosGW.
2020-01-02 12:15:01 +00:00
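
For illustration, a paginated listing loop in the aws-sdk-go v1 style where the page size comes from the new option; the bucket name and helper are placeholders and this is not the rclone backend code:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listAll pages through a bucket requesting listChunk keys per call.
func listAll(svc *s3.S3, bucket string, listChunk int64) error {
	var marker *string
	for {
		resp, err := svc.ListObjects(&s3.ListObjectsInput{
			Bucket:  aws.String(bucket),
			MaxKeys: aws.Int64(listChunk),
			Marker:  marker,
		})
		if err != nil {
			return err
		}
		for _, obj := range resp.Contents {
			fmt.Println(*obj.Key)
		}
		if resp.IsTruncated == nil || !*resp.IsTruncated {
			return nil
		}
		if resp.NextMarker != nil {
			marker = resp.NextMarker
		} else if len(resp.Contents) > 0 {
			// NextMarker is only set when a delimiter is used;
			// otherwise continue from the last key returned.
			marker = resp.Contents[len(resp.Contents)-1].Key
		} else {
			return nil
		}
	}
}

func main() {
	// requires AWS credentials to be configured in the environment
	sess := session.Must(session.NewSession())
	_ = listAll(s3.New(sess), "my-bucket", 1000)
}
```
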
Nick Craig-Wood
32a3ba9e3f Add Outvi V to contributors 2020-01-02 11:52:43 +00:00
Outvi V
db1c7f9ca8 s3: Add new region Asia Pacific (Hong Kong) 2020-01-02 11:10:48 +00:00
Nick Craig-Wood
207474abab sync: add --no-check-dest flag - fixes #3616 2019-12-29 16:47:57 +00:00
Nick Craig-Wood
f754d897e5 Add Wei He to contributors 2019-12-28 13:29:08 +00:00
Wei He
4daecd3158 docs: fix in-page anchor navigation positioning 2019-12-22 23:33:12 +00:00
Cnly
59c75ba442 accounting: fix error count shown as checks - fixes #3814 2019-12-23 03:03:19 +08:00
Nick Craig-Wood
0ecb8bc2f9 s3: fix url decoding of NextMarker - fixes #3799
Before this patch we were failing to URL decode the NextMarker when
URL encoding was used for the listing.

The result of this was duplicated listing entries for directories
with >1000 entries where the NextMarker was a file containing a space.
2019-12-12 13:33:30 +00:00
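
The essence of the fix, sketched: when the listing was requested with URL encoding, the NextMarker comes back percent-encoded like the keys and must be unescaped before being reused as the next Marker (the helper and example key are illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// decodeMarker unescapes a NextMarker returned from a listing that was
// requested with URL encoding, so it can be passed back as Marker.
func decodeMarker(next string, urlEncoded bool) (string, error) {
	if !urlEncoded {
		return next, nil
	}
	return url.QueryUnescape(next)
}

func main() {
	// illustrative encoding of a key containing a space
	m, err := decodeMarker("directory/my+file.txt", true)
	fmt.Println(m, err) // directory/my file.txt <nil>
}
```
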
Nick Craig-Wood
1ab4985046 vfs: when renaming files in the cache, rename the cache item in memory too 2019-12-12 13:31:10 +00:00
Nick Craig-Wood
6e683b4359 vfs: fix rename of open files when using the VFS cache
Before this change, renaming an open file when using the VFS cache was
delayed until the file was closed.  This meant that the file was not
readable after a rename even though it was in the cache.

After this change we rename the local cache file and the in memory
cache, delaying only the rename of the file in object storage.

See: https://forum.rclone.org/t/xen-orchestra-ebadf-bad-file-descriptor-write/13104
2019-12-12 13:31:10 +00:00
Nick Craig-Wood
241921c786 vfs: don't cache the path in RW file objects to fix renaming 2019-12-12 13:31:10 +00:00
buengese
a186284b23 asyncreader: fix EOF error 2019-12-10 12:12:29 +00:00
Ivan Andreev
41ba1bba2b chunker: reduce length of temporary suffix 2019-12-09 16:56:32 +00:00
Nick Craig-Wood
50bb9b7bdd check: fix --one-way recursing more directories than it needs to
Before this change rclone traversed all directories in the destination.

After this change rclone doesn't traverse directories in the
destination that don't exist in the source if the `--one-way` flag is
set.

See: https://forum.rclone.org/t/check-with-one-way-flag-should-not-traverses-all-destination-directories/13263
2019-12-07 13:26:55 +00:00
Nick Craig-Wood
4537d9b5cf operations: make reopen code error on NoLowLevelRetry errors - fixes #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
684dbe0e9d local: make source file being updated errors be NoLowLevelRetry errors #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
572c1079a5 fserrors: Make a new NoLowLevelRetry error and don't retry them #3777 2019-12-06 10:54:03 +00:00
Nick Craig-Wood
cb97239a60 build: pin actions/checkout to v1 to fix build failure 2019-12-04 13:48:03 +00:00
Nick Craig-Wood
e48145f959 Add David Cole to contributors 2019-12-04 12:14:30 +00:00
Nick Craig-Wood
2150cf7362 Add email for Aleksandar Janković 2019-12-04 12:14:21 +00:00
David Cole
707e51eac7 docs: correct typo in gui docs 2019-12-04 12:08:52 +00:00
Nick Craig-Wood
0d10640aaa s3: add --s3-copy-cutoff for size to switch to multipart copy
Before this change we used the same (relatively low) limits for server
side copy as we did for multipart uploads.  It doesn't make sense to
use the same limits since no data is being downloaded or uploaded for
a server side copy.

This change introduces a new parameter --s3-copy-cutoff to control
when the switch from single to multipart server side copy happens and
defaults it to the maximum 5GB.

This makes server side copies much more efficient.

It also fixes the erroneous error when trying to set the modification
time of a file bigger than 5GB.

See #3778
2019-12-03 10:37:55 +00:00
Nick Craig-Wood
f4746f5064 s3: fix multipart copy - fixes #3778
Before this change multipart copies were giving the error

    Range specified is not valid for source object of size

This was due to an off by one error in the range source introduced in
7b1274e29a "s3: support for multipart copy"
2019-12-03 10:37:55 +00:00
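
Since the copy range header is inclusive at both ends, a part covering partSize bytes from offset must end at offset+partSize-1. A small illustrative sketch of computing the ranges (not the actual backend code):

```go
package main

import "fmt"

// copySourceRange builds an inclusive byte range header value for one
// part of a multipart copy. totalSize is the size of the source object.
func copySourceRange(offset, partSize, totalSize int64) string {
	end := offset + partSize - 1 // inclusive, hence the -1
	if end > totalSize-1 {
		end = totalSize - 1 // clamp the last part
	}
	return fmt.Sprintf("bytes=%d-%d", offset, end)
}

func main() {
	// a 10 MiB object copied in 4 MiB parts
	const MiB = 1 << 20
	for off := int64(0); off < 10*MiB; off += 4 * MiB {
		fmt.Println(copySourceRange(off, 4*MiB, 10*MiB))
	}
	// bytes=0-4194303
	// bytes=4194304-8388607
	// bytes=8388608-10485759
}
```
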
Aleksandar Janković
c05bb63f96 s3: fix DisableChecksum condition 2019-12-02 15:15:59 +00:00
Danil Semelenov
e2773b3b4e Fix completion with an encrypted config
Closes #3767.
2019-11-29 14:48:12 +00:00
Nick Craig-Wood
d3b0bed091 drive: make sure invalid auth for teamdrives always reports an error
For some reason Google doesn't return an error if you use a service
account with the wrong permissions to list a team drive.  This gives
the user the false impression that the drive is empty.

This change:
- calls teamdrives get on rclone about
- calls teamdrives get on a listing of the root which returned no entries

These will both detect a team drive which has the incorrect auth and
work around the issue.

Fixes: #3763
See: https://forum.rclone.org/t/rclone-missing-error-code-when-sas-have-no-permission/13086
See: https://forum.rclone.org/t/need-need-bug-verification-rclone-about-doesnt-work-on-teamdrives-empty-output/13105
2019-11-28 10:51:17 +00:00
Nick Craig-Wood
33c80bbb96 jottacloud: add URL to generate Login Token to config wizard 2019-11-28 10:03:48 +00:00
Nick Craig-Wood
705e4694ed webdav: fix case of "Bearer" in Authorization: header to agree with RFC
Before this change rclone used "Authorization: BEARER token".  However,
according to the RFC this should be "Bearer"

https://tools.ietf.org/html/rfc6750#section-2.1

This changes it to "Authorization: Bearer token"

Fixes #3751 and interop with Salesforce Webdav server
2019-11-27 12:04:31 +00:00
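
For example, setting the header with the exact scheme name from RFC 6750 (a trivial sketch, with a placeholder token):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	token := "example-token" // placeholder
	req, _ := http.NewRequest("PROPFIND", "https://example.com/webdav/", nil)
	// Per RFC 6750 the scheme is spelled "Bearer", not "BEARER".
	req.Header.Set("Authorization", "Bearer "+token)
	fmt.Println(req.Header.Get("Authorization"))
}
```
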
Nick Craig-Wood
4fbc90d115 webdav: make nextcloud only upload SHA1 checksums
When using nextcloud, before this change we only uploaded one of the
SHA1 or MD5 checksums in the OC-Checksum header, with preference to
SHA1 if both were set.

This made the MD5 checksums read as an empty string, which makes
syncing with checksums less useful than it should be, as all the MD5
checksums are blank.

This change makes it so that we only upload the SHA1 to nextcloud.

The behaviour of owncloud is unchanged as owncloud uses the checksum
as an upload integrity check only and calculates its own checksums.

See: https://forum.rclone.org/t/how-to-specify-hash-method-to-checksum/13055
2019-11-27 11:58:55 +00:00
Nick Craig-Wood
ed39adc65b Add Fernando to contributors 2019-11-27 11:40:44 +00:00
Fernando
162fdfe455 mount: document remotes as network shares on Windows
Provided instructions for mounting remotes as network shares/network drives in a Windows environment
2019-11-27 11:40:24 +00:00
buengese
8f33c932f2 jottacloud: update docs for new auth method 2019-11-26 13:49:49 +00:00
buengese
4195bd7880 jottacloud: use new auth method used by official client 2019-11-26 13:49:49 +00:00
Marco Molteni
d72f3e31c0 docs/install: explain how to work around macOS Gatekeeper requiring notarization
Fix #3689
2019-11-26 12:33:30 +00:00
Garry McNulty
11f44cff50 drive: add --drive-use-shared-date to use date file was shared instead of modified date - fixes #3624 2019-11-26 12:19:44 +00:00
SezalAgrawal
c3751e9a50 operations: fix dedupe continuing on errors like insufficientFilePermission - fixes #3470
* Fix dedupe on merge continuing on errors like insufficientFilePermission
* Sorted the directories to remove recursion logic
2019-11-26 10:58:52 +00:00
Nick Craig-Wood
420ae905b5 vfs: make sure existing files opened for write show correct size
Before this change if an existing file was opened for write without
truncate its size would show as 0 rather than the full size of the
file.
2019-11-25 11:31:44 +00:00
Nick Craig-Wood
a7d65bd519 sftp: add --sftp-skip-links to skip symlinks and non regular files - fixes #3716
This also corrects the symlink detection logic to only check symlink
files.  Previously it was checking all directories too, which made it
do more stat calls than necessary.
2019-11-24 16:10:53 +00:00
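
A sketch of the corrected check: inspect only the mode of the entry itself and, when the skip option is set, skip symlinks and anything that is neither a regular file nor a directory (names are illustrative, not the sftp backend's own):

```go
package main

import (
	"fmt"
	"os"
)

// shouldSkip reports whether a directory entry should be skipped when
// skipLinks is set: symlinks and other non-regular, non-directory files.
func shouldSkip(info os.FileInfo, skipLinks bool) bool {
	if !skipLinks {
		return false
	}
	mode := info.Mode()
	if mode&os.ModeSymlink != 0 {
		return true // symlink
	}
	return !mode.IsRegular() && !mode.IsDir()
}

func main() {
	info, err := os.Lstat(".")
	if err == nil {
		fmt.Println(shouldSkip(info, true)) // false: "." is a directory
	}
}
```
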
Nick Craig-Wood
1db31d7149 swift: fix parsing of X-Object-Manifest
Before this change we forgot to URL decode the X-Object-Manifest in a dynamic large object.

This problem was introduced by 2fe8285f89 "swift: reserve
segments of dynamic large object when delete objects in container what
was enabled versioning."
2019-11-21 13:25:02 +00:00
Nick Craig-Wood
4641bd5116 Add anuar45 to contributors 2019-11-21 11:16:04 +00:00
anuar45
7e602dbf39 stats: show deletes in stats and hide zero stats
This shows deletes in the stats.  It also doesn't show zero stats
in order not to make the stats block too long.
2019-11-21 11:15:47 +00:00
Nick Craig-Wood
e14d968f8d Start v1.50.2-DEV development 2019-11-19 16:51:32 +00:00
Nick Craig-Wood
e0eeeaafcd accounting: don't show entries in both transferring and checking
See: https://forum.rclone.org/t/showing-progress-checking/12958
2019-11-19 13:22:33 +00:00
Nick Craig-Wood
d46f8d0ae5 accounting: fix memory leak on retries operations
Before this change if an operation was retried on operations.Copy and
the operation was large enough to use an async buffer then an async
buffer was leaked on the retry.  This leaked memory, a file handle and
a goroutine.

After this change if Account.WithBuffer is called and there is already
a buffer, then a new one won't be allocated.
2019-11-19 12:11:59 +00:00
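
A minimal sketch of the idea, using an illustrative stand-in for the accounting type: make attaching the buffer idempotent so a retried copy reuses the existing one instead of leaking another:

```go
package main

import "fmt"

// account is an illustrative stand-in for accounting.Account, not the
// real type.
type account struct {
	buffered bool
}

// WithBuffer attaches an async read buffer, but only once: calling it
// again on a retry is a no-op, so no extra buffer, file handle or
// goroutine is leaked.
func (a *account) WithBuffer() *account {
	if a.buffered {
		return a
	}
	a.buffered = true
	// ... allocate the async buffer and start its goroutine here ...
	return a
}

func main() {
	a := &account{}
	a.WithBuffer()
	a.WithBuffer() // retry path: nothing new allocated
	fmt.Println(a.buffered) // true
}
```
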
Nick Craig-Wood
1e6278556c Add Maciej Zimnoch to contributors 2019-11-18 16:28:19 +00:00
Nick Craig-Wood
303f4ee152 Add Ankur Gupta to contributors 2019-11-18 16:28:19 +00:00
Nguyễn Hữu Luân
2fe8285f89 swift: reserve segments of dynamic large objects when deleting objects in a container which has versioning enabled.
Also add code to handle moving an object when it is contained in a container which has versioning enabled with "X-History-Location".
2019-11-18 16:26:10 +00:00
Maciej Zimnoch
f5443ac939 accounting: clear finished transfer in stats-reset
In order to reduce memory usage `stats-reset` also
clears finished transfers.

Fixes #3734
2019-11-18 14:25:32 +00:00
Maciej Zimnoch
7cf056b2c2 accounting: allow MaxCompletedTransfers to be configurable
rclone library users might be interested in changing the default value
to something else, or even disabling it. With the current version that
is impossible, which leads to races when the number of uploaded objects
exceeds the default limit.

Fixes #3732
2019-11-18 14:25:32 +00:00
Ankur Gupta
75a6c49f87 Fix error counter - fixes #3650
For a few commands, rclone counted an error multiple times. This was fixed
by creating a new error type which keeps a flag to remember whether the
error has already been counted. The CountError function now wraps the
original error with the above new error type and returns it.
2019-11-18 14:13:02 +00:00
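
A sketch of the idea behind the new error type: wrap the error with a marker the first time it is counted so later calls are no-ops (the type and function are illustrative, not the exact fserrors implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// countedError marks an error that has already been added to the stats.
type countedError struct{ error }

func (e countedError) Unwrap() error { return e.error }

var errorCount int // illustrative global counter

// countError increments the error counter unless err was counted before,
// and returns the (possibly wrapped) error to pass up the call chain.
func countError(err error) error {
	if err == nil {
		return nil
	}
	var already countedError
	if errors.As(err, &already) {
		return err // already counted
	}
	errorCount++
	return countedError{err}
}

func main() {
	err := errors.New("boom")
	err = countError(err)
	err = countError(err) // second count is a no-op
	fmt.Println(errorCount) // 1
}
```
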
Nick Craig-Wood
19229b1215 drive: fix --drive-root-folder-id with team/shared drives
Before this change rclone used the team_drive ID as the root if set
even if the root_folder_id was set too.

This change uses the root_folder_id in preference to the team_drive,
which restores the functionality.

This problem was introduced by ba7c2ac443

Fixes #3742
2019-11-16 18:38:21 +00:00
Nick Craig-Wood
b5bb4c2a21 vfs: fix tests not to upload a 0 length file
Some remotes can't upload 0 length files, so this fixes the
TestCacheRename test so that it writes something to the file.
2019-11-15 09:26:40 +00:00
Nick Craig-Wood
479c803fd9 vendor: update all dependencies 2019-11-14 21:51:34 +00:00
Nick Craig-Wood
3dcf1e61cf cache: follow move of upstream library github.com/coreos/bbolt github.com/etcd-io/bbolt 2019-11-14 21:51:34 +00:00
Nick Craig-Wood
3da1cbfc81 Add Marco Molteni to contributors 2019-11-14 21:51:34 +00:00
Marco Molteni
0c9a8cf776 doc: add Scaleway to the S3 table of contents
Hello, documentation for Scaleway was already there, but the TOC was missing it.
2019-11-14 21:49:43 +00:00
Nick Craig-Wood
f3871377c3 Add Sebastian Brandt to contributors 2019-11-14 12:54:42 +00:00
Nick Craig-Wood
cc9a7dc073 Add Barry Muldrey to contributors 2019-11-14 12:54:42 +00:00
Nick Craig-Wood
b61dd809ee Add new email for Anagh Kumar Baranwal 2019-11-14 12:54:38 +00:00
Sebastian Brandt
f158a398f3 sftp: Retry Creation of Connection - fixes #3656
Removes the existing rate limiter because it is implicitly included in
the pacer.
2019-11-14 12:50:01 +00:00
jaKa
acefa5c40d koofr: use rclone HTTP client. 2019-11-14 11:36:44 +00:00
Barry Muldrey
2784c3234b fs/config/configflags: fix --compare-dest and --copy-dest help strings
from rsync manual:

--compare-dest=DIR
    This option instructs rsync to use DIR on the destination machine as an
    additional hierarchy to compare destination files against doing transfers
    (if the files are missing in the destination directory). If a file is found
    in DIR that is identical to the sender's file, the file will NOT be
    transferred to the destination directory. This is useful for creating
    a sparse backup of just files that have changed from an earlier backup.

--copy-dest=DIR
    This option behaves like --compare-dest, but rsync will also copy unchanged
    files found in DIR to the destination directory using a local copy.
    This is useful for doing transfers to a new destination while leaving
    existing files intact, and then doing a flash-cutover when all files
    have been successfully transferred.
2019-11-12 13:37:58 +00:00
Nick Craig-Wood
c21a4fee58 mount,cmount: make sure we call unmount when exiting 2019-11-11 22:08:52 +00:00
Nick Craig-Wood
358f5a8084 vfs: fix edge cases when reading ModTime from file
This fixes the unreliable test TestMount/CacheMode=full/TestFileModTime
2019-11-11 16:20:28 +00:00
Nick Craig-Wood
9115752679 proxy: reduce the internal bcrypt strength to fix race tests
Before this change the race tests were taking too long.  The bcrypt
function went from about 20ms to 1s under the race detector and this
is called for every transaction on webdav.

This change reduces the bcrypt strength so it takes about 1ms without
the race detector, which lets the race tests pass while still providing
adequate security for in-memory-only storage.
2019-11-11 16:20:28 +00:00
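
The knob in question is the bcrypt cost parameter; for illustration, a lower cost such as bcrypt.MinCost keeps hashing around a millisecond (a sketch using golang.org/x/crypto/bcrypt, not the proxy code itself):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	pass := []byte("example-password") // placeholder
	// bcrypt.DefaultCost (10) takes tens of milliseconds under the race
	// detector; bcrypt.MinCost (4) takes on the order of a millisecond.
	hash, err := bcrypt.GenerateFromPassword(pass, bcrypt.MinCost)
	if err != nil {
		panic(err)
	}
	fmt.Println(bcrypt.CompareHashAndPassword(hash, pass)) // <nil> on match
}
```
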
Nick Craig-Wood
51efb349ac vfs: revise locking in file and dir to fix race conditions 2019-11-11 16:20:27 +00:00
Nick Craig-Wood
e0d9314059 mounttest: fix occasionally failing test TestRenameOpenHandle 2019-11-11 16:20:27 +00:00
Nick Craig-Wood
21c6babdbb mount: enable async reads for a 20% speedup
Now that the vfs can cope with out of order reads we can enable the
async read feature for an increase in throughput on the local disk of
about 20%.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
5beeac7959 vfs: make ReadAt for non cached files work better with non-sequential reads
This makes ReadAt for non cached files wait a short time (up to 5ms)
if it gets an out of order read (which would normally cause a seek,
which takes a long time) to see if the gap will be filled with an in
order read.

This makes mount2 based on go-fuse work more efficiently and enables
async reading in normal mount.

A similar change was done for WriteAt in af030f74f5
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
be5392f448 vfs: only calculate one hash for reads
This speeds up mounting on the local backend enormously.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
c00dcb7e67 chunkedreader: disable hash calculation for first segment
This will produce a slight speedup for small files.
2019-11-11 16:20:27 +00:00
Nick Craig-Wood
6150ae89d6 vfs: add a newly created file straight into the directory 2019-11-11 15:20:09 +00:00
Nick Craig-Wood
1e423d21e1 drive: fix listing of the root directory with drive.files scope
We attempt to find the ID of the root folder by doing a GET on the
folder ID "root". With scope "drive.files" this fails with a 404
message.

After this change if we get the 404 message, we just carry on using
"root" as the root folder ID and we cache that for future lookups.

This means that changenotify messages will not work correctly in the
root folder but otherwise has minor consequences.

See: https://forum.rclone.org/t/fresh-raspberry-pi-build-google-drive-404-error-failed-to-ls-googleapi-error-404-file-not-found/12791
2019-11-11 09:07:34 +00:00
Brett Dutro
53d55ae760 Add test for cache renaming functionality 2019-11-10 11:58:46 +00:00
Anagh Kumar Baranwal
5928704e1b On rename, rename in cache too if the file exists
Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2019-11-10 11:58:46 +00:00
buengese
5ddfa9f7f6 config: SetValueAndSave ignore error if config section does not exist yet 2019-11-09 16:44:08 +00:00
Nick Craig-Wood
9b5308144f s3: Reduce memory usage streaming files by reducing max stream upload size
Before this change rclone would allow the user to stream (eg with
rclone mount, rclone rcat or uploading google photos or docs) 5TB
files.  This meant that rclone allocated 4 * 525 MB buffers per
transfer which is way too much memory by default.

This change makes rclone use the configured chunk size for streamed
uploads.  This is 5MB by default which means that rclone can stream
upload files up to 48GB by default staying below the 10,000 chunks
limit.

This can be increased with --s3-chunk-size if necessary.

If rclone detects that a file is being streamed to s3 it will make a
single NOTICE level log stating the limitation.

This fixes the enormous memory usage.

Fixes #3568
See: https://forum.rclone.org/t/how-much-memory-does-rclone-need/12743
2019-11-09 15:55:19 +00:00
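
The 48GB figure follows from S3's 10,000-part limit: the maximum streamed size is roughly chunk size × 10,000, so 5 MiB parts allow about 48.8 GiB. A tiny worked example:

```go
package main

import "fmt"

func main() {
	const maxParts = 10000            // S3 limit on parts per upload
	const chunkSize = 5 * 1024 * 1024 // default --s3-chunk-size of 5 MiB
	maxStream := int64(chunkSize) * maxParts
	fmt.Printf("max streamed upload: %d bytes (~%.1f GiB)\n",
		maxStream, float64(maxStream)/(1<<30))
	// max streamed upload: 52428800000 bytes (~48.8 GiB)
}
```
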
Aleksandar Jankovic
4b20afa94a backend/s3: fix ExpiryWindow value
ExpiryWindow accepts a duration but it was set to the value 3.
This changes it to 3 * time.Minute since the default is 5 min.
2019-11-05 13:55:55 +00:00
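
For illustration, the difference between the two values when a time.Duration is expected:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A bare 3 assigned to a time.Duration is 3 nanoseconds, so the
	// credentials would be refreshed with effectively no expiry window.
	var wrong time.Duration = 3
	right := 3 * time.Minute // the intended window (default is 5 minutes)
	fmt.Println(wrong, right) // 3ns 3m0s
}
```
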
Nick Craig-Wood
049ff1f269 config: check a remote exists when creating a new one 2019-11-05 12:39:33 +00:00
Nick Craig-Wood
3f7af64316 config: give config questions default values - fixes #3672 2019-11-05 11:53:44 +00:00
Nick Craig-Wood
0eaf5475ef Start v1.50.1-DEV development 2019-11-02 15:26:01 +00:00
Nick Craig-Wood
7bf056316f local: fix listings of . on Windows - fixes #3676 2019-10-30 16:00:18 +00:00
Xiaoxing Ye
520ddbcceb config: do not open browser on headless if google fs
On google fs (drive, google photos, and google cloud storage), if
headless is selected, do not open browser.

This also supplies a new option "auth-no-open-browser" for authorize
if the user does not want it.

This should fix #3323.
2019-10-30 14:12:42 +00:00
Nick Craig-Wood
1ce1ea34aa hash: fix hash names for DropboxHash and CRC-32
These were unintentionally renamed as part of 1dc8bcd48c

Fixes #3679
2019-10-30 12:20:10 +00:00
Nick Craig-Wood
e6378daadf fshttp: don't print token bucket errors on context cancelled
These happen as a natural part of exceeding --max-transfer and we
don't need to worry the user with them.
2019-10-30 12:20:10 +00:00
Nick Craig-Wood
7ff95c6250 Add Xiaoxing Ye to contributors 2019-10-30 12:20:10 +00:00
Xiaoxing Ye
6d58d9a86f vendor: change goftp/server url
Closing #3674
2019-10-29 17:41:56 +00:00
Chaitanya
e0356f5aae rcd: Adding group parameter to stats 2019-10-29 16:39:37 +00:00
Xiaoxing Ye
191cfb79d1 onedrive: no trailing slash reading metadata...
No trailing slash when reading metadata of an item given its item ID.

This should fix #3664.
2019-10-29 13:33:11 +00:00
Nick Craig-Wood
e81eca4055 fshttp: fix error reporting on tpslimit token bucket errors 2019-10-28 22:11:38 +00:00
Nick Craig-Wood
ee3215ac76 build: make replacement of new rclone binary atomic on build
This avoids the "text file busy" message when trying to replace the
binary of a running rclone.
2019-10-28 22:11:38 +00:00
Nick Craig-Wood
199ac61bde rc: add methods to turn on blocking and mutex profiling 2019-10-28 22:11:38 +00:00
Nick Craig-Wood
a40cc1167d Add zero-24 to contributors 2019-10-28 16:49:33 +00:00
zero-24
c57ea8d867 docs: add instructions to create your own dropbox app ID 2019-10-28 16:49:16 +00:00
Nick Craig-Wood
1868c77e16 rc: fix formatting of docs 2019-10-27 10:43:40 +00:00
Brett Dutro
378a3f4133 mount: replace use of WriteAt with Write for cache mode >= writes and O_APPEND
os.File.WriteAt returns an error if a file was opened with O_APPEND.
This replaces it with os.File.Write if the file was opened with
O_APPEND.
2019-10-26 17:27:52 +01:00
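
A small sketch of the behaviour being worked around: fall back to Write when the file was opened with O_APPEND, since os.File.WriteAt refuses to work on such files (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// writeToCache writes data at off, falling back to Write for files
// opened with O_APPEND, where WriteAt returns an error.
func writeToCache(f *os.File, openFlags int, data []byte, off int64) (int, error) {
	if openFlags&os.O_APPEND != 0 {
		// appending: the kernel picks the offset, so use Write
		return f.Write(data)
	}
	return f.WriteAt(data, off)
}

func main() {
	f, err := os.OpenFile("cache.tmp", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		panic(err)
	}
	defer os.Remove("cache.tmp")
	defer f.Close()
	n, err := writeToCache(f, os.O_APPEND, []byte("hello\n"), 0)
	fmt.Println(n, err)
}
```
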
Nick Craig-Wood
daff5a824e Start v1.50.0-DEV development 2019-10-26 12:42:06 +01:00
695 changed files with 90182 additions and 75789 deletions


@@ -102,9 +102,10 @@ jobs:
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@master uses: actions/checkout@v1
with: with:
path: ./src/github.com/${{ github.repository }} # Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Install Go - name: Install Go
uses: actions/setup-go@v1 uses: actions/setup-go@v1
@@ -201,7 +202,8 @@ jobs:
env: env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }} RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
# working-directory: '$(modulePath)' # working-directory: '$(modulePath)'
if: matrix.deploy && github.head_ref == '' # Deploy binaries if enabled in config && not a PR && not a fork
if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
xgo: xgo:
timeout-minutes: 60 timeout-minutes: 60
@@ -211,9 +213,10 @@ jobs:
steps: steps:
- name: Checkout - name: Checkout
uses: actions/checkout@master uses: actions/checkout@v1
with: with:
path: ./src/github.com/${{ github.repository }} # Checkout into a fixed path to avoid import path problems on go < 1.11
path: ./src/github.com/rclone/rclone
- name: Set environment variables - name: Set environment variables
shell: bash shell: bash
@@ -247,4 +250,5 @@ jobs:
make circleci_upload make circleci_upload
env: env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }} RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
if: github.head_ref == '' # Upload artifacts if not a PR && not a fork
if: github.head_ref == '' && github.repository == 'rclone/rclone'


@@ -82,13 +82,9 @@ You patch will get reviewed and you might get asked to fix some stuff.
If so, then make the changes in the same branch, squash the commits, If so, then make the changes in the same branch, squash the commits,
rebase it to master then push it to GitHub with `--force`. rebase it to master then push it to GitHub with `--force`.
## Enabling CI for your fork ## ## CI for your fork ##
The CI config files for rclone have taken care of forks of the project, so you can enable CI for your fork repo easily. rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
rclone currently uses [Travis CI](https://travis-ci.org/), [AppVeyor](https://ci.appveyor.com/), and
[Circle CI](https://circleci.com/) to build the project. To enable them for your fork, simply go into their
websites, find your fork of rclone, and enable building there.
## Testing ## ## Testing ##

MANUAL.html generated

File diff suppressed because one or more lines are too long

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual % rclone(1) User Manual
% Nick Craig-Wood % Nick Craig-Wood
% Nov 19, 2019 % Oct 26, 2019
# Rclone - rsync for cloud storage # Rclone - rsync for cloud storage
@@ -7133,7 +7133,6 @@ Authentication is required for this call.
### config/get: Get a remote in the config file. {#config/get} ### config/get: Get a remote in the config file. {#config/get}
Parameters: Parameters:
- name - name of remote to get - name - name of remote to get
See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above. See the [config dump command](https://rclone.org/commands/rclone_config_dump/) command for more information on the above.
@@ -7276,7 +7275,6 @@ If group is not provided then summed up stats for all groups will be
returned. returned.
Parameters Parameters
- group - name of the stats group (string) - group - name of the stats group (string)
Returns the following values: Returns the following values:
@@ -7318,8 +7316,8 @@ This clears counters and errors for all stats or specific stats group if group
is provided. is provided.
Parameters Parameters
- group - name of the stats group (string) - group - name of the stats group (string)
```
### core/transferred: Returns stats about completed transfers. {#core/transferred} ### core/transferred: Returns stats about completed transfers. {#core/transferred}
@@ -7333,7 +7331,6 @@ returned.
Note only the last 100 completed transfers are returned. Note only the last 100 completed transfers are returned.
Parameters Parameters
- group - name of the stats group (string) - group - name of the stats group (string)
Returns the following values: Returns the following values:
@@ -7357,7 +7354,6 @@ Returns the following values:
### core/version: Shows the current version of rclone and the go runtime. {#core/version} ### core/version: Shows the current version of rclone and the go runtime. {#core/version}
This shows the current version of go and the go runtime This shows the current version of go and the go runtime
- version - rclone version, eg "v1.44" - version - rclone version, eg "v1.44"
- decomposed - version number as [major, minor, patch, subpatch] - decomposed - version number as [major, minor, patch, subpatch]
- note patch and subpatch will be 999 for a git compiled version - note patch and subpatch will be 999 for a git compiled version
@@ -7371,17 +7367,14 @@ This shows the current version of go and the go runtime
Parameters - None Parameters - None
Results Results
- jobids - array of integer job ids - jobids - array of integer job ids
### job/status: Reads the status of the job ID {#job/status} ### job/status: Reads the status of the job ID {#job/status}
Parameters Parameters
- jobid - id of the job (integer) - jobid - id of the job (integer)
Results Results
- finished - boolean - finished - boolean
- duration - time in seconds that the job ran for - duration - time in seconds that the job ran for
- endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00") - endTime - time the job finished (eg "2018-10-26T18:50:20.528746884+01:00")
@@ -7396,7 +7389,6 @@ Results
### job/stop: Stop the running job {#job/stop} ### job/stop: Stop the running job {#job/stop}
Parameters Parameters
- jobid - id of the job (integer) - jobid - id of the job (integer)
### operations/about: Return the space used on the remote {#operations/about} ### operations/about: Return the space used on the remote {#operations/about}
@@ -8460,7 +8452,7 @@ These flags are available for every command.
--use-json-log Use json log format. --use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs). --use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata --use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.50.2") --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.50.0")
-v, --verbose count Print lots more stuff (repeat for more) -v, --verbose count Print lots more stuff (repeat for more)
``` ```
@@ -20189,25 +20181,6 @@ to override the default choice.
# Changelog # Changelog
## v1.50.2 - 2019-11-19
* Bug Fixes
* accounting: Fix memory leak on retries operations (Nick Craig-Wood)
* Drive
* Fix listing of the root directory with drive.files scope (Nick Craig-Wood)
* Fix --drive-root-folder-id with team/shared drives (Nick Craig-Wood)
## v1.50.1 - 2019-11-02
* Bug Fixes
* hash: Fix accidentally changed hash names for `DropboxHash` and `CRC-32` (Nick Craig-Wood)
* fshttp: Fix error reporting on tpslimit token bucket errors (Nick Craig-Wood)
* fshttp: Don't print token bucket errors on context cancelled (Nick Craig-Wood)
* Local
* Fix listings of . on Windows (Nick Craig-Wood)
* Onedrive
* Fix DirMove/Move after Onedrive change (Xiaoxing Ye)
## v1.50.0 - 2019-10-26 ## v1.50.0 - 2019-10-26
* New backends * New backends

MANUAL.txt generated

File diff suppressed because it is too large


@@ -46,7 +46,8 @@ endif
rclone: rclone:
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/ mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/ cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all: test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all


@@ -34,6 +34,7 @@ Rclone *("rsync for cloud storage")* is a command line program to sync files and
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost) * Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/) * Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* FTP [:page_facing_up:](https://rclone.org/ftp/) * FTP [:page_facing_up:](https://rclone.org/ftp/)
* GetSky [:page_facing_up:](https://rclone.org/jottacloud/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/) * Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/) * Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/) * Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)


@@ -89,7 +89,7 @@ Now
* make TAG=${NEW_TAG} upload_github * make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this * NB this overwrites the current beta so we need to do this
* git co master * git co master
* make LAST_TAG=${NEW_TAG} startdev * make VERSION=${NEW_TAG} startdev
* # cherry pick the changes to the changelog and VERSION * # cherry pick the changes to the changelog and VERSION
* git checkout ${BASE_TAG}-fixes VERSION docs/content/changelog.md * git checkout ${BASE_TAG}-fixes VERSION docs/content/changelog.md
* git commit --amend * git commit --amend

backend/cache/cache_mount_other_test.go vendored Normal file

@@ -0,0 +1,20 @@
// +build !linux !go1.11
// +build !darwin !go1.11
// +build !freebsd !go1.11
// +build !windows
package cache_test
import (
"testing"
"github.com/rclone/rclone/fs"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
panic("mountFs not defined for this platform")
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
panic("unmountFs not defined for this platform")
}


@@ -1,4 +1,4 @@
// +build !plan9,!windows // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package cache_test package cache_test


@@ -16,7 +16,7 @@ import (
"sync" "sync"
"time" "time"
bolt "github.com/coreos/bbolt" bolt "github.com/etcd-io/bbolt"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fs/walk"


@@ -12,11 +12,13 @@ import (
gohash "hash" gohash "hash"
"io" "io"
"io/ioutil" "io/ioutil"
"math/rand"
"path" "path"
"regexp" "regexp"
"sort" "sort"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/pkg/errors" "github.com/pkg/errors"
@@ -34,46 +36,57 @@ import (
// and optional metadata object. If it's present, // and optional metadata object. If it's present,
// meta object is named after the original file. // meta object is named after the original file.
// //
// The only supported metadata format is simplejson atm.
// It supports only per-file meta objects that are rudimentary,
// used mostly for consistency checks (lazily for performance reasons).
// Other formats can be developed that use an external meta store
// free of these limitations, but this needs some support from
// rclone core (eg. metadata store interfaces).
//
// The following types of chunks are supported: // The following types of chunks are supported:
// data and control, active and temporary. // data and control, active and temporary.
// Chunk type is identified by matching chunk file name // Chunk type is identified by matching chunk file name
// based on the chunk name format configured by user. // based on the chunk name format configured by user.
// //
// Both data and control chunks can be either temporary or // Both data and control chunks can be either temporary (aka hidden)
// active (non-temporary). // or active (non-temporary aka normal aka permanent).
// An operation creates temporary chunks while it runs. // An operation creates temporary chunks while it runs.
// By completion it removes temporary and leaves active // By completion it removes temporary and leaves active chunks.
// (aka normal aka permanent) chunks.
// //
// Temporary (aka hidden) chunks have a special hardcoded suffix // Temporary chunks have a special hardcoded suffix in addition
// in addition to the configured name pattern. The suffix comes last // to the configured name pattern.
// to prevent name collisions with non-temporary chunks. // Temporary suffix includes so called transaction identifier
// Temporary suffix includes so called transaction number usually // (abbreviated as `xactID` below), a generic non-negative base-36 "number"
// abbreviated as `xactNo` below, a generic non-negative integer
// used by parallel operations to share a composite object. // used by parallel operations to share a composite object.
// Chunker also accepts the longer decimal temporary suffix (obsolete),
// which is transparently converted to the new format. In its maximum
// length of 13 decimals it makes a 7-digit base-36 number.
// //
// Chunker can tell data chunks from control chunks by the characters // Chunker can tell data chunks from control chunks by the characters
// located in the "hash placeholder" position of configured format. // located in the "hash placeholder" position of configured format.
// Data chunks have decimal digits there. // Data chunks have decimal digits there.
// Control chunks have a short lowercase literal prepended by underscore // Control chunks have in that position a short lowercase alphanumeric
// in that position. // string (starting with a letter) prepended by underscore.
// //
// Metadata format v1 does not define any control chunk types, // Metadata format v1 does not define any control chunk types,
// they are currently ignored aka reserved. // they are currently ignored aka reserved.
// In future they can be used to implement resumable uploads etc. // In future they can be used to implement resumable uploads etc.
// //
const ( const (
ctrlTypeRegStr = `[a-z]{3,9}` ctrlTypeRegStr = `[a-z][a-z0-9]{2,6}`
tempChunkFormat = `%s..tmp_%010d` tempSuffixFormat = `_%04s`
tempChunkRegStr = `\.\.tmp_([0-9]{10,19})` tempSuffixRegStr = `_([0-9a-z]{4,9})`
tempSuffixRegOld = `\.\.tmp_([0-9]{10,13})`
) )
var ( var (
ctrlTypeRegexp = regexp.MustCompile(`^` + ctrlTypeRegStr + `$`) // regular expressions to validate control type and temporary suffix
ctrlTypeRegexp = regexp.MustCompile(`^` + ctrlTypeRegStr + `$`)
tempSuffixRegexp = regexp.MustCompile(`^` + tempSuffixRegStr + `$`)
) )
// Normally metadata is a small piece of JSON (about 100-300 bytes). // Normally metadata is a small piece of JSON (about 100-300 bytes).
// The size of valid metadata size must never exceed this limit. // The size of valid metadata must never exceed this limit.
// Current maximum provides a reasonable room for future extensions. // Current maximum provides a reasonable room for future extensions.
// //
// Please refrain from increasing it, this can cause old rclone versions // Please refrain from increasing it, this can cause old rclone versions
@@ -101,6 +114,9 @@ const revealHidden = false
// Prevent memory overflow due to specially crafted chunk name // Prevent memory overflow due to specially crafted chunk name
const maxSafeChunkNumber = 10000000 const maxSafeChunkNumber = 10000000
// Number of attempts to find unique transaction identifier
const maxTransactionProbes = 100
// standard chunker errors // standard chunker errors
var ( var (
ErrChunkOverflow = errors.New("chunk number overflow") ErrChunkOverflow = errors.New("chunk number overflow")
@@ -113,13 +129,6 @@ const (
delFailed = 2 // move, then delete and try again if failed delFailed = 2 // move, then delete and try again if failed
) )
// Note: metadata logic is tightly coupled with chunker code in many
// places, eg. in checks whether a file should have meta object or is
// eligible for chunking.
// If more metadata formats (or versions of a format) are added in future,
// it may be advisable to factor it into a "metadata strategy" interface
// similar to chunkingReader or linearReader below.
// Register with Fs // Register with Fs
func init() { func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
@@ -261,7 +270,7 @@ func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// detects a composite file because it finds the first chunk! // detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore) // (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta && strings.Contains(rpath, "/") { if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
firstChunkPath := f.makeChunkName(remotePath, 0, "", -1) firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
_, testErr := baseInfo.NewFs(baseName, firstChunkPath, baseConfig) _, testErr := baseInfo.NewFs(baseName, firstChunkPath, baseConfig)
if testErr == fs.ErrorIsFile { if testErr == fs.ErrorIsFile {
err = testErr err = testErr
@@ -310,12 +319,16 @@ type Fs struct {
dataNameFmt string // name format of data chunks dataNameFmt string // name format of data chunks
ctrlNameFmt string // name format of control chunks ctrlNameFmt string // name format of control chunks
nameRegexp *regexp.Regexp // regular expression to match chunk names nameRegexp *regexp.Regexp // regular expression to match chunk names
xactIDRand *rand.Rand // generator of random transaction identifiers
xactIDMutex sync.Mutex // mutex for the source of randomness
opt Options // copy of Options opt Options // copy of Options
features *fs.Features // optional features features *fs.Features // optional features
dirSort bool // reserved for future, ignored dirSort bool // reserved for future, ignored
} }
// configure must be called only from NewFs or by unit tests // configure sets up chunker for given name format, meta format and hash type.
// It also seeds the source of random transaction identifiers.
// configure must be called only from NewFs or by unit tests.
func (f *Fs) configure(nameFormat, metaFormat, hashType string) error { func (f *Fs) configure(nameFormat, metaFormat, hashType string) error {
if err := f.setChunkNameFormat(nameFormat); err != nil { if err := f.setChunkNameFormat(nameFormat); err != nil {
return errors.Wrapf(err, "invalid name format '%s'", nameFormat) return errors.Wrapf(err, "invalid name format '%s'", nameFormat)
@@ -326,6 +339,10 @@ func (f *Fs) configure(nameFormat, metaFormat, hashType string) error {
if err := f.setHashType(hashType); err != nil { if err := f.setHashType(hashType); err != nil {
return err return err
} }
randomSeed := time.Now().UnixNano()
f.xactIDRand = rand.New(rand.NewSource(randomSeed))
return nil return nil
} }
@@ -414,13 +431,13 @@ func (f *Fs) setChunkNameFormat(pattern string) error {
} }
reDataOrCtrl := fmt.Sprintf("(?:(%s)|_(%s))", reDigits, ctrlTypeRegStr) reDataOrCtrl := fmt.Sprintf("(?:(%s)|_(%s))", reDigits, ctrlTypeRegStr)
// this must be non-greedy or else it can eat up temporary suffix // this must be non-greedy or else it could eat up temporary suffix
const mainNameRegStr = "(.+?)" const mainNameRegStr = "(.+?)"
strRegex := regexp.QuoteMeta(pattern) strRegex := regexp.QuoteMeta(pattern)
strRegex = reHashes.ReplaceAllLiteralString(strRegex, reDataOrCtrl) strRegex = reHashes.ReplaceAllLiteralString(strRegex, reDataOrCtrl)
strRegex = strings.Replace(strRegex, "\\*", mainNameRegStr, -1) strRegex = strings.Replace(strRegex, "\\*", mainNameRegStr, -1)
strRegex = fmt.Sprintf("^%s(?:%s)?$", strRegex, tempChunkRegStr) strRegex = fmt.Sprintf("^%s(?:%s|%s)?$", strRegex, tempSuffixRegStr, tempSuffixRegOld)
f.nameRegexp = regexp.MustCompile(strRegex) f.nameRegexp = regexp.MustCompile(strRegex)
// craft printf formats for active data/control chunks // craft printf formats for active data/control chunks
@@ -435,34 +452,36 @@ func (f *Fs) setChunkNameFormat(pattern string) error {
return nil return nil
} }
// makeChunkName produces chunk name (or path) for given file. // makeChunkName produces chunk name (or path) for a given file.
// //
// mainPath can be name, relative or absolute path of main file. // filePath can be name, relative or absolute path of main file.
// //
// chunkNo must be a zero based index of data chunk. // chunkNo must be a zero based index of data chunk.
// Negative chunkNo eg. -1 indicates a control chunk. // Negative chunkNo eg. -1 indicates a control chunk.
// ctrlType is type of control chunk (must be valid). // ctrlType is type of control chunk (must be valid).
// ctrlType must be "" for data chunks. // ctrlType must be "" for data chunks.
// //
// xactNo is a transaction number. // xactID is a transaction identifier. Empty xactID denotes active chunk,
// Negative xactNo eg. -1 indicates an active chunk, // otherwise temporary chunk name is produced.
// otherwise produce temporary chunk name.
// //
func (f *Fs) makeChunkName(mainPath string, chunkNo int, ctrlType string, xactNo int64) string { func (f *Fs) makeChunkName(filePath string, chunkNo int, ctrlType, xactID string) string {
dir, mainName := path.Split(mainPath) dir, parentName := path.Split(filePath)
var name string var name, tempSuffix string
switch { switch {
case chunkNo >= 0 && ctrlType == "": case chunkNo >= 0 && ctrlType == "":
name = fmt.Sprintf(f.dataNameFmt, mainName, chunkNo+f.opt.StartFrom) name = fmt.Sprintf(f.dataNameFmt, parentName, chunkNo+f.opt.StartFrom)
case chunkNo < 0 && ctrlTypeRegexp.MatchString(ctrlType): case chunkNo < 0 && ctrlTypeRegexp.MatchString(ctrlType):
name = fmt.Sprintf(f.ctrlNameFmt, mainName, ctrlType) name = fmt.Sprintf(f.ctrlNameFmt, parentName, ctrlType)
default: default:
panic("makeChunkName: invalid argument") // must not produce something we can't consume panic("makeChunkName: invalid argument") // must not produce something we can't consume
} }
if xactNo >= 0 { if xactID != "" {
name = fmt.Sprintf(tempChunkFormat, name, xactNo) tempSuffix = fmt.Sprintf(tempSuffixFormat, xactID)
if !tempSuffixRegexp.MatchString(tempSuffix) {
panic("makeChunkName: invalid argument")
}
} }
return dir + name return dir + name + tempSuffix
} }
// parseChunkName checks whether given file path belongs to // parseChunkName checks whether given file path belongs to
@@ -470,20 +489,21 @@ func (f *Fs) makeChunkName(mainPath string, chunkNo int, ctrlType string, xactNo
// //
// filePath can be name, relative or absolute path of a file. // filePath can be name, relative or absolute path of a file.
// //
// Returned mainPath is a non-empty string if valid chunk name // Returned parentPath is path of the composite file owning the chunk.
// is detected or "" if it's not a chunk. // It's a non-empty string if valid chunk name is detected
// or "" if it's not a chunk.
// Other returned values depend on detected chunk type: // Other returned values depend on detected chunk type:
// data or control, active or temporary: // data or control, active or temporary:
// //
// data chunk - the returned chunkNo is non-negative and ctrlType is "" // data chunk - the returned chunkNo is non-negative and ctrlType is ""
// control chunk - the chunkNo is -1 and ctrlType is non-empty string // control chunk - the chunkNo is -1 and ctrlType is a non-empty string
// active chunk - the returned xactNo is -1 // active chunk - the returned xactID is ""
// temporary chunk - the xactNo is non-negative integer // temporary chunk - the xactID is a non-empty string
func (f *Fs) parseChunkName(filePath string) (mainPath string, chunkNo int, ctrlType string, xactNo int64) { func (f *Fs) parseChunkName(filePath string) (parentPath string, chunkNo int, ctrlType, xactID string) {
dir, name := path.Split(filePath) dir, name := path.Split(filePath)
match := f.nameRegexp.FindStringSubmatch(name) match := f.nameRegexp.FindStringSubmatch(name)
if match == nil || match[1] == "" { if match == nil || match[1] == "" {
return "", -1, "", -1 return "", -1, "", ""
} }
var err error var err error
@@ -494,19 +514,26 @@ func (f *Fs) parseChunkName(filePath string) (mainPath string, chunkNo int, ctrl
} }
if chunkNo -= f.opt.StartFrom; chunkNo < 0 { if chunkNo -= f.opt.StartFrom; chunkNo < 0 {
fs.Infof(f, "invalid data chunk number in file %q", name) fs.Infof(f, "invalid data chunk number in file %q", name)
return "", -1, "", -1 return "", -1, "", ""
} }
} }
xactNo = -1
if match[4] != "" { if match[4] != "" {
if xactNo, err = strconv.ParseInt(match[4], 10, 64); err != nil || xactNo < 0 { xactID = match[4]
fs.Infof(f, "invalid transaction number in file %q", name) }
return "", -1, "", -1 if match[5] != "" {
// old-style temporary suffix
number, err := strconv.ParseInt(match[5], 10, 64)
if err != nil || number < 0 {
fs.Infof(f, "invalid old-style transaction number in file %q", name)
return "", -1, "", ""
} }
// convert old-style transaction number to base-36 transaction ID
xactID = fmt.Sprintf(tempSuffixFormat, strconv.FormatInt(number, 36))
xactID = xactID[1:] // strip leading underscore
} }
mainPath = dir + match[1] parentPath = dir + match[1]
ctrlType = match[3] ctrlType = match[3]
return return
} }
@@ -514,17 +541,74 @@ func (f *Fs) parseChunkName(filePath string) (mainPath string, chunkNo int, ctrl
// forbidChunk prints error message or raises error if file is chunk. // forbidChunk prints error message or raises error if file is chunk.
// First argument sets log prefix, use `false` to suppress message. // First argument sets log prefix, use `false` to suppress message.
func (f *Fs) forbidChunk(o interface{}, filePath string) error { func (f *Fs) forbidChunk(o interface{}, filePath string) error {
if mainPath, _, _, _ := f.parseChunkName(filePath); mainPath != "" { if parentPath, _, _, _ := f.parseChunkName(filePath); parentPath != "" {
if f.opt.FailHard { if f.opt.FailHard {
return fmt.Errorf("chunk overlap with %q", mainPath) return fmt.Errorf("chunk overlap with %q", parentPath)
} }
if boolVal, isBool := o.(bool); !isBool || boolVal { if boolVal, isBool := o.(bool); !isBool || boolVal {
fs.Errorf(o, "chunk overlap with %q", mainPath) fs.Errorf(o, "chunk overlap with %q", parentPath)
} }
} }
return nil return nil
} }
// newXactID produces a sufficiently random transaction identifier.
//
// The temporary suffix mask allows identifiers consisting of 4-9
// base-36 digits (ie. digits 0-9 or lowercase letters a-z).
// The identifiers must be unique between transactions running on
// the single file in parallel.
//
// Currently the function produces 6-character identifiers.
// Together with underscore this makes a 7-character temporary suffix.
//
// The first 4 characters isolate groups of transactions by time intervals.
// The maximum length of interval is base-36 "zzzz" ie. 1,679,615 seconds.
// The function rather takes a maximum prime closest to this number
// (see https://primes.utm.edu) as the interval length to better safeguard
// against repeating pseudo-random sequences in cases when rclone is
// invoked from a periodic scheduler like unix cron.
// Thus, the interval is slightly more than 19 days 10 hours 33 minutes.
//
// The remaining 2 base-36 digits (in the range from 0 to 1295 inclusive)
// are taken from the local random source.
// This provides about 0.1% collision probability for two parallel
// operations started at the same second and working on the same file.
//
// Non-empty filePath argument enables probing for existing temporary chunk
// to further eliminate collisions.
func (f *Fs) newXactID(ctx context.Context, filePath string) (xactID string, err error) {
const closestPrimeZzzzSeconds = 1679609
const maxTwoBase36Digits = 1295
unixSec := time.Now().Unix()
if unixSec < 0 {
unixSec = -unixSec // unlikely but the number must be positive
}
circleSec := unixSec % closestPrimeZzzzSeconds
first4chars := strconv.FormatInt(circleSec, 36)
for tries := 0; tries < maxTransactionProbes; tries++ {
f.xactIDMutex.Lock()
randomness := f.xactIDRand.Int63n(maxTwoBase36Digits + 1)
f.xactIDMutex.Unlock()
last2chars := strconv.FormatInt(randomness, 36)
xactID = fmt.Sprintf("%04s%02s", first4chars, last2chars)
if filePath == "" {
return
}
probeChunk := f.makeChunkName(filePath, 0, "", xactID)
_, probeErr := f.base.NewObject(ctx, probeChunk)
if probeErr != nil {
return
}
}
return "", fmt.Errorf("can't setup transaction for %s", filePath)
}
// List the objects and directories in dir into entries. // List the objects and directories in dir into entries.
// The entries can be returned in any order but should be // The entries can be returned in any order but should be
// for a complete directory. // for a complete directory.
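
As a rough illustration of the newXactID scheme described above (an editor's sketch, not part of the diff; the timestamp and random draw below are made up), the 6-character suffix is composed from a time-derived part and a random part:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	const closestPrimeZzzzSeconds = 1679609 // the prime used by newXactID, just below base-36 "zzzz"
	unixSec := int64(1578600000)            // hypothetical time.Now().Unix() value
	first4 := strconv.FormatInt(unixSec%closestPrimeZzzzSeconds, 36) // 4 time-derived chars
	last2 := strconv.FormatInt(1042, 36)                             // hypothetical draw from [0, 1295]
	xactID := fmt.Sprintf("%04s%02s", first4, last2)                 // zero-padded to 6 chars
	fmt.Println(xactID) // "v0mlsy" for these sample inputs
	// a temporary data chunk would then be named like "fish.chunk.003_" + xactID
}
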
@@ -602,8 +686,8 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
switch entry := dirOrObject.(type) { switch entry := dirOrObject.(type) {
case fs.Object: case fs.Object:
remote := entry.Remote() remote := entry.Remote()
if mainRemote, chunkNo, ctrlType, xactNo := f.parseChunkName(remote); mainRemote != "" { if mainRemote, chunkNo, ctrlType, xactID := f.parseChunkName(remote); mainRemote != "" {
if xactNo != -1 { if xactID != "" {
if revealHidden { if revealHidden {
fs.Infof(f, "ignore temporary chunk %q", remote) fs.Infof(f, "ignore temporary chunk %q", remote)
} }
@@ -686,7 +770,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
// //
// Please note that every NewObject invocation will scan the whole directory. // Please note that every NewObject invocation will scan the whole directory.
// Using here something like fs.DirCache might improve performance // Using here something like fs.DirCache might improve performance
// (but will make logic more complex, though). // (yet making the logic more complex).
// //
// Note that chunker prefers analyzing file names rather than reading // Note that chunker prefers analyzing file names rather than reading
// the content of meta object assuming that directory scans are fast // the content of meta object assuming that directory scans are fast
@@ -752,8 +836,8 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
if !strings.Contains(entryRemote, remote) { if !strings.Contains(entryRemote, remote) {
continue // bypass regexp to save cpu continue // bypass regexp to save cpu
} }
mainRemote, chunkNo, ctrlType, xactNo := f.parseChunkName(entryRemote) mainRemote, chunkNo, ctrlType, xactID := f.parseChunkName(entryRemote)
if mainRemote == "" || mainRemote != remote || ctrlType != "" || xactNo != -1 { if mainRemote == "" || mainRemote != remote || ctrlType != "" || xactID != "" {
continue // skip non-conforming, temporary and control chunks continue // skip non-conforming, temporary and control chunks
} }
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo) //fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
@@ -786,7 +870,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// This is either a composite object with metadata or a non-chunked // This is either a composite object with metadata or a non-chunked
// file without metadata. Validate it and update the total data size. // file without metadata. Validate it and update the total data size.
// As an optimization, skip metadata reading here - we will call // As an optimization, skip metadata reading here - we will call
// readMetadata lazily when needed. // readMetadata lazily when needed (reading can be expensive).
if err := o.validate(); err != nil { if err := o.validate(); err != nil {
return nil, err return nil, err
} }
@@ -843,14 +927,11 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote st
} }
}() }()
// Use system timer as a trivial source of transaction numbers,
// don't try hard to safeguard against chunk collisions between
// parallel transactions.
xactNo := time.Now().Unix()
if xactNo < 0 {
xactNo = -xactNo // unlikely but transaction number must be positive
}
baseRemote := remote baseRemote := remote
xactID, errXact := f.newXactID(ctx, baseRemote)
if errXact != nil {
return nil, errXact
}
// Transfer chunks data // Transfer chunks data
for c.chunkNo = 0; !c.done; c.chunkNo++ { for c.chunkNo = 0; !c.done; c.chunkNo++ {
@@ -858,7 +939,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote st
return nil, ErrChunkOverflow return nil, ErrChunkOverflow
} }
tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactNo) tempRemote := f.makeChunkName(baseRemote, c.chunkNo, "", xactID)
size := c.sizeLeft size := c.sizeLeft
if size > c.chunkSize { if size > c.chunkSize {
size = c.chunkSize size = c.chunkSize
@@ -962,7 +1043,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote st
// Rename data chunks from temporary to final names // Rename data chunks from temporary to final names
for chunkNo, chunk := range c.chunks { for chunkNo, chunk := range c.chunks {
chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", -1) chunkRemote := f.makeChunkName(baseRemote, chunkNo, "", "")
chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed) chunkMoved, errMove := f.baseMove(ctx, chunk, chunkRemote, delFailed)
if errMove != nil { if errMove != nil {
return nil, errMove return nil, errMove
@@ -1221,11 +1302,6 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return f.newObject("", o, nil), nil return f.newObject("", o, nil), nil
} }
// Precision returns the precision of this Fs
func (f *Fs) Precision() time.Duration {
return f.base.Precision()
}
// Hashes returns the supported hash sets. // Hashes returns the supported hash sets.
// Chunker advertises a hash type if and only if it can be calculated // Chunker advertises a hash type if and only if it can be calculated
// for files of any size, non-chunked or composite. // for files of any size, non-chunked or composite.
@@ -1613,8 +1689,8 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
wrappedNotifyFunc := func(path string, entryType fs.EntryType) { wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType) //fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject { if entryType == fs.EntryObject {
mainPath, _, _, xactNo := f.parseChunkName(path) mainPath, _, _, xactID := f.parseChunkName(path)
if mainPath != "" && xactNo == -1 { if mainPath != "" && xactID == "" {
path = mainPath path = mainPath
} }
} }
@@ -2063,7 +2139,7 @@ type metaSimpleJSON struct {
// Current implementation creates metadata in three cases: // Current implementation creates metadata in three cases:
// - for files larger than chunk size // - for files larger than chunk size
// - if file contents can be mistaken as meta object // - if file contents can be mistaken as meta object
// - if consistent hashing is on but wrapped remote can't provide given hash // - if consistent hashing is On but wrapped remote can't provide given hash
// //
func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1 string) ([]byte, error) { func marshalSimpleJSON(ctx context.Context, size int64, nChunks int, md5, sha1 string) ([]byte, error) {
version := metadataVersion version := metadataVersion
@@ -2177,6 +2253,11 @@ func (f *Fs) String() string {
return fmt.Sprintf("Chunked '%s:%s'", f.name, f.root) return fmt.Sprintf("Chunked '%s:%s'", f.name, f.root)
} }
// Precision returns the precision of this Fs
func (f *Fs) Precision() time.Duration {
return f.base.Precision()
}
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)

View File

@@ -64,35 +64,40 @@ func testChunkNameFormat(t *testing.T, f *Fs) {
assert.Error(t, err) assert.Error(t, err)
} }
assertMakeName := func(wantChunkName, mainName string, chunkNo int, ctrlType string, xactNo int64) { assertMakeName := func(wantChunkName, mainName string, chunkNo int, ctrlType, xactID string) {
gotChunkName := f.makeChunkName(mainName, chunkNo, ctrlType, xactNo) gotChunkName := ""
assert.Equal(t, wantChunkName, gotChunkName) assert.NotPanics(t, func() {
gotChunkName = f.makeChunkName(mainName, chunkNo, ctrlType, xactID)
}, "makeChunkName(%q,%d,%q,%q) must not panic", mainName, chunkNo, ctrlType, xactID)
if gotChunkName != "" {
assert.Equal(t, wantChunkName, gotChunkName)
}
} }
assertMakeNamePanics := func(mainName string, chunkNo int, ctrlType string, xactNo int64) { assertMakeNamePanics := func(mainName string, chunkNo int, ctrlType, xactID string) {
assert.Panics(t, func() { assert.Panics(t, func() {
_ = f.makeChunkName(mainName, chunkNo, ctrlType, xactNo) _ = f.makeChunkName(mainName, chunkNo, ctrlType, xactID)
}, "makeChunkName(%q,%d,%q,%d) should panic", mainName, chunkNo, ctrlType, xactNo) }, "makeChunkName(%q,%d,%q,%q) should panic", mainName, chunkNo, ctrlType, xactID)
} }
assertParseName := func(fileName, wantMainName string, wantChunkNo int, wantCtrlType string, wantXactNo int64) { assertParseName := func(fileName, wantMainName string, wantChunkNo int, wantCtrlType, wantXactID string) {
gotMainName, gotChunkNo, gotCtrlType, gotXactNo := f.parseChunkName(fileName) gotMainName, gotChunkNo, gotCtrlType, gotXactID := f.parseChunkName(fileName)
assert.Equal(t, wantMainName, gotMainName) assert.Equal(t, wantMainName, gotMainName)
assert.Equal(t, wantChunkNo, gotChunkNo) assert.Equal(t, wantChunkNo, gotChunkNo)
assert.Equal(t, wantCtrlType, gotCtrlType) assert.Equal(t, wantCtrlType, gotCtrlType)
assert.Equal(t, wantXactNo, gotXactNo) assert.Equal(t, wantXactID, gotXactID)
} }
const newFormatSupported = false // support for patterns not starting with base name (*) const newFormatSupported = false // support for patterns not starting with base name (*)
// valid formats // valid formats
assertFormat(`*.rclone_chunk.###`, `%s.rclone_chunk.%03d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]{3,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*.rclone_chunk.###`, `%s.rclone_chunk.%03d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*.rclone_chunk.#`, `%s.rclone_chunk.%d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*.rclone_chunk.#`, `%s.rclone_chunk.%d`, `%s.rclone_chunk._%s`, `^(.+?)\.rclone_chunk\.(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*_chunk_#####`, `%s_chunk_%05d`, `%s_chunk__%s`, `^(.+?)_chunk_(?:([0-9]{5,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*_chunk_#####`, `%s_chunk_%05d`, `%s_chunk__%s`, `^(.+?)_chunk_(?:([0-9]{5,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*-chunk-#`, `%s-chunk-%d`, `%s-chunk-_%s`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*-chunk-#`, `%s-chunk-%d`, `%s-chunk-_%s`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
assertFormat(`*-chunk-#-%^$()[]{}.+-!?:\`, `%s-chunk-%d-%%^$()[]{}.+-!?:\`, `%s-chunk-_%s-%%^$()[]{}.+-!?:\`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z]{3,9}))-%\^\$\(\)\[\]\{\}\.\+-!\?:\\(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*-chunk-#-%^$()[]{}.+-!?:\`, `%s-chunk-%d-%%^$()[]{}.+-!?:\`, `%s-chunk-_%s-%%^$()[]{}.+-!?:\`, `^(.+?)-chunk-(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))-%\^\$\(\)\[\]\{\}\.\+-!\?:\\(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
if newFormatSupported { if newFormatSupported {
assertFormat(`_*-chunk-##,`, `_%s-chunk-%02d,`, `_%s-chunk-_%s,`, `^_(.+?)-chunk-(?:([0-9]{2,})|_([a-z]{3,9})),(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`_*-chunk-##,`, `_%s-chunk-%02d,`, `_%s-chunk-_%s,`, `^_(.+?)-chunk-(?:([0-9]{2,})|_([a-z][a-z0-9]{2,6})),(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
} }
// invalid formats // invalid formats
@@ -111,142 +116,223 @@ func testChunkNameFormat(t *testing.T, f *Fs) {
// quick tests // quick tests
if newFormatSupported { if newFormatSupported {
assertFormat(`part_*_#`, `part_%s_%d`, `part_%s__%s`, `^part_(.+?)_(?:([0-9]+)|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`part_*_#`, `part_%s_%d`, `part_%s__%s`, `^part_(.+?)_(?:([0-9]+)|_([a-z][a-z0-9]{2,6}))(?:_([0-9][0-9a-z]{3,8})\.\.tmp_([0-9]{10,13}))?$`)
f.opt.StartFrom = 1 f.opt.StartFrom = 1
assertMakeName(`part_fish_1`, "fish", 0, "", -1) assertMakeName(`part_fish_1`, "fish", 0, "", "")
assertParseName(`part_fish_43`, "fish", 42, "", -1) assertParseName(`part_fish_43`, "fish", 42, "", "")
assertMakeName(`part_fish_3..tmp_0000000004`, "fish", 2, "", 4) assertMakeName(`part_fish__locks`, "fish", -2, "locks", "")
assertParseName(`part_fish_4..tmp_0000000005`, "fish", 3, "", 5) assertParseName(`part_fish__locks`, "fish", -1, "locks", "")
assertMakeName(`part_fish__locks`, "fish", -2, "locks", -3) assertMakeName(`part_fish__x2y`, "fish", -2, "x2y", "")
assertParseName(`part_fish__locks`, "fish", -1, "locks", -1) assertParseName(`part_fish__x2y`, "fish", -1, "x2y", "")
assertMakeName(`part_fish__blockinfo..tmp_1234567890123456789`, "fish", -3, "blockinfo", 1234567890123456789) assertMakeName(`part_fish_3_0004`, "fish", 2, "", "4")
assertParseName(`part_fish__blockinfo..tmp_1234567890123456789`, "fish", -1, "blockinfo", 1234567890123456789) assertParseName(`part_fish_4_0005`, "fish", 3, "", "0005")
assertMakeName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -3, "blkinfo", "jj5fvo3wr")
assertParseName(`part_fish__blkinfo_zz9fvo3wr`, "fish", -1, "blkinfo", "zz9fvo3wr")
// old-style temporary suffix (parse only)
assertParseName(`part_fish_4..tmp_0000000011`, "fish", 3, "", "000b")
assertParseName(`part_fish__blkinfo_jj5fvo3wr`, "fish", -1, "blkinfo", "jj5fvo3wr")
} }
// prepare format for long tests // prepare format for long tests
assertFormat(`*.chunk.###`, `%s.chunk.%03d`, `%s.chunk._%s`, `^(.+?)\.chunk\.(?:([0-9]{3,})|_([a-z]{3,9}))(?:\.\.tmp_([0-9]{10,19}))?$`) assertFormat(`*.chunk.###`, `%s.chunk.%03d`, `%s.chunk._%s`, `^(.+?)\.chunk\.(?:([0-9]{3,})|_([a-z][a-z0-9]{2,6}))(?:_([0-9a-z]{4,9})|\.\.tmp_([0-9]{10,13}))?$`)
f.opt.StartFrom = 2 f.opt.StartFrom = 2
// valid data chunks // valid data chunks
assertMakeName(`fish.chunk.003`, "fish", 1, "", -1) assertMakeName(`fish.chunk.003`, "fish", 1, "", "")
assertMakeName(`fish.chunk.011..tmp_0000054321`, "fish", 9, "", 54321) assertParseName(`fish.chunk.003`, "fish", 1, "", "")
assertMakeName(`fish.chunk.011..tmp_1234567890`, "fish", 9, "", 1234567890) assertMakeName(`fish.chunk.021`, "fish", 19, "", "")
assertMakeName(`fish.chunk.1916..tmp_123456789012345`, "fish", 1914, "", 123456789012345) assertParseName(`fish.chunk.021`, "fish", 19, "", "")
assertParseName(`fish.chunk.003`, "fish", 1, "", -1) // valid temporary data chunks
assertParseName(`fish.chunk.004..tmp_0000000021`, "fish", 2, "", 21) assertMakeName(`fish.chunk.011_4321`, "fish", 9, "", "4321")
assertParseName(`fish.chunk.021`, "fish", 19, "", -1) assertParseName(`fish.chunk.011_4321`, "fish", 9, "", "4321")
assertParseName(`fish.chunk.323..tmp_1234567890123456789`, "fish", 321, "", 1234567890123456789) assertMakeName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc")
assertParseName(`fish.chunk.011_00bc`, "fish", 9, "", "00bc")
assertMakeName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr")
assertParseName(`fish.chunk.1916_5jjfvo3wr`, "fish", 1914, "", "5jjfvo3wr")
assertMakeName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr")
assertParseName(`fish.chunk.1917_zz9fvo3wr`, "fish", 1915, "", "zz9fvo3wr")
// valid temporary data chunks (old temporary suffix, only parse)
assertParseName(`fish.chunk.004..tmp_0000000047`, "fish", 2, "", "001b")
assertParseName(`fish.chunk.323..tmp_9994567890123`, "fish", 321, "", "3jjfvo3wr")
// parsing invalid data chunk names // parsing invalid data chunk names
assertParseName(`fish.chunk.3`, "", -1, "", -1) assertParseName(`fish.chunk.3`, "", -1, "", "")
assertParseName(`fish.chunk.001`, "", -1, "", -1) assertParseName(`fish.chunk.001`, "", -1, "", "")
assertParseName(`fish.chunk.21`, "", -1, "", -1) assertParseName(`fish.chunk.21`, "", -1, "", "")
assertParseName(`fish.chunk.-21`, "", -1, "", -1) assertParseName(`fish.chunk.-21`, "", -1, "", "")
assertParseName(`fish.chunk.004.tmp_0000000021`, "", -1, "", -1) assertParseName(`fish.chunk.004abcd`, "", -1, "", "") // missing underscore delimiter
assertParseName(`fish.chunk.003..tmp_123456789`, "", -1, "", -1) assertParseName(`fish.chunk.004__1234`, "", -1, "", "") // extra underscore delimiter
assertParseName(`fish.chunk.003..tmp_012345678901234567890123456789`, "", -1, "", -1) assertParseName(`fish.chunk.004_123`, "", -1, "", "") // too short temporary suffix
assertParseName(`fish.chunk.003..tmp_-1`, "", -1, "", -1) assertParseName(`fish.chunk.004_1234567890`, "", -1, "", "") // too long temporary suffix
assertParseName(`fish.chunk.004_-1234`, "", -1, "", "") // temporary suffix must be positive
assertParseName(`fish.chunk.004_123E`, "", -1, "", "") // uppercase not allowed
assertParseName(`fish.chunk.004_12.3`, "", -1, "", "") // punctuation not allowed
// parsing invalid data chunk names (old temporary suffix)
assertParseName(`fish.chunk.004.tmp_0000000021`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_123456789`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_012345678901234567890123456789`, "", -1, "", "")
assertParseName(`fish.chunk.323..tmp_12345678901234`, "", -1, "", "")
assertParseName(`fish.chunk.003..tmp_-1`, "", -1, "", "")
// valid control chunks // valid control chunks
assertMakeName(`fish.chunk._info`, "fish", -1, "info", -1) assertMakeName(`fish.chunk._info`, "fish", -1, "info", "")
assertMakeName(`fish.chunk._locks`, "fish", -2, "locks", -1) assertMakeName(`fish.chunk._locks`, "fish", -2, "locks", "")
assertMakeName(`fish.chunk._blockinfo`, "fish", -3, "blockinfo", -1) assertMakeName(`fish.chunk._blkinfo`, "fish", -3, "blkinfo", "")
assertMakeName(`fish.chunk._x2y`, "fish", -4, "x2y", "")
assertParseName(`fish.chunk._info`, "fish", -1, "info", -1) assertParseName(`fish.chunk._info`, "fish", -1, "info", "")
assertParseName(`fish.chunk._locks`, "fish", -1, "locks", -1) assertParseName(`fish.chunk._locks`, "fish", -1, "locks", "")
assertParseName(`fish.chunk._blockinfo`, "fish", -1, "blockinfo", -1) assertParseName(`fish.chunk._blkinfo`, "fish", -1, "blkinfo", "")
assertParseName(`fish.chunk._x2y`, "fish", -1, "x2y", "")
// valid temporary control chunks // valid temporary control chunks
assertMakeName(`fish.chunk._info..tmp_0000000021`, "fish", -1, "info", 21) assertMakeName(`fish.chunk._info_0001`, "fish", -1, "info", "1")
assertMakeName(`fish.chunk._locks..tmp_0000054321`, "fish", -2, "locks", 54321) assertMakeName(`fish.chunk._locks_4321`, "fish", -2, "locks", "4321")
assertMakeName(`fish.chunk._uploads..tmp_0000000000`, "fish", -3, "uploads", 0) assertMakeName(`fish.chunk._uploads_abcd`, "fish", -3, "uploads", "abcd")
assertMakeName(`fish.chunk._blockinfo..tmp_1234567890123456789`, "fish", -4, "blockinfo", 1234567890123456789) assertMakeName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -4, "blkinfo", "xyzabcdef")
assertMakeName(`fish.chunk._x2y_1aaa`, "fish", -5, "x2y", "1aaa")
assertParseName(`fish.chunk._info..tmp_0000000021`, "fish", -1, "info", 21) assertParseName(`fish.chunk._info_0001`, "fish", -1, "info", "0001")
assertParseName(`fish.chunk._locks..tmp_0000054321`, "fish", -1, "locks", 54321) assertParseName(`fish.chunk._locks_4321`, "fish", -1, "locks", "4321")
assertParseName(`fish.chunk._uploads..tmp_0000000000`, "fish", -1, "uploads", 0) assertParseName(`fish.chunk._uploads_9abc`, "fish", -1, "uploads", "9abc")
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789`, "fish", -1, "blockinfo", 1234567890123456789) assertParseName(`fish.chunk._blkinfo_xyzabcdef`, "fish", -1, "blkinfo", "xyzabcdef")
assertParseName(`fish.chunk._x2y_1aaa`, "fish", -1, "x2y", "1aaa")
// valid temporary control chunks (old temporary suffix, parse only)
assertParseName(`fish.chunk._info..tmp_0000000047`, "fish", -1, "info", "001b")
assertParseName(`fish.chunk._locks..tmp_0000054321`, "fish", -1, "locks", "15wx")
assertParseName(`fish.chunk._uploads..tmp_0000000000`, "fish", -1, "uploads", "0000")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123`, "fish", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._x2y..tmp_0000000000`, "fish", -1, "x2y", "0000")
// parsing invalid control chunk names // parsing invalid control chunk names
assertParseName(`fish.chunk.info`, "", -1, "", -1) assertParseName(`fish.chunk.metadata`, "", -1, "", "") // must be prepended by underscore
assertParseName(`fish.chunk.locks`, "", -1, "", -1) assertParseName(`fish.chunk.info`, "", -1, "", "")
assertParseName(`fish.chunk.uploads`, "", -1, "", -1) assertParseName(`fish.chunk.locks`, "", -1, "", "")
assertParseName(`fish.chunk.blockinfo`, "", -1, "", -1) assertParseName(`fish.chunk.uploads`, "", -1, "", "")
assertParseName(`fish.chunk._os`, "", -1, "", -1) assertParseName(`fish.chunk._os`, "", -1, "", "") // too short
assertParseName(`fish.chunk._futuredata`, "", -1, "", -1) assertParseName(`fish.chunk._metadata`, "", -1, "", "") // too long
assertParseName(`fish.chunk._me_ta`, "", -1, "", -1) assertParseName(`fish.chunk._blockinfo`, "", -1, "", "") // way too long
assertParseName(`fish.chunk._in-fo`, "", -1, "", -1) assertParseName(`fish.chunk._4me`, "", -1, "", "") // cannot start with digit
assertParseName(`fish.chunk._.bin`, "", -1, "", -1) assertParseName(`fish.chunk._567`, "", -1, "", "") // cannot be all digits
assertParseName(`fish.chunk._me_ta`, "", -1, "", "") // punctuation not allowed
assertParseName(`fish.chunk._in-fo`, "", -1, "", "")
assertParseName(`fish.chunk._.bin`, "", -1, "", "")
assertParseName(`fish.chunk._.2xy`, "", -1, "", "")
assertParseName(`fish.chunk._locks..tmp_123456789`, "", -1, "", -1) // parsing invalid temporary control chunks
assertParseName(`fish.chunk._meta..tmp_-1`, "", -1, "", -1) assertParseName(`fish.chunk._blkinfo1234`, "", -1, "", "") // missing underscore delimiter
assertParseName(`fish.chunk._blockinfo..tmp_012345678901234567890123456789`, "", -1, "", -1) assertParseName(`fish.chunk._info__1234`, "", -1, "", "") // extra underscore delimiter
assertParseName(`fish.chunk._info_123`, "", -1, "", "") // too short temporary suffix
assertParseName(`fish.chunk._info_1234567890`, "", -1, "", "") // too long temporary suffix
assertParseName(`fish.chunk._info_-1234`, "", -1, "", "") // temporary suffix must be positive
assertParseName(`fish.chunk._info_123E`, "", -1, "", "") // uppercase not allowed
assertParseName(`fish.chunk._info_12.3`, "", -1, "", "") // punctuation not allowed
assertParseName(`fish.chunk._locks..tmp_123456789`, "", -1, "", "")
assertParseName(`fish.chunk._meta..tmp_-1`, "", -1, "", "")
assertParseName(`fish.chunk._blockinfo..tmp_012345678901234567890123456789`, "", -1, "", "")
// short control chunk names: 3 letters ok, 1-2 letters not allowed // short control chunk names: 3 letters ok, 1-2 letters not allowed
assertMakeName(`fish.chunk._ext`, "fish", -1, "ext", -1) assertMakeName(`fish.chunk._ext`, "fish", -1, "ext", "")
assertMakeName(`fish.chunk._ext..tmp_0000000021`, "fish", -1, "ext", 21) assertParseName(`fish.chunk._int`, "fish", -1, "int", "")
assertParseName(`fish.chunk._int`, "fish", -1, "int", -1)
assertParseName(`fish.chunk._int..tmp_0000000021`, "fish", -1, "int", 21) assertMakeNamePanics("fish", -1, "in", "")
assertMakeNamePanics("fish", -1, "in", -1) assertMakeNamePanics("fish", -1, "up", "4")
assertMakeNamePanics("fish", -1, "up", 4) assertMakeNamePanics("fish", -1, "x", "")
assertMakeNamePanics("fish", -1, "x", -1) assertMakeNamePanics("fish", -1, "c", "1z")
assertMakeNamePanics("fish", -1, "c", 4)
assertMakeName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0")
assertMakeName(`fish.chunk._ext_0026`, "fish", -1, "ext", "26")
assertMakeName(`fish.chunk._int_0abc`, "fish", -1, "int", "abc")
assertMakeName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz")
assertMakeName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
assertMakeName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
assertParseName(`fish.chunk._ext_0000`, "fish", -1, "ext", "0000")
assertParseName(`fish.chunk._ext_0026`, "fish", -1, "ext", "0026")
assertParseName(`fish.chunk._int_0abc`, "fish", -1, "int", "0abc")
assertParseName(`fish.chunk._int_9xyz`, "fish", -1, "int", "9xyz")
assertParseName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
assertParseName(`fish.chunk._out_jj5fvo3wr`, "fish", -1, "out", "jj5fvo3wr")
// base file name can sometimes look like a valid chunk name // base file name can sometimes look like a valid chunk name
assertParseName(`fish.chunk.003.chunk.004`, "fish.chunk.003", 2, "", -1) assertParseName(`fish.chunk.003.chunk.004`, "fish.chunk.003", 2, "", "")
assertParseName(`fish.chunk.003.chunk.005..tmp_0000000021`, "fish.chunk.003", 3, "", 21) assertParseName(`fish.chunk.003.chunk._info`, "fish.chunk.003", -1, "info", "")
assertParseName(`fish.chunk.003.chunk._info`, "fish.chunk.003", -1, "info", -1) assertParseName(`fish.chunk.003.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk.003.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk.003", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk.003.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk.003.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.004`, "fish.chunk.004..tmp_0000000021", 2, "", -1) assertParseName(`fish.chunk._info.chunk.004`, "fish.chunk._info", 2, "", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk.005..tmp_0000000021`, "fish.chunk.004..tmp_0000000021", 3, "", 21) assertParseName(`fish.chunk._info.chunk._info`, "fish.chunk._info", -1, "info", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._info`, "fish.chunk.004..tmp_0000000021", -1, "info", -1) assertParseName(`fish.chunk._info.chunk._info.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk.004..tmp_0000000021", -1, "blockinfo", 1234567890123456789)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._Meta`, "", -1, "", -1)
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk._info.chunk.004`, "fish.chunk._info", 2, "", -1) // base file name looking like a valid chunk name (old temporary suffix)
assertParseName(`fish.chunk._info.chunk.005..tmp_0000000021`, "fish.chunk._info", 3, "", 21) assertParseName(`fish.chunk.003.chunk.005..tmp_0000000022`, "fish.chunk.003", 3, "", "000m")
assertParseName(`fish.chunk._info.chunk._info`, "fish.chunk._info", -1, "info", -1) assertParseName(`fish.chunk.003.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._info.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk._info", -1, "blockinfo", 1234567890123456789) assertParseName(`fish.chunk._info.chunk.005..tmp_0000000023`, "fish.chunk._info", 3, "", "000n")
assertParseName(`fish.chunk._info.chunk._info.chunk._Meta`, "", -1, "", -1) assertParseName(`fish.chunk._info.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._info.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk.004`, "fish.chunk._blockinfo..tmp_1234567890123456789", 2, "", -1) assertParseName(`fish.chunk.003.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.003", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk.005..tmp_0000000021`, "fish.chunk._blockinfo..tmp_1234567890123456789", 3, "", 21) assertParseName(`fish.chunk._info.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._info", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info`, "fish.chunk._blockinfo..tmp_1234567890123456789", -1, "info", -1)
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._blockinfo..tmp_1234567890123456789`, "fish.chunk._blockinfo..tmp_1234567890123456789", -1, "blockinfo", 1234567890123456789) assertParseName(`fish.chunk.004..tmp_0000000021.chunk.004`, "fish.chunk.004..tmp_0000000021", 2, "", "")
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info.chunk._Meta`, "", -1, "", -1) assertParseName(`fish.chunk.004..tmp_0000000021.chunk.005..tmp_0000000025`, "fish.chunk.004..tmp_0000000021", 3, "", "000p")
assertParseName(`fish.chunk._blockinfo..tmp_1234567890123456789.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", -1) assertParseName(`fish.chunk.004..tmp_0000000021.chunk._info`, "fish.chunk.004..tmp_0000000021", -1, "info", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._blkinfo..tmp_9994567890123`, "fish.chunk.004..tmp_0000000021", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk.004..tmp_0000000021.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.004`, "fish.chunk._blkinfo..tmp_9994567890123", 2, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk.005..tmp_0000000026`, "fish.chunk._blkinfo..tmp_9994567890123", 3, "", "000q")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "info", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_9994567890123", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_9994567890123.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.004`, "fish.chunk._blkinfo..tmp_1234567890123456789", 2, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk.005..tmp_0000000022`, "fish.chunk._blkinfo..tmp_1234567890123456789", 3, "", "000m")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "info", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._blkinfo..tmp_9994567890123`, "fish.chunk._blkinfo..tmp_1234567890123456789", -1, "blkinfo", "3jjfvo3wr")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._Meta`, "", -1, "", "")
assertParseName(`fish.chunk._blkinfo..tmp_1234567890123456789.chunk._info.chunk._x..tmp_0000054321`, "", -1, "", "")
// attempts to make invalid chunk names // attempts to make invalid chunk names
assertMakeNamePanics("fish", -1, "", -1) // neither data nor control assertMakeNamePanics("fish", -1, "", "") // neither data nor control
assertMakeNamePanics("fish", 0, "info", -1) // both data and control assertMakeNamePanics("fish", 0, "info", "") // both data and control
assertMakeNamePanics("fish", -1, "futuredata", -1) // control type too long assertMakeNamePanics("fish", -1, "metadata", "") // control type too long
assertMakeNamePanics("fish", -1, "123", -1) // digits not allowed assertMakeNamePanics("fish", -1, "blockinfo", "") // control type way too long
assertMakeNamePanics("fish", -1, "Meta", -1) // only lower case letters allowed assertMakeNamePanics("fish", -1, "2xy", "") // first digit not allowed
assertMakeNamePanics("fish", -1, "in-fo", -1) // punctuation not allowed assertMakeNamePanics("fish", -1, "123", "") // all digits not allowed
assertMakeNamePanics("fish", -1, "_info", -1) assertMakeNamePanics("fish", -1, "Meta", "") // only lower case letters allowed
assertMakeNamePanics("fish", -1, "info_", -1) assertMakeNamePanics("fish", -1, "in-fo", "") // punctuation not allowed
assertMakeNamePanics("fish", -2, ".bind", -3) assertMakeNamePanics("fish", -1, "_info", "")
assertMakeNamePanics("fish", -2, "bind.", -3) assertMakeNamePanics("fish", -1, "info_", "")
assertMakeNamePanics("fish", -2, ".bind", "")
assertMakeNamePanics("fish", -2, "bind.", "")
assertMakeNamePanics("fish", -1, "", 1) // neither data nor control assertMakeNamePanics("fish", -1, "", "1") // neither data nor control
assertMakeNamePanics("fish", 0, "info", 12) // both data and control assertMakeNamePanics("fish", 0, "info", "23") // both data and control
assertMakeNamePanics("fish", -1, "futuredata", 45) // control type too long assertMakeNamePanics("fish", -1, "metadata", "45") // control type too long
assertMakeNamePanics("fish", -1, "123", 123) // digits not allowed assertMakeNamePanics("fish", -1, "blockinfo", "7") // control type way too long
assertMakeNamePanics("fish", -1, "Meta", 456) // only lower case letters allowed assertMakeNamePanics("fish", -1, "2xy", "abc") // first digit not allowed
assertMakeNamePanics("fish", -1, "in-fo", 321) // punctuation not allowed assertMakeNamePanics("fish", -1, "123", "def") // all digits not allowed
assertMakeNamePanics("fish", -1, "_info", 15678) assertMakeNamePanics("fish", -1, "Meta", "mnk") // only lower case letters allowed
assertMakeNamePanics("fish", -1, "info_", 999) assertMakeNamePanics("fish", -1, "in-fo", "xyz") // punctuation not allowed
assertMakeNamePanics("fish", -2, ".bind", 0) assertMakeNamePanics("fish", -1, "_info", "5678")
assertMakeNamePanics("fish", -2, "bind.", 0) assertMakeNamePanics("fish", -1, "info_", "999")
assertMakeNamePanics("fish", -2, ".bind", "0")
assertMakeNamePanics("fish", -2, "bind.", "0")
assertMakeNamePanics("fish", 0, "", "1234567890") // temporary suffix too long
assertMakeNamePanics("fish", 0, "", "123F4") // uppercase not allowed
assertMakeNamePanics("fish", 0, "", "123.") // punctuation not allowed
assertMakeNamePanics("fish", 0, "", "_123")
} }
func testSmallFileInternals(t *testing.T, f *Fs) { func testSmallFileInternals(t *testing.T, f *Fs) {
@@ -383,7 +469,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
billyObj := newFile("billy") billyObj := newFile("billy")
billyChunkName := func(chunkNo int) string { billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", -1) return f.makeChunkName(billyObj.Remote(), chunkNo, "", "")
} }
err := f.Mkdir(ctx, billyChunkName(1)) err := f.Mkdir(ctx, billyChunkName(1))
@@ -433,7 +519,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// recreate billy in case it was anyhow corrupted // recreate billy in case it was anyhow corrupted
willyObj := newFile("willy") willyObj := newFile("willy")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", -1) willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "")
f.opt.FailHard = false f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName) willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true f.opt.FailHard = true
@@ -484,7 +570,7 @@ func testChunkNumberOverflow(t *testing.T, f *Fs) {
f.opt.FailHard = false f.opt.FailHard = false
file, fileName := newFile(f, "wreaker") file, fileName := newFile(f, "wreaker")
wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", -1)) wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", ""))
f.opt.FailHard = false f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision()) fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
@@ -532,7 +618,7 @@ func testMetadataInput(t *testing.T, f *Fs) {
filename := path.Join(dir, name) filename := path.Join(dir, name)
require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct") require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct")
part := putFile(f.base, f.makeChunkName(filename, 0, "", -1), "oops", "", true) part := putFile(f.base, f.makeChunkName(filename, 0, "", ""), "oops", "", true)
_ = putFile(f, filename, contents, "upload "+description, false) _ = putFile(f, filename, contents, "upload "+description, false)
obj, err := f.NewObject(ctx, filename) obj, err := f.NewObject(ctx, filename)

View File

@@ -63,6 +63,7 @@ func init() {
Name: "password", Name: "password",
Help: "Password or pass phrase for encryption.", Help: "Password or pass phrase for encryption.",
IsPassword: true, IsPassword: true,
Required: true,
}, { }, {
Name: "password2", Name: "password2",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.", Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",

View File

@@ -326,6 +326,17 @@ Photos folder" option in your google drive settings. You can then copy
or move the photos locally and use the date the image was taken or move the photos locally and use the date the image was taken
(created) set as the modification date.`, (created) set as the modification date.`,
Advanced: true, Advanced: true,
}, {
Name: "use_shared_date",
Default: false,
Help: `Use date file was shared instead of modified date.
Note that, as with "--drive-use-created-date", this flag may have
unexpected consequences when uploading/downloading files.
If both this flag and "--drive-use-created-date" are set, the created
date is used.`,
Advanced: true,
}, { }, {
Name: "list_chunk", Name: "list_chunk",
Default: 1000, Default: 1000,
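
Illustrative usage of the new option (an editor's note, not part of the diff): following rclone's backend flag naming, as with the --drive-use-created-date flag referenced in the help text, use_shared_date should surface as --drive-use-shared-date, so a listing keyed on the share date would look something like: rclone lsl --drive-use-shared-date remote:. Per the help text above, --drive-use-created-date still takes precedence if both flags are set.
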
@@ -463,6 +474,7 @@ type Options struct {
ImportExtensions string `config:"import_formats"` ImportExtensions string `config:"import_formats"`
AllowImportNameChange bool `config:"allow_import_name_change"` AllowImportNameChange bool `config:"allow_import_name_change"`
UseCreatedDate bool `config:"use_created_date"` UseCreatedDate bool `config:"use_created_date"`
UseSharedDate bool `config:"use_shared_date"`
ListChunk int64 `config:"list_chunk"` ListChunk int64 `config:"list_chunk"`
Impersonate string `config:"impersonate"` Impersonate string `config:"impersonate"`
AlternateExport bool `config:"alternate_export"` AlternateExport bool `config:"alternate_export"`
@@ -694,6 +706,9 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
if f.opt.AuthOwnerOnly { if f.opt.AuthOwnerOnly {
fields += ",owners" fields += ",owners"
} }
if f.opt.UseSharedDate {
fields += ",sharedWithMeTime"
}
if f.opt.SkipChecksumGphotos { if f.opt.SkipChecksumGphotos {
fields += ",spaces" fields += ",spaces"
} }
@@ -830,7 +845,7 @@ func configTeamDrive(ctx context.Context, opt *Options, m configmap.Mapper, name
} else { } else {
fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID) fmt.Printf("Change current team drive ID %q?\n", opt.TeamDriveID)
} }
if !config.Confirm() { if !config.Confirm(false) {
return nil return nil
} }
client, err := createOAuthClient(opt, name, m) client, err := createOAuthClient(opt, name, m)
@@ -1095,6 +1110,8 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
modifiedDate := info.ModifiedTime modifiedDate := info.ModifiedTime
if f.opt.UseCreatedDate { if f.opt.UseCreatedDate {
modifiedDate = info.CreatedTime modifiedDate = info.CreatedTime
} else if f.opt.UseSharedDate && info.SharedWithMeTime != "" {
modifiedDate = info.SharedWithMeTime
} }
size := info.Size size := info.Size
if f.opt.SizeAsQuota { if f.opt.SizeAsQuota {
@@ -1463,6 +1480,14 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if iErr != nil { if iErr != nil {
return nil, iErr return nil, iErr
} }
// If listing the root of a teamdrive and got no entries,
// double check we have access
if f.isTeamDrive && len(entries) == 0 && f.root == "" && dir == "" {
err = f.teamDriveOK(ctx)
if err != nil {
return nil, err
}
}
return entries, nil return entries, nil
} }
@@ -1521,15 +1546,23 @@ func (f *Fs) listRRunner(ctx context.Context, wg *sync.WaitGroup, in <-chan list
listRSlices{dirs, paths}.Sort() listRSlices{dirs, paths}.Sort()
var iErr error var iErr error
_, err := f.list(ctx, dirs, "", false, false, false, func(item *drive.File) bool { _, err := f.list(ctx, dirs, "", false, false, false, func(item *drive.File) bool {
// shared with me items have no parents when at the root
if f.opt.SharedWithMe && len(item.Parents) == 0 && len(paths) == 1 && paths[0] == "" {
item.Parents = dirs
}
for _, parent := range item.Parents { for _, parent := range item.Parents {
// only handle parents that are in the requested dirs list var i int
i := sort.SearchStrings(dirs, parent) // If only one item in paths then no need to search for the ID
if i == len(dirs) || dirs[i] != parent { // assuming google drive is doing its job properly.
continue //
// Note that we are at the root when len(paths) == 1 && paths[0] == ""
if len(paths) == 1 {
// don't check parents at root because
// - shared with me items have no parents at the root
// - if using a root alias, e.g. "root" or "appDataFolder", the ID won't match
i = 0
} else {
// only handle parents that are in the requested dirs list if not at root
i = sort.SearchStrings(dirs, parent)
if i == len(dirs) || dirs[i] != parent {
continue
}
} }
remote := path.Join(paths[i], item.Name) remote := path.Join(paths[i], item.Name)
entry, err := f.itemToDirEntry(remote, item) entry, err := f.itemToDirEntry(remote, item)
@@ -1600,6 +1633,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
out := make(chan error, fs.Config.Checkers) out := make(chan error, fs.Config.Checkers)
list := walk.NewListRHelper(callback) list := walk.NewListRHelper(callback)
overflow := []listREntry{} overflow := []listREntry{}
listed := 0
cb := func(entry fs.DirEntry) error { cb := func(entry fs.DirEntry) error {
mu.Lock() mu.Lock()
@@ -1612,6 +1646,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
overflow = append(overflow, listREntry{d.ID(), d.Remote()}) overflow = append(overflow, listREntry{d.ID(), d.Remote()})
} }
} }
listed++
return list.Add(entry) return list.Add(entry)
} }
@@ -1668,7 +1703,21 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
return err return err
} }
return list.Flush() err = list.Flush()
if err != nil {
return err
}
// If listing the root of a teamdrive and got no entries,
// double check we have access
if f.isTeamDrive && listed == 0 && f.root == "" && dir == "" {
err = f.teamDriveOK(ctx)
if err != nil {
return err
}
}
return nil
} }
// itemToDirEntry converts a drive.File to a fs.DirEntry. // itemToDirEntry converts a drive.File to a fs.DirEntry.
@@ -2041,9 +2090,30 @@ func (f *Fs) CleanUp(ctx context.Context) error {
return nil return nil
} }
// teamDriveOK checks to see if we can access the team drive
func (f *Fs) teamDriveOK(ctx context.Context) (err error) {
if !f.isTeamDrive {
return nil
}
var td *drive.Drive
err = f.pacer.Call(func() (bool, error) {
td, err = f.svc.Drives.Get(f.opt.TeamDriveID).Fields("name,id,capabilities,createdTime,restrictions").Context(ctx).Do()
return shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "failed to get Team/Shared Drive info")
}
fs.Debugf(f, "read info from team drive %q", td.Name)
return err
}
// About gets quota information // About gets quota information
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
if f.isTeamDrive { if f.isTeamDrive {
err := f.teamDriveOK(ctx)
if err != nil {
return nil, err
}
// Teamdrives don't appear to have a usage API so just return empty // Teamdrives don't appear to have a usage API so just return empty
return &fs.Usage{}, nil return &fs.Usage{}, nil
} }

View File

@@ -46,13 +46,57 @@ func (t Time) String() string { return time.Time(t).Format(timeFormat) }
// APIString returns Time string in Jottacloud API format // APIString returns Time string in Jottacloud API format
func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) } func (t Time) APIString() string { return time.Time(t).Format(apiTimeFormat) }
// LoginToken is a struct representing the login token generated in the WebUI
type LoginToken struct {
Username string `json:"username"`
Realm string `json:"realm"`
WellKnownLink string `json:"well_known_link"`
AuthToken string `json:"auth_token"`
}
// WellKnown contains some configuration parameters for setting up endpoints
type WellKnown struct {
Issuer string `json:"issuer"`
AuthorizationEndpoint string `json:"authorization_endpoint"`
TokenEndpoint string `json:"token_endpoint"`
TokenIntrospectionEndpoint string `json:"token_introspection_endpoint"`
UserinfoEndpoint string `json:"userinfo_endpoint"`
EndSessionEndpoint string `json:"end_session_endpoint"`
JwksURI string `json:"jwks_uri"`
CheckSessionIframe string `json:"check_session_iframe"`
GrantTypesSupported []string `json:"grant_types_supported"`
ResponseTypesSupported []string `json:"response_types_supported"`
SubjectTypesSupported []string `json:"subject_types_supported"`
IDTokenSigningAlgValuesSupported []string `json:"id_token_signing_alg_values_supported"`
UserinfoSigningAlgValuesSupported []string `json:"userinfo_signing_alg_values_supported"`
RequestObjectSigningAlgValuesSupported []string `json:"request_object_signing_alg_values_supported"`
ResponseModesSupported []string `json:"response_modes_supported"`
RegistrationEndpoint string `json:"registration_endpoint"`
TokenEndpointAuthMethodsSupported []string `json:"token_endpoint_auth_methods_supported"`
TokenEndpointAuthSigningAlgValuesSupported []string `json:"token_endpoint_auth_signing_alg_values_supported"`
ClaimsSupported []string `json:"claims_supported"`
ClaimTypesSupported []string `json:"claim_types_supported"`
ClaimsParameterSupported bool `json:"claims_parameter_supported"`
ScopesSupported []string `json:"scopes_supported"`
RequestParameterSupported bool `json:"request_parameter_supported"`
RequestURIParameterSupported bool `json:"request_uri_parameter_supported"`
CodeChallengeMethodsSupported []string `json:"code_challenge_methods_supported"`
TLSClientCertificateBoundAccessTokens bool `json:"tls_client_certificate_bound_access_tokens"`
IntrospectionEndpoint string `json:"introspection_endpoint"`
}
// TokenJSON is the struct representing the HTTP response from OAuth2 // TokenJSON is the struct representing the HTTP response from OAuth2
// providers returning a token in JSON form. // providers returning a token in JSON form.
type TokenJSON struct { type TokenJSON struct {
AccessToken string `json:"access_token"` AccessToken string `json:"access_token"`
TokenType string `json:"token_type"` ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number
RefreshToken string `json:"refresh_token"` RefreshExpiresIn int32 `json:"refresh_expires_in"`
ExpiresIn int32 `json:"expires_in"` // at least PayPal returns string, while most return number RefreshToken string `json:"refresh_token"`
TokenType string `json:"token_type"`
IDToken string `json:"id_token"`
NotBeforePolicy int32 `json:"not-before-policy"`
SessionState string `json:"session_state"`
Scope string `json:"scope"`
} }
// JSON structures returned by new API // JSON structures returned by new API

View File

@@ -4,12 +4,13 @@ import (
"bytes" "bytes"
"context" "context"
"crypto/md5" "crypto/md5"
"encoding/base64"
"encoding/hex" "encoding/hex"
"encoding/json"
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"log" "log"
"math/rand"
"net/http" "net/http"
"net/url" "net/url"
"os" "os"
@@ -25,7 +26,6 @@ import (
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings" "github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
@@ -41,32 +41,29 @@ const enc = encodings.JottaCloud
// Globals // Globals
const ( const (
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential decayConstant = 2 // bigger for slower decay, exponential
defaultDevice = "Jotta" defaultDevice = "Jotta"
defaultMountpoint = "Archive" defaultMountpoint = "Archive"
rootURL = "https://www.jottacloud.com/jfs/" rootURL = "https://www.jottacloud.com/jfs/"
apiURL = "https://api.jottacloud.com/" apiURL = "https://api.jottacloud.com/"
baseURL = "https://www.jottacloud.com/" baseURL = "https://www.jottacloud.com/"
tokenURL = "https://api.jottacloud.com/auth/v1/token" defaultTokenURL = "https://id.jottacloud.com/auth/realms/jottacloud/protocol/openid-connect/token"
registerURL = "https://api.jottacloud.com/auth/v1/register" cachePrefix = "rclone-jcmd5-"
cachePrefix = "rclone-jcmd5-" configDevice = "device"
rcloneClientID = "nibfk8biu12ju7hpqomr8b1e40" configMountpoint = "mountpoint"
rcloneEncryptedClientSecret = "Vp8eAv7eVElMnQwN-kgU9cbhgApNDaMqWdlDi5qFydlQoji4JBxrGMF2" configTokenURL = "tokenURL"
configClientID = "client_id" configVersion = 1
configClientSecret = "client_secret"
configDevice = "device"
configMountpoint = "mountpoint"
charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
) )
var ( var (
// Description of how to auth for this app for a personal account // Description of how to auth for this app for a personal account
oauthConfig = &oauth2.Config{ oauthConfig = &oauth2.Config{
ClientID: "jottacli",
Endpoint: oauth2.Endpoint{ Endpoint: oauth2.Endpoint{
AuthURL: tokenURL, AuthURL: defaultTokenURL,
TokenURL: tokenURL, TokenURL: defaultTokenURL,
}, },
RedirectURL: oauthutil.RedirectLocalhostURL, RedirectURL: oauthutil.RedirectLocalhostURL,
} }
@@ -81,43 +78,37 @@ func init() {
NewFs: NewFs, NewFs: NewFs,
Config: func(name string, m configmap.Mapper) { Config: func(name string, m configmap.Mapper) {
ctx := context.TODO() ctx := context.TODO()
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm() {
return
}
}
srv := rest.NewClient(fshttp.NewClient(fs.Config)) refresh := false
fmt.Printf("\nDo you want to create a machine specific API key?\n\nRclone has it's own Jottacloud API KEY which works fine as long as one only uses rclone on a single machine. When you want to use rclone with this account on more than one machine it's recommended to create a machine specific API key. These keys can NOT be shared between machines.\n\n") if version, ok := m.Get("configVersion"); ok {
if config.Confirm() { ver, err := strconv.Atoi(version)
deviceRegistration, err := registerDevice(ctx, srv)
if err != nil { if err != nil {
log.Fatalf("Failed to register device: %v", err) log.Fatalf("Failed to parse config version - corrupted config")
} }
refresh = ver != configVersion
m.Set(configClientID, deviceRegistration.ClientID)
m.Set(configClientSecret, obscure.MustObscure(deviceRegistration.ClientSecret))
fs.Debugf(nil, "Got clientID '%s' and clientSecret '%s'", deviceRegistration.ClientID, deviceRegistration.ClientSecret)
} }
clientID, ok := m.Get(configClientID) if refresh {
if !ok { fmt.Printf("Config outdated - refreshing\n")
clientID = rcloneClientID } else {
tokenString, ok := m.Get("token")
if ok && tokenString != "" {
fmt.Printf("Already have a token - refresh?\n")
if !config.Confirm(false) {
return
}
}
} }
clientSecret, ok := m.Get(configClientSecret)
if !ok {
clientSecret = rcloneEncryptedClientSecret
}
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
fmt.Printf("Username> ") clientConfig := *fs.Config
username := config.ReadLine() clientConfig.UserAgent = "JottaCli 0.6.18626 windows-amd64"
password := config.GetPassword("Your Jottacloud password is only required during setup and will not be stored.") srv := rest.NewClient(fshttp.NewClient(&clientConfig))
token, err := doAuth(ctx, srv, username, password) fmt.Printf("Generate a personal login token here: https://www.jottacloud.com/web/secure\n")
fmt.Printf("Login Token> ")
loginToken := config.ReadLine()
token, err := doAuth(ctx, srv, loginToken, m)
if err != nil { if err != nil {
log.Fatalf("Failed to get oauth token: %s", err) log.Fatalf("Failed to get oauth token: %s", err)
} }
@@ -127,7 +118,7 @@ func init() {
} }
fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n") fmt.Printf("\nDo you want to use a non standard device/mountpoint e.g. for accessing files uploaded using the official Jottacloud client?\n\n")
if config.Confirm() { if config.Confirm(false) {
oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig) oAuthClient, _, err := oauthutil.NewClient(name, m, oauthConfig)
if err != nil { if err != nil {
log.Fatalf("Failed to load oAuthClient: %s", err) log.Fatalf("Failed to load oAuthClient: %s", err)
@@ -143,6 +134,8 @@ func init() {
m.Set(configDevice, device) m.Set(configDevice, device)
m.Set(configMountpoint, mountpoint) m.Set(configMountpoint, mountpoint)
} }
m.Set("configVersion", strconv.Itoa(configVersion))
}, },
Options: []fs.Option{{ Options: []fs.Option{{
Name: "md5_memory_limit", Name: "md5_memory_limit",
@@ -249,67 +242,57 @@ func shouldRetry(resp *http.Response, err error) (bool, error) {
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
// registerDevice register a new device for use with the jottacloud API
func registerDevice(ctx context.Context, srv *rest.Client) (reg *api.DeviceRegistrationResponse, err error) {
// random generator to generate random device names
seededRand := rand.New(rand.NewSource(time.Now().UnixNano()))
randonDeviceNamePartLength := 21
randomDeviceNamePart := make([]byte, randonDeviceNamePartLength)
for i := range randomDeviceNamePart {
randomDeviceNamePart[i] = charset[seededRand.Intn(len(charset))]
}
randomDeviceName := "rclone-" + string(randomDeviceNamePart)
fs.Debugf(nil, "Trying to register device '%s'", randomDeviceName)
values := url.Values{}
values.Set("device_id", randomDeviceName)
opts := rest.Opts{
Method: "POST",
RootURL: registerURL,
ContentType: "application/x-www-form-urlencoded",
ExtraHeaders: map[string]string{"Authorization": "Bearer c2xrZmpoYWRsZmFramhkc2xma2phaHNkbGZramhhc2xkZmtqaGFzZGxrZmpobGtq"},
Parameters: values,
}
var deviceRegistration *api.DeviceRegistrationResponse
_, err = srv.CallJSON(ctx, &opts, nil, &deviceRegistration)
return deviceRegistration, err
}
// doAuth runs the actual token request // doAuth runs the actual token request
func doAuth(ctx context.Context, srv *rest.Client, username, password string) (token oauth2.Token, err error) { func doAuth(ctx context.Context, srv *rest.Client, loginTokenBase64 string, m configmap.Mapper) (token oauth2.Token, err error) {
loginTokenBytes, err := base64.StdEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, err
}
// decode login token
var loginToken api.LoginToken
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken)
if err != nil {
return token, err
}
// retrieve endpoint urls
opts := rest.Opts{
Method: "GET",
RootURL: loginToken.WellKnownLink,
}
var wellKnown api.WellKnown
_, err = srv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return token, err
}
// save the tokenurl
oauthConfig.Endpoint.AuthURL = wellKnown.TokenEndpoint
oauthConfig.Endpoint.TokenURL = wellKnown.TokenEndpoint
m.Set(configTokenURL, wellKnown.TokenEndpoint)
// prepare our token request with username and password // prepare our token request with username and password
values := url.Values{} values := url.Values{}
values.Set("grant_type", "PASSWORD") values.Set("client_id", "jottacli")
values.Set("password", password) values.Set("grant_type", "password")
values.Set("username", username) values.Set("password", loginToken.AuthToken)
values.Set("client_id", oauthConfig.ClientID) values.Set("scope", "offline_access+openid")
values.Set("client_secret", oauthConfig.ClientSecret) values.Set("username", loginToken.Username)
opts := rest.Opts{ values.Encode()
opts = rest.Opts{
Method: "POST", Method: "POST",
RootURL: oauthConfig.Endpoint.AuthURL, RootURL: oauthConfig.Endpoint.AuthURL,
ContentType: "application/x-www-form-urlencoded", ContentType: "application/x-www-form-urlencoded",
Parameters: values, Body: strings.NewReader(values.Encode()),
} }
// do the first request // do the first request
var jsonToken api.TokenJSON var jsonToken api.TokenJSON
resp, err := srv.CallJSON(ctx, &opts, nil, &jsonToken) _, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil { if err != nil {
// if 2fa is enabled the first request is expected to fail. We will do another request with the 2fa code as an additional http header return token, err
if resp != nil {
if resp.Header.Get("X-JottaCloud-OTP") == "required; SMS" {
fmt.Printf("This account uses 2 factor authentication you will receive a verification code via SMS.\n")
fmt.Printf("Enter verification code> ")
authCode := config.ReadLine()
authCode = strings.Replace(authCode, "-", "", -1) // remove any "-" contained in the code so we have a 6 digit number
opts.ExtraHeaders = make(map[string]string)
opts.ExtraHeaders["X-Jottacloud-Otp"] = authCode
resp, err = srv.CallJSON(ctx, &opts, nil, &jsonToken)
}
}
} }
token.AccessToken = jsonToken.AccessToken token.AccessToken = jsonToken.AccessToken
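Note (illustration, not part of the change): the new doAuth exchanges the decoded login token for an OAuth token with a plain resource-owner-password grant, so the POST body sent to the token endpoint is just a form-encoded string. A minimal sketch, assuming net/url is imported as in the file and using placeholder credentials:

    values := url.Values{}
    values.Set("client_id", "jottacli")
    values.Set("grant_type", "password")
    values.Set("password", "<auth token from the decoded login token>") // placeholder
    values.Set("scope", "offline_access+openid")
    values.Set("username", "<username from the decoded login token>") // placeholder
    body := values.Encode() // keys come out sorted: client_id=jottacli&grant_type=password&...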
@@ -471,29 +454,6 @@ func (f *Fs) filePath(file string) string {
return urlPathEscape(f.filePathRaw(file)) return urlPathEscape(f.filePathRaw(file))
} }
// Jottacloud requires the grant_type 'refresh_token' string
// to be uppercase and throws a 400 Bad Request if we use the
// lower case used by the oauth2 module
//
// This filter catches all refresh requests, reads the body,
// changes the case and then sends it on
func grantTypeFilter(req *http.Request) {
if tokenURL == req.URL.String() {
// read the entire body
refreshBody, err := ioutil.ReadAll(req.Body)
if err != nil {
return
}
_ = req.Body.Close()
// make the refresh token upper case
refreshBody = []byte(strings.Replace(string(refreshBody), "grant_type=refresh_token", "grant_type=REFRESH_TOKEN", 1))
// set the new ReadCloser (with a dummy Close())
req.Body = ioutil.NopCloser(bytes.NewReader(refreshBody))
}
}
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO() ctx := context.TODO()
@@ -504,35 +464,37 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
return nil, err return nil, err
} }
rootIsDir := strings.HasSuffix(root, "/") // Check config version
root = parsePath(root) var ok bool
var version string
clientID, ok := m.Get(configClientID) if version, ok = m.Get("configVersion"); ok {
if !ok { ver, err := strconv.Atoi(version)
clientID = rcloneClientID if err != nil {
return nil, errors.New("Failed to parse config version")
}
ok = ver == configVersion
} }
clientSecret, ok := m.Get(configClientSecret)
if !ok { if !ok {
clientSecret = rcloneEncryptedClientSecret return nil, errors.New("Outdated config - please reconfigure this backend")
} }
oauthConfig.ClientID = clientID
oauthConfig.ClientSecret = obscure.MustReveal(clientSecret)
// the oauth client for the api servers needs // if custom endpoints are set use them else stick with defaults
// a filter to fix the grant_type issues (see above) if tokenURL, ok := m.Get(configTokenURL); ok {
oauthConfig.Endpoint.TokenURL = tokenURL
// jottacloud is weird. we need to use the tokenURL as authURL
oauthConfig.Endpoint.AuthURL = tokenURL
}
// Create OAuth Client
baseClient := fshttp.NewClient(fs.Config) baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface {
SetRequestFilter(f func(req *http.Request))
}); ok {
do.SetRequestFilter(grantTypeFilter)
} else {
fs.Debugf(name+":", "Couldn't add request filter - uploads will fail")
}
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient) oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client") return nil, errors.Wrap(err, "Failed to configure Jottacloud oauth client")
} }
rootIsDir := strings.HasSuffix(root, "/")
root = parsePath(root)
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,


@@ -16,6 +16,7 @@ import (
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/encodings" "github.com/rclone/rclone/fs/encodings"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
httpclient "github.com/koofr/go-httpclient" httpclient "github.com/koofr/go-httpclient"
@@ -259,7 +260,9 @@ func NewFs(name, root string, m configmap.Mapper) (ff fs.Fs, err error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
client := koofrclient.NewKoofrClient(opt.Endpoint, false) httpClient := httpclient.New()
httpClient.Client = fshttp.NewClient(fs.Config)
client := koofrclient.NewKoofrClientWithHTTPClient(opt.Endpoint, httpClient)
basicAuth := fmt.Sprintf("Basic %s", basicAuth := fmt.Sprintf("Basic %s",
base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass))) base64.StdEncoding.EncodeToString([]byte(opt.User+":"+pass)))
client.HTTPClient.Headers.Set("Authorization", basicAuth) client.HTTPClient.Headers.Set("Authorization", basicAuth)


@@ -350,7 +350,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
err = errors.Wrapf(err, "failed to open directory %q", dir) err = errors.Wrapf(err, "failed to open directory %q", dir)
fs.Errorf(dir, "%v", err) fs.Errorf(dir, "%v", err)
if isPerm { if isPerm {
accounting.Stats(ctx).Error(fserrors.NoRetryError(err)) _ = accounting.Stats(ctx).Error(fserrors.NoRetryError(err))
err = nil // ignore error but fail sync err = nil // ignore error but fail sync
} }
return nil, err return nil, err
@@ -386,7 +386,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if fierr != nil { if fierr != nil {
err = errors.Wrapf(err, "failed to read directory %q", namepath) err = errors.Wrapf(err, "failed to read directory %q", namepath)
fs.Errorf(dir, "%v", fierr) fs.Errorf(dir, "%v", fierr)
accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync _ = accounting.Stats(ctx).Error(fserrors.NoRetryError(fierr)) // fail the sync
continue continue
} }
fis = append(fis, fi) fis = append(fis, fi)
@@ -409,7 +409,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Skip bad symlinks // Skip bad symlinks
err = fserrors.NoRetryError(errors.Wrap(err, "symlink")) err = fserrors.NoRetryError(errors.Wrap(err, "symlink"))
fs.Errorf(newRemote, "Listing error: %v", err) fs.Errorf(newRemote, "Listing error: %v", err)
accounting.Stats(ctx).Error(err) err = accounting.Stats(ctx).Error(err)
continue continue
} }
if err != nil { if err != nil {
@@ -820,10 +820,10 @@ func (file *localOpenFile) Read(p []byte) (n int, err error) {
return 0, errors.Wrap(err, "can't read status of source file while transferring") return 0, errors.Wrap(err, "can't read status of source file while transferring")
} }
if file.o.size != fi.Size() { if file.o.size != fi.Size() {
return 0, errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", file.o.size, fi.Size()) return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (size changed from %d to %d)", file.o.size, fi.Size()))
} }
if !file.o.modTime.Equal(fi.ModTime()) { if !file.o.modTime.Equal(fi.ModTime()) {
return 0, errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", file.o.modTime, fi.ModTime()) return 0, fserrors.NoLowLevelRetryError(errors.Errorf("can't copy - source file is being updated (mod time changed from %v to %v)", file.o.modTime, fi.ModTime()))
} }
} }
@@ -956,7 +956,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if !o.translatedLink { if !o.translatedLink {
f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666) f, err := file.OpenFile(o.path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
if err != nil { if err != nil {
return err if runtime.GOOS == "windows" && os.IsPermission(err) {
// If permission denied on Windows might be trying to update a
// hidden file, in which case try opening without CREATE
// See: https://stackoverflow.com/questions/13215716/ioerror-errno-13-permission-denied-when-trying-to-open-hidden-file-in-w-mod
f, err = file.OpenFile(o.path, os.O_WRONLY|os.O_TRUNC, 0666)
if err != nil {
return err
}
} else {
return err
}
} }
// Pre-allocate the file for performance reasons // Pre-allocate the file for performance reasons
err = preAllocate(src.Size(), f) err = preAllocate(src.Size(), f)


@@ -269,7 +269,7 @@ func qsServiceConnection(opt *Options) (*qs.Service, error) {
cf.Protocol = protocol cf.Protocol = protocol
cf.Host = host cf.Host = host
cf.Port = port cf.Port = port
cf.ConnectionRetries = opt.ConnectionRetries // unsupported in v3.1: cf.ConnectionRetries = opt.ConnectionRetries
cf.Connection = fshttp.NewClient(fs.Config) cf.Connection = fshttp.NewClient(fs.Config)
return qs.Init(cf) return qs.Init(cf)


@@ -14,7 +14,9 @@ What happens if you CTRL-C a multipart upload
*/ */
import ( import (
"bytes"
"context" "context"
"crypto/md5"
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"encoding/xml" "encoding/xml"
@@ -24,8 +26,10 @@ import (
"net/url" "net/url"
"path" "path"
"regexp" "regexp"
"sort"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws"
@@ -33,12 +37,12 @@ import (
"github.com/aws/aws-sdk-go/aws/corehandlers" "github.com/aws/aws-sdk-go/aws/corehandlers"
"github.com/aws/aws-sdk-go/aws/credentials" "github.com/aws/aws-sdk-go/aws/credentials"
"github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds" "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
"github.com/aws/aws-sdk-go/aws/defaults" "github.com/aws/aws-sdk-go/aws/defaults"
"github.com/aws/aws-sdk-go/aws/ec2metadata" "github.com/aws/aws-sdk-go/aws/ec2metadata"
"github.com/aws/aws-sdk-go/aws/request" "github.com/aws/aws-sdk-go/aws/request"
"github.com/aws/aws-sdk-go/aws/session" "github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/s3" "github.com/aws/aws-sdk-go/service/s3"
"github.com/aws/aws-sdk-go/service/s3/s3manager"
"github.com/ncw/swift" "github.com/ncw/swift"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -51,7 +55,9 @@ import (
"github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/bucket" "github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
) )
const enc = encodings.S3 const enc = encodings.S3
@@ -159,6 +165,9 @@ func init() {
}, { }, {
Value: "ap-south-1", Value: "ap-south-1",
Help: "Asia Pacific (Mumbai)\nNeeds location constraint ap-south-1.", Help: "Asia Pacific (Mumbai)\nNeeds location constraint ap-south-1.",
}, {
Value: "ap-east-1",
Help: "Asia Patific (Hong Kong) Region\nNeeds location constraint ap-east-1.",
}, { }, {
Value: "sa-east-1", Value: "sa-east-1",
Help: "South America (Sao Paulo) Region\nNeeds location constraint sa-east-1.", Help: "South America (Sao Paulo) Region\nNeeds location constraint sa-east-1.",
@@ -427,6 +436,9 @@ func init() {
}, { }, {
Value: "ap-south-1", Value: "ap-south-1",
Help: "Asia Pacific (Mumbai)", Help: "Asia Pacific (Mumbai)",
}, {
Value: "ap-east-1",
Help: "Asia Pacific (Hong Kong)",
}, { }, {
Value: "sa-east-1", Value: "sa-east-1",
Help: "South America (Sao Paulo) Region.", Help: "South America (Sao Paulo) Region.",
@@ -693,16 +705,37 @@ The minimum is 0 and the maximum is 5GB.`,
Name: "chunk_size", Name: "chunk_size",
Help: `Chunk size to use for uploading. Help: `Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded When uploading files larger than upload_cutoff or files with unknown
as multipart uploads using this chunk size. size (eg from "rclone rcat" or uploaded with "rclone mount" or google
photos or google docs) they will be uploaded as multipart uploads
using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered Note that "--s3-upload-concurrency" chunks of this size are buffered
in memory per transfer. in memory per transfer.
If you are transferring large files over high speed links and you have If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.`, enough memory, then increasing this will speed up the transfers.
Rclone will automatically increase the chunk size when uploading a
large file of known size to stay below the 10,000 chunks limit.
Files of unknown size are uploaded with the configured
chunk_size. Since the default chunk size is 5MB and there can be at
most 10,000 chunks, this means that by default the maximum size of
file you can stream upload is 48GB. If you wish to stream upload
larger files then you will need to increase chunk_size.`,
Default: minChunkSize, Default: minChunkSize,
Advanced: true, Advanced: true,
}, {
Name: "copy_cutoff",
Help: `Cutoff for switching to multipart copy
Any files larger than this that need to be server side copied will be
copied in chunks of this size.
The minimum is 0 and the maximum is 5GB.`,
Default: fs.SizeSuffix(maxSizeForCopy),
Advanced: true,
}, { }, {
Name: "disable_checksum", Name: "disable_checksum",
Help: "Don't store MD5 checksum with object metadata", Help: "Don't store MD5 checksum with object metadata",
@@ -733,7 +766,9 @@ if false then rclone will use virtual path style. See [the AWS S3
docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro) docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro)
for more info. for more info.
Some providers (eg Aliyun OSS or Netease COS) require this set to false.`, Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to
false - rclone will do this automatically based on the provider
setting.`,
Default: true, Default: true,
Advanced: true, Advanced: true,
}, { }, {
@@ -765,19 +800,29 @@ WARNING: Storing parts of an incomplete multipart upload counts towards space us
`, `,
Default: false, Default: false,
Advanced: true, Advanced: true,
}, {
Name: "list_chunk",
Help: `Size of listing chunk (response list for each ListObject S3 request).
This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification.
Most services truncate the response list to 1000 objects even if more than that is requested.
In AWS S3 this is a global maximum and cannot be changed, see [AWS S3](https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html).
In Ceph, this can be increased with the "rgw list buckets max chunk" option.
`,
Default: 1000,
Advanced: true,
}}, }},
}) })
} }
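Note (worked example, not part of the change): the chunk_size help above implies a hard ceiling for streamed uploads because at most 10,000 parts are allowed per object. A rough sketch of the arithmetic, with assumed values:

    const maxParts = 10000
    chunk := int64(5 * 1024 * 1024)              // default chunk_size of 5 MiB
    maxStream := chunk * maxParts                // ≈ 48.8 GiB - largest object that can be streamed
    target := int64(1) << 40                     // to stream a 1 TiB file instead...
    needed := (target + maxParts - 1) / maxParts // ...chunk_size must be at least ≈ 105 MiB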
// Constants // Constants
const ( const (
metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime metaMtime = "Mtime" // the meta key to store mtime in - eg X-Amz-Meta-Mtime
metaMD5Hash = "Md5chksum" // the meta key to store md5hash in metaMD5Hash = "Md5chksum" // the meta key to store md5hash in
listChunkSize = 1000 // number of items to read at once maxRetries = 10 // number of retries to make of operations
maxRetries = 10 // number of retries to make of operations maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY
maxSizeForCopy = 5 * 1024 * 1024 * 1024 // The maximum size of object we can COPY maxUploadParts = 10000 // maximum allowed number of parts in a multi-part upload
maxFileSize = 5 * 1024 * 1024 * 1024 * 1024 // largest possible upload file size minChunkSize = fs.SizeSuffix(1024 * 1024 * 5)
minChunkSize = fs.SizeSuffix(s3manager.MinUploadPartSize)
defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024) defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024) maxUploadCutoff = fs.SizeSuffix(5 * 1024 * 1024 * 1024)
minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep. minSleep = 10 * time.Millisecond // In case of error, start at 10ms sleep.
@@ -798,6 +843,7 @@ type Options struct {
SSEKMSKeyID string `config:"sse_kms_key_id"` SSEKMSKeyID string `config:"sse_kms_key_id"`
StorageClass string `config:"storage_class"` StorageClass string `config:"storage_class"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"` ChunkSize fs.SizeSuffix `config:"chunk_size"`
DisableChecksum bool `config:"disable_checksum"` DisableChecksum bool `config:"disable_checksum"`
SessionToken string `config:"session_token"` SessionToken string `config:"session_token"`
@@ -806,6 +852,7 @@ type Options struct {
V2Auth bool `config:"v2_auth"` V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"` UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
LeavePartsOnError bool `config:"leave_parts_on_error"` LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int64 `config:"list_chunk"`
} }
// Fs represents a remote s3 server // Fs represents a remote s3 server
@@ -961,7 +1008,12 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
Client: ec2metadata.New(session.New(), &aws.Config{ Client: ec2metadata.New(session.New(), &aws.Config{
HTTPClient: lowTimeoutClient, HTTPClient: lowTimeoutClient,
}), }),
ExpiryWindow: 3, ExpiryWindow: 3 * time.Minute,
},
// Pick up IAM role if we are in EKS
&stscreds.WebIdentityRoleProvider{
ExpiryWindow: 3 * time.Minute,
}, },
} }
cred := credentials.NewChainCredentials(providers) cred := credentials.NewChainCredentials(providers)
@@ -984,7 +1036,7 @@ func s3Connection(opt *Options) (*s3.S3, *session.Session, error) {
if opt.Region == "" { if opt.Region == "" {
opt.Region = "us-east-1" opt.Region = "us-east-1"
} }
if opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint { if opt.Provider == "AWS" || opt.Provider == "Alibaba" || opt.Provider == "Netease" || opt.UseAccelerateEndpoint {
opt.ForcePathStyle = false opt.ForcePathStyle = false
} }
awsConfig := aws.NewConfig(). awsConfig := aws.NewConfig().
@@ -1232,7 +1284,6 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
if directory != "" { if directory != "" {
directory += "/" directory += "/"
} }
maxKeys := int64(listChunkSize)
delimiter := "" delimiter := ""
if !recurse { if !recurse {
delimiter = "/" delimiter = "/"
@@ -1260,7 +1311,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
Bucket: &bucket, Bucket: &bucket,
Delimiter: &delimiter, Delimiter: &delimiter,
Prefix: &directory, Prefix: &directory,
MaxKeys: &maxKeys, MaxKeys: &f.opt.ListChunk,
Marker: marker, Marker: marker,
} }
if urlEncodeListings { if urlEncodeListings {
@@ -1376,6 +1427,12 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
} else { } else {
marker = resp.NextMarker marker = resp.NextMarker
} }
if urlEncodeListings {
*marker, err = url.QueryUnescape(*marker)
if err != nil {
return errors.Wrapf(err, "failed to URL decode NextMarker %q", *marker)
}
}
} }
return nil return nil
} }
@@ -1642,7 +1699,7 @@ func (f *Fs) copy(ctx context.Context, req *s3.CopyObjectInput, dstBucket, dstPa
req.StorageClass = &f.opt.StorageClass req.StorageClass = &f.opt.StorageClass
} }
if srcSize >= int64(f.opt.UploadCutoff) { if srcSize >= int64(f.opt.CopyCutoff) {
return f.copyMultipart(ctx, req, dstBucket, dstPath, srcBucket, srcPath, srcSize) return f.copyMultipart(ctx, req, dstBucket, dstPath, srcBucket, srcPath, srcSize)
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
@@ -1655,8 +1712,8 @@ func calculateRange(partSize, partIndex, numParts, totalSize int64) string {
start := partIndex * partSize start := partIndex * partSize
var ends string var ends string
if partIndex == numParts-1 { if partIndex == numParts-1 {
if totalSize >= 0 { if totalSize >= 1 {
ends = strconv.FormatInt(totalSize, 10) ends = strconv.FormatInt(totalSize-1, 10)
} }
} else { } else {
ends = strconv.FormatInt(start+partSize-1, 10) ends = strconv.FormatInt(start+partSize-1, 10)
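Note (illustration, not part of the change): HTTP-style byte ranges, as used for the S3 multipart copy, are inclusive of both ends, which is what the calculateRange fix above accounts for. With assumed numbers, a 10 byte object copied in 4 byte parts splits as:

    // partIndex 0: start 0, end 3    partIndex 1: start 4, end 7
    // partIndex 2 (last): start 8, end totalSize-1 = 9
    // the old code ended the last range at totalSize = 10, one byte past the object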
@@ -1693,7 +1750,7 @@ func (f *Fs) copyMultipart(ctx context.Context, req *s3.CopyObjectInput, dstBuck
} }
}() }()
partSize := int64(f.opt.ChunkSize) partSize := int64(f.opt.CopyCutoff)
numParts := (srcSize-1)/partSize + 1 numParts := (srcSize-1)/partSize + 1
var parts []*s3.CompletedPart var parts []*s3.CompletedPart
@@ -1921,11 +1978,6 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
} }
o.meta[metaMtime] = aws.String(swift.TimeToFloatString(modTime)) o.meta[metaMtime] = aws.String(swift.TimeToFloatString(modTime))
if o.bytes >= maxSizeForCopy {
fs.Debugf(o, "SetModTime is unsupported for objects bigger than %v bytes", fs.SizeSuffix(maxSizeForCopy))
return nil
}
// Can't update metadata here, so return this error to force a recopy // Can't update metadata here, so return this error to force a recopy
if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" { if o.storageClass == "GLACIER" || o.storageClass == "DEEP_ARCHIVE" {
return fs.ErrorCantSetModTime return fs.ErrorCantSetModTime
@@ -1982,6 +2034,195 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, nil return resp.Body, nil
} }
var warnStreamUpload sync.Once
func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, size int64, in io.Reader) (err error) {
f := o.fs
// make concurrency machinery
concurrency := f.opt.UploadConcurrency
if concurrency < 1 {
concurrency = 1
}
bufs := make(chan []byte, concurrency)
defer func() {
// empty the channel on exit
close(bufs)
for range bufs {
}
}()
for i := 0; i < concurrency; i++ {
bufs <- nil
}
// calculate size of parts
partSize := int(f.opt.ChunkSize)
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5MB). With a maximum number of parts (10,000) this will be a file of
// 48GB which does not seem an unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
f.opt.ChunkSize, fs.SizeSuffix(partSize*maxUploadParts))
})
} else {
// Adjust partSize until the number of parts is small enough.
if size/int64(partSize) >= maxUploadParts {
// Calculate partition size rounded up to the nearest MB
partSize = int((((size / maxUploadParts) >> 20) + 1) << 20)
}
}
var cout *s3.CreateMultipartUploadOutput
err = f.pacer.Call(func() (bool, error) {
var err error
cout, err = f.c.CreateMultipartUploadWithContext(ctx, &s3.CreateMultipartUploadInput{
Bucket: req.Bucket,
ACL: req.ACL,
Key: req.Key,
ContentType: req.ContentType,
Metadata: req.Metadata,
ServerSideEncryption: req.ServerSideEncryption,
SSEKMSKeyId: req.SSEKMSKeyId,
StorageClass: req.StorageClass,
})
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to initialise")
}
uid := cout.UploadId
defer func() {
if o.fs.opt.LeavePartsOnError {
return
}
if err != nil {
// We can try to abort the upload, but ignore the error.
fs.Debugf(o, "Cancelling multipart upload")
errCancel := f.pacer.Call(func() (bool, error) {
_, err := f.c.AbortMultipartUploadWithContext(ctx, &s3.AbortMultipartUploadInput{
Bucket: req.Bucket,
Key: req.Key,
UploadId: uid,
RequestPayer: req.RequestPayer,
})
return f.shouldRetry(err)
})
if errCancel != nil {
fs.Debugf(o, "Failed to cancel multipart upload: %v", errCancel)
}
}
}()
var (
g, gCtx = errgroup.WithContext(ctx)
finished = false
partsMu sync.Mutex // to protect parts
parts []*s3.CompletedPart
off int64
)
for partNum := int64(1); !finished; partNum++ {
// Get a block of memory from the channel (which limits concurrency)
buf := <-bufs
if buf == nil {
buf = make([]byte, partSize)
}
// Read the chunk
var n int
n, err = readers.ReadFill(in, buf) // this can never return 0, nil
if err == io.EOF {
if n == 0 && partNum != 1 { // end if no data and if not first chunk
break
}
finished = true
} else if err != nil {
return errors.Wrap(err, "multipart upload failed to read source")
}
buf = buf[:n]
partNum := partNum
fs.Debugf(o, "multipart upload starting chunk %d size %v offset %v/%v", partNum, fs.SizeSuffix(n), fs.SizeSuffix(off), fs.SizeSuffix(size))
off += int64(n)
g.Go(func() (err error) {
partLength := int64(len(buf))
// create checksum of buffer for integrity checking
md5sumBinary := md5.Sum(buf)
md5sum := base64.StdEncoding.EncodeToString(md5sumBinary[:])
err = f.pacer.Call(func() (bool, error) {
uploadPartReq := &s3.UploadPartInput{
Body: bytes.NewReader(buf),
Bucket: req.Bucket,
Key: req.Key,
PartNumber: &partNum,
UploadId: uid,
ContentMD5: &md5sum,
ContentLength: &partLength,
RequestPayer: req.RequestPayer,
SSECustomerAlgorithm: req.SSECustomerAlgorithm,
SSECustomerKey: req.SSECustomerKey,
SSECustomerKeyMD5: req.SSECustomerKeyMD5,
}
uout, err := f.c.UploadPartWithContext(gCtx, uploadPartReq)
if err != nil {
if partNum <= int64(concurrency) {
return f.shouldRetry(err)
}
// retry all chunks once we have done the first batch
return true, err
}
partsMu.Lock()
parts = append(parts, &s3.CompletedPart{
PartNumber: &partNum,
ETag: uout.ETag,
})
partsMu.Unlock()
return false, nil
})
// return the memory
bufs <- buf[:partSize]
if err != nil {
return errors.Wrap(err, "multipart upload failed to upload part")
}
return nil
})
}
err = g.Wait()
if err != nil {
return err
}
// sort the completed parts by part number
sort.Slice(parts, func(i, j int) bool {
return *parts[i].PartNumber < *parts[j].PartNumber
})
err = f.pacer.Call(func() (bool, error) {
_, err := f.c.CompleteMultipartUploadWithContext(ctx, &s3.CompleteMultipartUploadInput{
Bucket: req.Bucket,
Key: req.Key,
MultipartUpload: &s3.CompletedMultipartUpload{
Parts: parts,
},
RequestPayer: req.RequestPayer,
UploadId: uid,
})
return f.shouldRetry(err)
})
if err != nil {
return errors.Wrap(err, "multipart upload failed to finalise")
}
return nil
}
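Note (worked example, not part of the change): the partSize adjustment near the top of uploadMultipart rounds the per-part size up to a whole MiB so a large upload of known size stays under the maxUploadParts (10,000) limit defined above. With an assumed 100 GiB file:

    size := int64(100) << 30    // 100 GiB, size known up front
    partSize := 5 * 1024 * 1024 // start at the 5 MiB minimum
    if size/int64(partSize) >= maxUploadParts {
        // size/maxUploadParts ≈ 10.7 MiB, rounded up to the next whole MiB
        partSize = int((((size / maxUploadParts) >> 20) + 1) << 20)
    }
    // partSize is now 11 MiB (11534336 bytes), giving about 9310 parts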
// Update the Object from in with modTime and size // Update the Object from in with modTime and size
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
@@ -1993,35 +2234,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
size := src.Size() size := src.Size()
multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff) multipart := size < 0 || size >= int64(o.fs.opt.UploadCutoff)
var uploader *s3manager.Uploader
if multipart {
uploader = s3manager.NewUploader(o.fs.ses, func(u *s3manager.Uploader) {
u.Concurrency = o.fs.opt.UploadConcurrency
u.LeavePartsOnError = o.fs.opt.LeavePartsOnError
u.S3 = o.fs.c
u.PartSize = int64(o.fs.opt.ChunkSize)
if size == -1 {
// Make parts as small as possible while still being able to upload to the
// S3 file size limit. Rounded up to nearest MB.
u.PartSize = (((maxFileSize / s3manager.MaxUploadParts) >> 20) + 1) << 20
return
}
// Adjust PartSize until the number of parts is small enough.
if size/u.PartSize >= s3manager.MaxUploadParts {
// Calculate partition size rounded up to the nearest MB
u.PartSize = (((size / s3manager.MaxUploadParts) >> 20) + 1) << 20
}
})
}
// Set the mtime in the meta data // Set the mtime in the meta data
metadata := map[string]*string{ metadata := map[string]*string{
metaMtime: aws.String(swift.TimeToFloatString(modTime)), metaMtime: aws.String(swift.TimeToFloatString(modTime)),
} }
// read the md5sum if available for non multipart and if // read the md5sum if available
// disable checksum isn't present. // - for non multipart
// - so we can add a ContentMD5
// - for multipart provided checksums aren't disabled
// - so we can add the md5sum in the metadata as metaMD5Hash
var md5sum string var md5sum string
if !multipart || !o.fs.opt.DisableChecksum { if !multipart || !o.fs.opt.DisableChecksum {
hash, err := src.Hash(ctx, hash.MD5) hash, err := src.Hash(ctx, hash.MD5)
@@ -2038,52 +2261,32 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Guess the content type // Guess the content type
mimeType := fs.MimeType(ctx, src) mimeType := fs.MimeType(ctx, src)
req := s3.PutObjectInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
if multipart { if multipart {
req := s3manager.UploadInput{ err = o.uploadMultipart(ctx, &req, size, in)
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
Body: in,
ContentType: &mimeType,
Metadata: metadata,
//ContentLength: &size,
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
_, err = uploader.UploadWithContext(ctx, &req)
return o.fs.shouldRetry(err)
})
if err != nil { if err != nil {
return err return err
} }
} else { } else {
req := s3.PutObjectInput{
Bucket: &bucket,
ACL: &o.fs.opt.ACL,
Key: &bucketPath,
ContentType: &mimeType,
Metadata: metadata,
}
if md5sum != "" {
req.ContentMD5 = &md5sum
}
if o.fs.opt.ServerSideEncryption != "" {
req.ServerSideEncryption = &o.fs.opt.ServerSideEncryption
}
if o.fs.opt.SSEKMSKeyID != "" {
req.SSEKMSKeyId = &o.fs.opt.SSEKMSKeyID
}
if o.fs.opt.StorageClass != "" {
req.StorageClass = &o.fs.opt.StorageClass
}
// Create the request // Create the request
putObj, _ := o.fs.c.PutObjectRequest(&req) putObj, _ := o.fs.c.PutObjectRequest(&req)


@@ -29,15 +29,17 @@ import (
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/env" "github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/readers"
sshagent "github.com/xanzy/ssh-agent" sshagent "github.com/xanzy/ssh-agent"
"golang.org/x/crypto/ssh" "golang.org/x/crypto/ssh"
"golang.org/x/time/rate"
) )
const ( const (
connectionsPerSecond = 10 // don't make more than this many ssh connections/s
hashCommandNotSupported = "none" hashCommandNotSupported = "none"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
) )
var ( var (
@@ -154,6 +156,11 @@ Home directory can be found in a shared folder called "home"
Default: "", Default: "",
Help: "The command used to read sha1 hashes. Leave blank for autodetect.", Help: "The command used to read sha1 hashes. Leave blank for autodetect.",
Advanced: true, Advanced: true,
}, {
Name: "skip_links",
Default: false,
Help: "Set to skip any symlinks and any other non regular files.",
Advanced: true,
}}, }},
} }
fs.Register(fsi) fs.Register(fsi)
@@ -175,6 +182,7 @@ type Options struct {
SetModTime bool `config:"set_modtime"` SetModTime bool `config:"set_modtime"`
Md5sumCommand string `config:"md5sum_command"` Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"` Sha1sumCommand string `config:"sha1sum_command"`
SkipLinks bool `config:"skip_links"`
} }
// Fs stores the interface to the remote SFTP files // Fs stores the interface to the remote SFTP files
@@ -190,7 +198,7 @@ type Fs struct {
cachedHashes *hash.Set cachedHashes *hash.Set
poolMu sync.Mutex poolMu sync.Mutex
pool []*conn pool []*conn
connLimit *rate.Limiter // for limiting number of connections per second pacer *fs.Pacer // pacer for operations
} }
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading) // Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
@@ -270,10 +278,6 @@ func (c *conn) closed() error {
// Open a new connection to the SFTP server. // Open a new connection to the SFTP server.
func (f *Fs) sftpConnection() (c *conn, err error) { func (f *Fs) sftpConnection() (c *conn, err error) {
// Rate limit rate of new connections // Rate limit rate of new connections
err = f.connLimit.Wait(context.Background())
if err != nil {
return nil, errors.Wrap(err, "limiter failed in connect")
}
c = &conn{ c = &conn{
err: make(chan error, 1), err: make(chan error, 1),
} }
@@ -307,7 +311,14 @@ func (f *Fs) getSftpConnection() (c *conn, err error) {
if c != nil { if c != nil {
return c, nil return c, nil
} }
return f.sftpConnection() err = f.pacer.Call(func() (bool, error) {
c, err = f.sftpConnection()
if err != nil {
return true, err
}
return false, nil
})
return c, err
} }
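Note (illustration, not part of the change): the pacer replaces the old fixed 10-connections-per-second limiter, so a failed ssh dial is now retried with an increasing sleep, starting at minSleep (100ms) and, assuming the default pacer doubles on each failure, capped at maxSleep (2s). Reduced to its essentials, the retry loop above is:

    err = f.pacer.Call(func() (bool, error) {
        c, err = f.sftpConnection()
        return err != nil, err // true means sleep and retry, false means done
    })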
// Return an SFTP connection to the pool // Return an SFTP connection to the pool
@@ -465,7 +476,7 @@ func NewFsWithConnection(ctx context.Context, name string, root string, m config
config: sshConfig, config: sshConfig,
url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root, url: "sftp://" + opt.User + "@" + opt.Host + ":" + opt.Port + "/" + root,
mkdirLock: newStringLock(), mkdirLock: newStringLock(),
connLimit: rate.NewLimiter(rate.Limit(connectionsPerSecond), 1), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
@@ -595,12 +606,16 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
remote := path.Join(dir, info.Name()) remote := path.Join(dir, info.Name())
// If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to // If file is a symlink (not a regular file is the best cross platform test we can do), do a stat to
// pick up the size and type of the destination, instead of the size and type of the symlink. // pick up the size and type of the destination, instead of the size and type of the symlink.
if !info.Mode().IsRegular() { if !info.Mode().IsRegular() && !info.IsDir() {
if f.opt.SkipLinks {
// skip non regular file if SkipLinks is set
continue
}
oldInfo := info oldInfo := info
info, err = f.stat(remote) info, err = f.stat(remote)
if err != nil { if err != nil {
if !os.IsNotExist(err) { if !os.IsNotExist(err) {
fs.Errorf(remote, "stat of non-regular file/dir failed: %v", err) fs.Errorf(remote, "stat of non-regular file failed: %v", err)
} }
info = oldInfo info = oldInfo
} }

View File

@@ -7,6 +7,7 @@ import (
"context" "context"
"fmt" "fmt"
"io" "io"
"net/url"
"path" "path"
"strconv" "strconv"
"strings" "strings"
@@ -530,10 +531,10 @@ type listFn func(remote string, object *swift.Object, isDirectory bool) error
// //
// Set recurse to read sub directories // Set recurse to read sub directories
func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer bool, recurse bool, fn listFn) error { func (f *Fs) listContainerRoot(container, directory, prefix string, addContainer bool, recurse bool, fn listFn) error {
if prefix != "" { if prefix != "" && !strings.HasSuffix(prefix, "/") {
prefix += "/" prefix += "/"
} }
if directory != "" { if directory != "" && !strings.HasSuffix(directory, "/") {
directory += "/" directory += "/"
} }
// Options for ObjectsWalk // Options for ObjectsWalk
@@ -952,6 +953,18 @@ func (o *Object) isStaticLargeObject() (bool, error) {
return o.hasHeader("X-Static-Large-Object") return o.hasHeader("X-Static-Large-Object")
} }
func (o *Object) isInContainerVersioning(container string) (bool, error) {
_, headers, err := o.fs.c.Container(container)
if err != nil {
return false, err
}
xHistoryLocation := headers["X-History-Location"]
if len(xHistoryLocation) > 0 {
return true, nil
}
return false, nil
}
// Size returns the size of an object in bytes // Size returns the size of an object in bytes
func (o *Object) Size() int64 { func (o *Object) Size() int64 {
return o.size return o.size
@@ -1083,9 +1096,8 @@ func min(x, y int64) int64 {
// //
// if except is passed in then segments with that prefix won't be deleted // if except is passed in then segments with that prefix won't be deleted
func (o *Object) removeSegments(except string) error { func (o *Object) removeSegments(except string) error {
container, containerPath := o.split() segmentsContainer, prefix, err := o.getSegmentsDlo()
segmentsContainer := container + "_segments" err = o.fs.listContainerRoot(segmentsContainer, prefix, "", false, true, func(remote string, object *swift.Object, isDirectory bool) error {
err := o.fs.listContainerRoot(segmentsContainer, containerPath, "", false, true, func(remote string, object *swift.Object, isDirectory bool) error {
if isDirectory { if isDirectory {
return nil return nil
} }
@@ -1114,6 +1126,23 @@ func (o *Object) removeSegments(except string) error {
return nil return nil
} }
func (o *Object) getSegmentsDlo() (segmentsContainer string, prefix string, err error) {
if err = o.readMetaData(); err != nil {
return
}
dirManifest := o.headers["X-Object-Manifest"]
dirManifest, err = url.PathUnescape(dirManifest)
if err != nil {
return
}
delimiter := strings.Index(dirManifest, "/")
if len(dirManifest) == 0 || delimiter < 0 {
err = errors.New("Missing or wrong structure of manifest of Dynamic large object")
return
}
return dirManifest[:delimiter], dirManifest[delimiter+1:], nil
}
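Note (illustration, not part of the change): getSegmentsDlo splits the X-Object-Manifest header at the first "/" - the part before it is the segments container and the rest is the object prefix. With an assumed header value:

    dirManifest := "mycontainer_segments/path/to/large.bin/1573049643.000000/10485760"
    i := strings.Index(dirManifest, "/")
    segmentsContainer := dirManifest[:i] // "mycontainer_segments"
    prefix := dirManifest[i+1:]          // "path/to/large.bin/1573049643.000000/10485760"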
// urlEncode encodes a string so that it is a valid URL // urlEncode encodes a string so that it is a valid URL
// //
// We don't use any of Go's standard methods as we need `/` not // We don't use any of Go's standard methods as we need `/` not
@@ -1300,12 +1329,9 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
// Remove an object // Remove an object
func (o *Object) Remove(ctx context.Context) error { func (o *Object) Remove(ctx context.Context) (err error) {
container, containerPath := o.split() container, containerPath := o.split()
isDynamicLargeObject, err := o.isDynamicLargeObject()
if err != nil {
return err
}
// Remove file/manifest first // Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(container, containerPath) err = o.fs.c.ObjectDelete(container, containerPath)
@@ -1314,12 +1340,22 @@ func (o *Object) Remove(ctx context.Context) error {
if err != nil { if err != nil {
return err return err
} }
isDynamicLargeObject, err := o.isDynamicLargeObject()
if err != nil {
return err
}
// ...then segments if required // ...then segments if required
if isDynamicLargeObject { if isDynamicLargeObject {
err = o.removeSegments("") isInContainerVersioning, err := o.isInContainerVersioning(container)
if err != nil { if err != nil {
return err return err
} }
if !isInContainerVersioning {
err = o.removeSegments("")
if err != nil {
return err
}
}
} }
return nil return nil
} }

View File

@@ -113,7 +113,8 @@ type Fs struct {
canStream bool // set if can stream canStream bool // set if can stream
useOCMtime bool // set if can use X-OC-Mtime useOCMtime bool // set if can use X-OC-Mtime
retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default) retryWithZeroDepth bool // some vendors (sharepoint) won't list files when Depth is 1 (our default)
hasChecksums bool // set if can use owncloud style checksums hasMD5 bool // set if can use owncloud style checksums for MD5
hasSHA1 bool // set if can use owncloud style checksums for SHA1
} }
// Object describes a webdav object // Object describes a webdav object
@@ -215,7 +216,7 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string, depth string)
}, },
NoRedirect: true, NoRedirect: true,
} }
if f.hasChecksums { if f.hasMD5 || f.hasSHA1 {
opts.Body = bytes.NewBuffer(owncloudProps) opts.Body = bytes.NewBuffer(owncloudProps)
} }
var result api.Multistatus var result api.Multistatus
@@ -383,7 +384,7 @@ func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// sets the BearerToken up // sets the BearerToken up
func (f *Fs) setBearerToken(token string) { func (f *Fs) setBearerToken(token string) {
f.opt.BearerToken = token f.opt.BearerToken = token
f.srv.SetHeader("Authorization", "BEARER "+token) f.srv.SetHeader("Authorization", "Bearer "+token)
} }
// fetch the bearer token using the command // fetch the bearer token using the command
@@ -430,11 +431,12 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
f.canStream = true f.canStream = true
f.precision = time.Second f.precision = time.Second
f.useOCMtime = true f.useOCMtime = true
f.hasChecksums = true f.hasMD5 = true
f.hasSHA1 = true
case "nextcloud": case "nextcloud":
f.precision = time.Second f.precision = time.Second
f.useOCMtime = true f.useOCMtime = true
f.hasChecksums = true f.hasSHA1 = true
case "sharepoint": case "sharepoint":
// To mount sharepoint, two Cookies are required // To mount sharepoint, two Cookies are required
// They have to be set instead of BasicAuth // They have to be set instead of BasicAuth
@@ -536,7 +538,7 @@ func (f *Fs) listAll(ctx context.Context, dir string, directoriesOnly bool, file
"Depth": depth, "Depth": depth,
}, },
} }
if f.hasChecksums { if f.hasMD5 || f.hasSHA1 {
opts.Body = bytes.NewBuffer(owncloudProps) opts.Body = bytes.NewBuffer(owncloudProps)
} }
var result api.Multistatus var result api.Multistatus
@@ -945,10 +947,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets. // Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set { func (f *Fs) Hashes() hash.Set {
if f.hasChecksums { hashes := hash.Set(hash.None)
return hash.NewHashSet(hash.MD5, hash.SHA1) if f.hasMD5 {
hashes.Add(hash.MD5)
} }
return hash.Set(hash.None) if f.hasSHA1 {
hashes.Add(hash.SHA1)
}
return hashes
} }
// About gets quota information // About gets quota information
@@ -1015,13 +1021,11 @@ func (o *Object) Remote() string {
// Hash returns the SHA1 or MD5 of an object returning a lowercase hex string // Hash returns the SHA1 or MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) { func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if o.fs.hasChecksums { if t == hash.MD5 && o.fs.hasMD5 {
switch t { return o.md5, nil
case hash.SHA1: }
return o.sha1, nil if t == hash.SHA1 && o.fs.hasSHA1 {
case hash.MD5: return o.sha1, nil
return o.md5, nil
}
} }
return "", hash.ErrUnsupported return "", hash.ErrUnsupported
} }
@@ -1042,10 +1046,14 @@ func (o *Object) setMetaData(info *api.Prop) (err error) {
o.hasMetaData = true o.hasMetaData = true
o.size = info.Size o.size = info.Size
o.modTime = time.Time(info.Modified) o.modTime = time.Time(info.Modified)
if o.fs.hasChecksums { if o.fs.hasMD5 || o.fs.hasSHA1 {
hashes := info.Hashes() hashes := info.Hashes()
o.sha1 = hashes[hash.SHA1] if o.fs.hasSHA1 {
o.md5 = hashes[hash.MD5] o.sha1 = hashes[hash.SHA1]
}
if o.fs.hasMD5 {
o.md5 = hashes[hash.MD5]
}
} }
return nil return nil
} }
@@ -1126,19 +1134,21 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365 ContentLength: &size, // FIXME this isn't necessary with owncloud - See https://github.com/nextcloud/nextcloud-snap/issues/365
ContentType: fs.MimeType(ctx, src), ContentType: fs.MimeType(ctx, src),
} }
if o.fs.useOCMtime || o.fs.hasChecksums { if o.fs.useOCMtime || o.fs.hasMD5 || o.fs.hasSHA1 {
opts.ExtraHeaders = map[string]string{} opts.ExtraHeaders = map[string]string{}
if o.fs.useOCMtime { if o.fs.useOCMtime {
opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9) opts.ExtraHeaders["X-OC-Mtime"] = fmt.Sprintf("%f", float64(src.ModTime(ctx).UnixNano())/1e9)
} }
if o.fs.hasChecksums { // Set one upload checksum
// Set an upload checksum - prefer SHA1 // Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5
// // Nextcloud stores the checksum you supply (SHA1 or MD5) but only stores one
// This is used as an upload integrity test. If we set if o.fs.hasSHA1 {
// only SHA1 here, owncloud will calculate the MD5 too.
if sha1, _ := src.Hash(ctx, hash.SHA1); sha1 != "" { if sha1, _ := src.Hash(ctx, hash.SHA1); sha1 != "" {
opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1 opts.ExtraHeaders["OC-Checksum"] = "SHA1:" + sha1
} else if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" { }
}
if o.fs.hasMD5 && opts.ExtraHeaders["OC-Checksum"] == "" {
if md5, _ := src.Hash(ctx, hash.MD5); md5 != "" {
opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5 opts.ExtraHeaders["OC-Checksum"] = "MD5:" + md5
} }
} }
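Note (illustration, not part of the change): with the hasMD5/hasSHA1 split only one upload checksum header is sent, preferring SHA1 when the vendor supports it. The header ends up looking like this (hash value is an arbitrary example):

    opts.ExtraHeaders["OC-Checksum"] = "SHA1:2fd4e1c67a2d28fced849ee1bb76e7391b93eb12"
    // falls back to "MD5:<hex digest>" when only an MD5 is available from the source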


@@ -1,5 +1,6 @@
@echo off @echo off
echo Setting environment variables for mingw+WinFsp compile echo Setting environment variables for mingw+WinFsp compile
set GOPATH=X:\go set GOPATH=Z:\go
set PATH=C:\Program Files\mingw-w64\i686-7.1.0-win32-dwarf-rt_v5-rev0\mingw32\bin;%PATH% rem set PATH=C:\Program Files\mingw-w64\i686-7.1.0-win32-dwarf-rt_v5-rev0\mingw32\bin;%PATH%
set PATH=C:\Program Files\mingw-w64\x86_64-8.1.0-win32-seh-rt_v6-rev0\mingw64\bin;%PATH%
set CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse set CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse


@@ -3,11 +3,18 @@ package authorize
import ( import (
"github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/flags"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
var (
noAutoBrowser bool
)
func init() { func init() {
cmd.Root.AddCommand(commandDefinition) cmd.Root.AddCommand(commandDefinition)
cmdFlags := commandDefinition.Flags()
flags.BoolVarP(cmdFlags, &noAutoBrowser, "auth-no-open-browser", "", false, "Do not automatically open auth link in default browser")
} }
var commandDefinition = &cobra.Command{ var commandDefinition = &cobra.Command{
@@ -16,9 +23,12 @@ var commandDefinition = &cobra.Command{
Long: ` Long: `
Remote authorization. Used to authorize a remote or headless Remote authorization. Used to authorize a remote or headless
rclone from a machine with a browser - use as instructed by rclone from a machine with a browser - use as instructed by
rclone config.`, rclone config.
Use the --auth-no-open-browser flag to prevent rclone from automatically
opening the auth link in the default browser.`,
Run: func(command *cobra.Command, args []string) { Run: func(command *cobra.Command, args []string) {
cmd.CheckArgs(1, 3, command, args) cmd.CheckArgs(1, 3, command, args)
config.Authorize(args) config.Authorize(args, noAutoBrowser)
}, },
} }
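Note (usage example, not part of the change): on the machine with a browser the new flag is used as, for example, rclone authorize "drive" --auth-no-open-browser, so the auth link can be copied by hand rather than opened in the default browser.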


@@ -82,7 +82,7 @@ func ShowVersion() {
func NewFsFile(remote string) (fs.Fs, string) { func NewFsFile(remote string) (fs.Fs, string) {
_, _, fsPath, err := fs.ParseRemote(remote) _, _, fsPath, err := fs.ParseRemote(remote)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err) log.Fatalf("Failed to create file system for %q: %v", remote, err)
} }
f, err := cache.Get(remote) f, err := cache.Get(remote)
@@ -92,7 +92,7 @@ func NewFsFile(remote string) (fs.Fs, string) {
case nil: case nil:
return f, "" return f, ""
default: default:
fs.CountError(err) err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err) log.Fatalf("Failed to create file system for %q: %v", remote, err)
} }
return nil, "" return nil, ""
@@ -107,13 +107,13 @@ func newFsFileAddFilter(remote string) (fs.Fs, string) {
if fileName != "" { if fileName != "" {
if !filter.Active.InActive() { if !filter.Active.InActive() {
err := errors.Errorf("Can't limit to single files when using filters: %v", remote) err := errors.Errorf("Can't limit to single files when using filters: %v", remote)
fs.CountError(err) err = fs.CountError(err)
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
// Limit transfers to this file // Limit transfers to this file
err := filter.Active.AddFile(fileName) err := filter.Active.AddFile(fileName)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatalf("Failed to limit to single file %q: %v", remote, err) log.Fatalf("Failed to limit to single file %q: %v", remote, err)
} }
} }
@@ -135,7 +135,7 @@ func NewFsSrc(args []string) fs.Fs {
func newFsDir(remote string) fs.Fs { func newFsDir(remote string) fs.Fs {
f, err := cache.Get(remote) f, err := cache.Get(remote)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatalf("Failed to create file system for %q: %v", remote, err) log.Fatalf("Failed to create file system for %q: %v", remote, err)
} }
return f return f
@@ -189,11 +189,11 @@ func NewFsSrcDstFiles(args []string) (fsrc fs.Fs, srcFileName string, fdst fs.Fs
fdst, err := cache.Get(dstRemote) fdst, err := cache.Get(dstRemote)
switch err { switch err {
case fs.ErrorIsFile: case fs.ErrorIsFile:
fs.CountError(err) _ = fs.CountError(err)
log.Fatalf("Source doesn't exist or is a directory and destination is a file") log.Fatalf("Source doesn't exist or is a directory and destination is a file")
case nil: case nil:
default: default:
fs.CountError(err) _ = fs.CountError(err)
log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err) log.Fatalf("Failed to create file system for destination %q: %v", dstRemote, err)
} }
return return
@@ -239,7 +239,7 @@ func Run(Retry bool, showStats bool, cmd *cobra.Command, f func() error) {
SigInfoHandler() SigInfoHandler()
for try := 1; try <= *retries; try++ { for try := 1; try <= *retries; try++ {
err = f() err = f()
fs.CountError(err) err = fs.CountError(err)
lastErr := accounting.GlobalStats().GetLastError() lastErr := accounting.GlobalStats().GetLastError()
if err == nil { if err == nil {
err = lastErr err = lastErr
@@ -386,12 +386,12 @@ func initConfig() {
fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile) fs.Infof(nil, "Creating CPU profile %q\n", *cpuProfile)
f, err := os.Create(*cpuProfile) f, err := os.Create(*cpuProfile)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatal(err) log.Fatal(err)
} }
err = pprof.StartCPUProfile(f) err = pprof.StartCPUProfile(f)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatal(err) log.Fatal(err)
} }
atexit.Register(func() { atexit.Register(func() {
@@ -405,17 +405,17 @@ func initConfig() {
fs.Infof(nil, "Saving Memory profile %q\n", *memProfile) fs.Infof(nil, "Saving Memory profile %q\n", *memProfile)
f, err := os.Create(*memProfile) f, err := os.Create(*memProfile)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatal(err) log.Fatal(err)
} }
err = pprof.WriteHeapProfile(f) err = pprof.WriteHeapProfile(f)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatal(err) log.Fatal(err)
} }
err = f.Close() err = f.Close()
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
log.Fatal(err) log.Fatal(err)
} }
}) })


@@ -371,7 +371,12 @@ func (fsys *FS) Write(path string, buff []byte, ofst int64, fh uint64) (n int) {
if errc != 0 { if errc != 0 {
return errc return errc
} }
n, err := handle.WriteAt(buff, ofst) var err error
if fsys.VFS.Opt.CacheMode < vfs.CacheModeWrites || handle.Node().Mode()&os.ModeAppend == 0 {
n, err = handle.WriteAt(buff, ofst)
} else {
n, err = handle.Write(buff)
}
if err != nil { if err != nil {
return translateError(err) return translateError(err)
} }

View File

@@ -21,6 +21,7 @@ import (
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/cmd/mountlib" "github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags" "github.com/rclone/rclone/vfs/vfsflags"
) )
@@ -207,7 +208,7 @@ func mount(f fs.Fs, mountpoint string) (*vfs.VFS, <-chan error, func() error, er
// If noModTime is set then it // If noModTime is set then it
func Mount(f fs.Fs, mountpoint string) error { func Mount(f fs.Fs, mountpoint string) error {
// Mount it // Mount it
FS, errChan, _, err := mount(f, mountpoint) FS, errChan, unmount, err := mount(f, mountpoint)
if err != nil { if err != nil {
return errors.Wrap(err, "failed to mount FUSE fs") return errors.Wrap(err, "failed to mount FUSE fs")
} }
@@ -217,6 +218,10 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1) sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP) signal.Notify(sigHup, syscall.SIGHUP)
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket { if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd") return errors.Wrap(err, "failed to notify systemd")
} }


@@ -88,7 +88,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
underlyingDst := cryptDst.UnWrap() underlyingDst := cryptDst.UnWrap()
underlyingHash, err := underlyingDst.Hash(ctx, hashType) underlyingHash, err := underlyingDst.Hash(ctx, hashType)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
fs.Errorf(dst, "Error reading hash from underlying %v: %v", underlyingDst, err) fs.Errorf(dst, "Error reading hash from underlying %v: %v", underlyingDst, err)
return true, false return true, false
} }
@@ -97,7 +97,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
} }
cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType) cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType)
if err != nil { if err != nil {
fs.CountError(err) err = fs.CountError(err)
fs.Errorf(dst, "Error computing hash: %v", err) fs.Errorf(dst, "Error computing hash: %v", err)
return true, false return true, false
} }
@@ -106,7 +106,7 @@ func cryptCheck(ctx context.Context, fdst, fsrc fs.Fs) error {
} }
if cryptHash != underlyingHash { if cryptHash != underlyingHash {
err = errors.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash) err = errors.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
fs.CountError(err) err = fs.CountError(err)
fs.Errorf(src, err.Error()) fs.Errorf(src, err.Error())
return true, false return true, false
} }


@@ -10,6 +10,7 @@ import (
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configflags" "github.com/rclone/rclone/fs/config/configflags"
"github.com/rclone/rclone/fs/filter/filterflags" "github.com/rclone/rclone/fs/filter/filterflags"
"github.com/rclone/rclone/fs/log/logflags"
"github.com/rclone/rclone/fs/rc/rcflags" "github.com/rclone/rclone/fs/rc/rcflags"
"github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/atexit"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@@ -46,10 +47,11 @@ __rclone_custom_func() {
else else
__rclone_init_completion -n : || return __rclone_init_completion -n : || return
fi fi
local rclone=(command rclone --ask-password=false)
if [[ $cur != *:* ]]; then if [[ $cur != *:* ]]; then
local ifs=$IFS local ifs=$IFS
IFS=$'\n' IFS=$'\n'
local remotes=($(command rclone listremotes)) local remotes=($("${rclone[@]}" listremotes 2> /dev/null))
IFS=$ifs IFS=$ifs
local remote local remote
for remote in "${remotes[@]}"; do for remote in "${remotes[@]}"; do
@@ -68,7 +70,7 @@ __rclone_custom_func() {
fi fi
local ifs=$IFS local ifs=$IFS
IFS=$'\n' IFS=$'\n'
local lines=($(rclone lsf "${cur%%:*}:$prefix" 2>/dev/null)) local lines=($("${rclone[@]}" lsf "${cur%%:*}:$prefix" 2> /dev/null))
IFS=$ifs IFS=$ifs
local line local line
for line in "${lines[@]}"; do for line in "${lines[@]}"; do
@@ -168,6 +170,7 @@ func setupRootCommand(rootCmd *cobra.Command) {
configflags.AddFlags(pflag.CommandLine) configflags.AddFlags(pflag.CommandLine)
filterflags.AddFlags(pflag.CommandLine) filterflags.AddFlags(pflag.CommandLine)
rcflags.AddFlags(pflag.CommandLine) rcflags.AddFlags(pflag.CommandLine)
logflags.AddFlags(pflag.CommandLine)
Root.Run = runRoot Root.Run = runRoot
Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number") Root.Flags().BoolVarP(&version, "version", "V", false, "Print the version number")

View File

@@ -1,4 +1,4 @@
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount

View File

@@ -1,4 +1,4 @@
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount

View File

@@ -1,6 +1,6 @@
// FUSE main Fs // FUSE main Fs
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount

View File

@@ -1,10 +1,11 @@
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount
import ( import (
"context" "context"
"io" "io"
"os"
"bazil.org/fuse" "bazil.org/fuse"
fusefs "bazil.org/fuse/fs" fusefs "bazil.org/fuse/fs"
@@ -41,7 +42,12 @@ var _ fusefs.HandleWriter = (*FileHandle)(nil)
// Write data to the file handle // Write data to the file handle
func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) { func (fh *FileHandle) Write(ctx context.Context, req *fuse.WriteRequest, resp *fuse.WriteResponse) (err error) {
defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err) defer log.Trace(fh, "len=%d, offset=%d", len(req.Data), req.Offset)("written=%d, err=%v", &resp.Size, &err)
n, err := fh.Handle.WriteAt(req.Data, req.Offset) var n int
if fh.Handle.Node().VFS().Opt.CacheMode < vfs.CacheModeWrites || fh.Handle.Node().Mode()&os.ModeAppend == 0 {
n, err = fh.Handle.WriteAt(req.Data, req.Offset)
} else {
n, err = fh.Handle.Write(req.Data)
}
if err != nil { if err != nil {
return translateError(err) return translateError(err)
} }
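
The hunk above makes the FUSE write path use a sequential `Write` when the handle is in append mode and the cache mode is at least `writes`, and a positional `WriteAt` otherwise. The following standalone sketch, using a plain `os.File` rather than rclone's VFS handle, illustrates that distinction (file name and data are arbitrary).

```go
// Illustration only: positional writes go through WriteAt on a normal handle,
// while a handle opened with O_APPEND is written sequentially with Write
// (Go's os package rejects WriteAt on a file opened with O_APPEND).
package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	name := "append-demo.txt"
	defer os.Remove(name)

	// Positional write on a normal (non-append) handle.
	f, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
	if err != nil {
		panic(err)
	}
	if _, err := f.WriteAt([]byte("0123456789"), 0); err != nil {
		panic(err)
	}
	_ = f.Close()

	// Sequential write on an O_APPEND handle adds to the end of the file.
	f, err = os.OpenFile(name, os.O_WRONLY|os.O_APPEND, 0666)
	if err != nil {
		panic(err)
	}
	if _, err := f.Write([]byte("10")); err != nil {
		panic(err)
	}
	_ = f.Close()

	data, _ := ioutil.ReadFile(name)
	fmt.Println(string(data)) // 012345678910
}
```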

View File

@@ -1,6 +1,6 @@
// Package mount implements a FUSE mounting system for rclone remotes. // Package mount implements a FUSE mounting system for rclone remotes.
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount
@@ -32,12 +32,10 @@ func mountOptions(device string) (options []fuse.MountOption) {
fuse.Subtype("rclone"), fuse.Subtype("rclone"),
fuse.FSName(device), fuse.FSName(device),
fuse.VolumeName(mountlib.VolumeName), fuse.VolumeName(mountlib.VolumeName),
fuse.AsyncRead(),
// Options from benchmarking in the fuse module // Options from benchmarking in the fuse module
//fuse.MaxReadahead(64 * 1024 * 1024), //fuse.MaxReadahead(64 * 1024 * 1024),
//fuse.AsyncRead(), - FIXME this causes
// ReadFileHandle.Read error: read /home/files/ISOs/xubuntu-15.10-desktop-amd64.iso: bad file descriptor
// which is probably related to errors people are having
//fuse.WritebackCache(), //fuse.WritebackCache(),
} }
if mountlib.NoAppleDouble { if mountlib.NoAppleDouble {
@@ -139,6 +137,9 @@ func Mount(f fs.Fs, mountpoint string) error {
sigHup := make(chan os.Signal, 1) sigHup := make(chan os.Signal, 1)
signal.Notify(sigHup, syscall.SIGHUP) signal.Notify(sigHup, syscall.SIGHUP)
atexit.IgnoreSignals() atexit.IgnoreSignals()
atexit.Register(func() {
_ = unmount()
})
if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket { if err := sdnotify.Ready(); err != nil && err != sdnotify.ErrSdNotifyNoSocket {
return errors.Wrap(err, "failed to notify systemd") return errors.Wrap(err, "failed to notify systemd")
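
The hunk above registers an unmount callback so that an interrupted `rclone mount` cleans up its mountpoint on exit. A rough, standalone sketch of the same pattern follows; it uses plain signal handling instead of rclone's `lib/atexit`, and `registerAtExit`/`unmount` are illustrative names.

```go
// Sketch only: run a best-effort cleanup when the process is told to exit,
// so an interrupted mount does not leave a stale mountpoint behind.
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// registerAtExit runs fn once when SIGINT or SIGTERM arrives, then exits.
func registerAtExit(fn func()) {
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		<-sig
		fn()
		os.Exit(0)
	}()
}

func main() {
	unmount := func() error {
		fmt.Println("unmounting ...")
		return nil
	}
	registerAtExit(func() { _ = unmount() })

	// The real code would serve the mount here; block until signalled.
	select {}
}
```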

View File

@@ -1,4 +1,4 @@
// +build linux darwin freebsd // +build linux,go1.11 darwin,go1.11 freebsd,go1.11
package mount package mount

View File

@@ -1,6 +1,14 @@
// Build for mount for unsupported platforms to stop go complaining // Build for mount for unsupported platforms to stop go complaining
// about "no buildable Go source files " // about "no buildable Go source files "
// +build !linux,!darwin,!freebsd // Invert the build constraint: linux,go1.11 darwin,go1.11 freebsd,go1.11
//
// !((linux&&go1.11) || (darwin&&go1.11) || (freebsd&&go1.11))
// == !(linux&&go1.11) && !(darwin&&go1.11) && !(freebsd&&go1.11)
// == (!linux || !go1.11) && (!darwin || !go1.11) && (!freebsd || !go1.11)
// +build !linux !go1.11
// +build !darwin !go1.11
// +build !freebsd !go1.11
package mount package mount

View File

@@ -50,6 +50,8 @@ func TestRenameOpenHandle(t *testing.T) {
err = file.Close() err = file.Close()
require.NoError(t, err) require.NoError(t, err)
run.waitForWriters()
// verify file was renamed properly // verify file was renamed properly
run.checkDir(t, "renamebla 9") run.checkDir(t, "renamebla 9")

View File

@@ -34,6 +34,11 @@ func osCreate(name string) (*os.File, error) {
return os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666) return os.OpenFile(name, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
} }
// os.Create with append
func osAppend(name string) (*os.File, error) {
return os.OpenFile(name, os.O_WRONLY|os.O_APPEND, 0666)
}
// TestFileModTimeWithOpenWriters tests mod time on open files // TestFileModTimeWithOpenWriters tests mod time on open files
func TestFileModTimeWithOpenWriters(t *testing.T) { func TestFileModTimeWithOpenWriters(t *testing.T) {
run.skipIfNoFUSE(t) run.skipIfNoFUSE(t)

View File

@@ -6,6 +6,7 @@ import (
"context" "context"
"flag" "flag"
"fmt" "fmt"
"io"
"io/ioutil" "io/ioutil"
"log" "log"
"os" "os"
@@ -22,6 +23,7 @@ import (
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/file"
"github.com/rclone/rclone/vfs" "github.com/rclone/rclone/vfs"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
@@ -78,6 +80,7 @@ func RunTests(t *testing.T, fn MountFn) {
t.Run("TestWriteFileDoubleClose", TestWriteFileDoubleClose) t.Run("TestWriteFileDoubleClose", TestWriteFileDoubleClose)
t.Run("TestWriteFileFsync", TestWriteFileFsync) t.Run("TestWriteFileFsync", TestWriteFileFsync)
t.Run("TestWriteFileDup", TestWriteFileDup) t.Run("TestWriteFileDup", TestWriteFileDup)
t.Run("TestWriteFileAppend", TestWriteFileAppend)
}) })
log.Printf("Finished test run with cache mode %v (ok=%v)", cacheMode, ok) log.Printf("Finished test run with cache mode %v (ok=%v)", cacheMode, ok)
if !ok { if !ok {
@@ -344,9 +347,36 @@ func (r *Run) waitForWriters() {
run.vfs.WaitForWriters(10 * time.Second) run.vfs.WaitForWriters(10 * time.Second)
} }
// writeFile writes data to a file named by filename.
// If the file does not exist, writeFile creates it with permissions perm;
// otherwise writeFile truncates it before writing.
// If there is an error writing then writeFile
// deletes any existing file and tries again.
func writeFile(filename string, data []byte, perm os.FileMode) error {
f, err := file.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
if err != nil {
err = os.Remove(filename)
if err != nil {
return err
}
f, err = file.OpenFile(filename, os.O_WRONLY|os.O_CREATE, perm)
if err != nil {
return err
}
}
n, err := f.Write(data)
if err == nil && n < len(data) {
err = io.ErrShortWrite
}
if err1 := f.Close(); err == nil {
err = err1
}
return err
}
func (r *Run) createFile(t *testing.T, filepath string, contents string) { func (r *Run) createFile(t *testing.T, filepath string, contents string) {
filepath = r.path(filepath) filepath = r.path(filepath)
err := ioutil.WriteFile(filepath, []byte(contents), 0600) err := writeFile(filepath, []byte(contents), 0600)
require.NoError(t, err) require.NoError(t, err)
r.waitForWriters() r.waitForWriters()
} }

View File

@@ -2,6 +2,7 @@ package mounttest
import ( import (
"os" "os"
"runtime"
"testing" "testing"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -130,3 +131,48 @@ func TestWriteFileDup(t *testing.T) {
run.waitForWriters() run.waitForWriters()
run.rm(t, "to be synced") run.rm(t, "to be synced")
} }
// TestWriteFileAppend tests that O_APPEND works on cache backends >= writes
func TestWriteFileAppend(t *testing.T) {
run.skipIfNoFUSE(t)
if run.vfs.Opt.CacheMode < vfs.CacheModeWrites {
t.Skip("not supported on vfs-cache-mode < writes")
return
}
// TODO: Windows needs the v1.5 release of WinFsp to handle O_APPEND properly.
// Until it gets released, skip this test on Windows.
if runtime.GOOS == "windows" {
t.Skip("currently unsupported on Windows")
}
filepath := run.path("to be synced")
fh, err := osCreate(filepath)
require.NoError(t, err)
testData := []byte("0123456789")
appendData := []byte("10")
_, err = fh.Write(testData)
require.NoError(t, err)
err = fh.Close()
require.NoError(t, err)
fh, err = osAppend(filepath)
require.NoError(t, err)
_, err = fh.Write(appendData)
require.NoError(t, err)
err = fh.Close()
require.NoError(t, err)
info, err := os.Stat(filepath)
require.NoError(t, err)
require.EqualValues(t, len(testData)+len(appendData), info.Size())
run.waitForWriters()
run.rm(t, "to be synced")
}

View File

@@ -214,7 +214,7 @@ func withHeader(name string, value string, next http.Handler) http.Handler {
// serveError returns an http.StatusInternalServerError and logs the error // serveError returns an http.StatusInternalServerError and logs the error
func serveError(what interface{}, w http.ResponseWriter, text string, err error) { func serveError(what interface{}, w http.ResponseWriter, text string, err error) {
fs.CountError(err) err = fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err) fs.Errorf(what, "%s: %v", text, err)
http.Error(w, text+".", http.StatusInternalServerError) http.Error(w, text+".", http.StatusInternalServerError)
} }

View File

@@ -15,7 +15,6 @@ import (
"strconv" "strconv"
"sync" "sync"
ftp "github.com/goftp/server"
"github.com/pkg/errors" "github.com/pkg/errors"
"github.com/rclone/rclone/cmd" "github.com/rclone/rclone/cmd"
"github.com/rclone/rclone/cmd/serve/proxy" "github.com/rclone/rclone/cmd/serve/proxy"
@@ -29,6 +28,7 @@ import (
"github.com/rclone/rclone/vfs/vfsflags" "github.com/rclone/rclone/vfs/vfsflags"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/pflag" "github.com/spf13/pflag"
ftp "goftp.io/server"
) )
// Options contains options for the http Server // Options contains options for the http Server
@@ -155,7 +155,7 @@ func newServer(f fs.Fs, opt *Options) (*server, error) {
PassivePorts: opt.PassivePorts, PassivePorts: opt.PassivePorts,
Auth: s, // implemented by CheckPasswd method Auth: s, // implemented by CheckPasswd method
Logger: &Logger{}, Logger: &Logger{},
//TODO implement a maximum of https://godoc.org/github.com/goftp/server#ServerOpts //TODO implement a maximum of https://godoc.org/goftp.io/server#ServerOpts
} }
s.srv = ftp.NewServer(ftpopt) s.srv = ftp.NewServer(ftpopt)
return s, nil return s, nil
@@ -210,8 +210,8 @@ func (l *Logger) PrintResponse(sessionID string, code int, message string) {
// CheckPassword is called with the connection. // CheckPassword is called with the connection.
func findID(callerName []byte) (string, error) { func findID(callerName []byte) (string, error) {
// Dump the stack in this format // Dump the stack in this format
// github.com/rclone/rclone/vendor/github.com/goftp/server.(*Conn).Serve(0xc0000b2680) // github.com/rclone/rclone/vendor/goftp.io/server.(*Conn).Serve(0xc0000b2680)
// /home/ncw/go/src/github.com/rclone/rclone/vendor/github.com/goftp/server/conn.go:116 +0x11d // /home/ncw/go/src/github.com/rclone/rclone/vendor/goftp.io/server/conn.go:116 +0x11d
buf := make([]byte, 4096) buf := make([]byte, 4096)
n := runtime.Stack(buf, false) n := runtime.Stack(buf, false)
buf = buf[:n] buf = buf[:n]

View File

@@ -11,7 +11,6 @@ import (
"fmt" "fmt"
"testing" "testing"
ftp "github.com/goftp/server"
_ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/cmd/serve/servetest" "github.com/rclone/rclone/cmd/serve/servetest"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -19,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure" "github.com/rclone/rclone/fs/config/obscure"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
ftp "goftp.io/server"
) )
const ( const (

View File

@@ -68,7 +68,7 @@ func (d *Directory) AddEntry(remote string, isDir bool) {
// Error logs the error and if a ResponseWriter is given it writes a http.StatusInternalServerError // Error logs the error and if a ResponseWriter is given it writes a http.StatusInternalServerError
func Error(what interface{}, w http.ResponseWriter, text string, err error) { func Error(what interface{}, w http.ResponseWriter, text string, err error) {
fs.CountError(err) err = fs.CountError(err)
fs.Errorf(what, "%s: %v", text, err) fs.Errorf(what, "%s: %v", text, err)
if w != nil { if w != nil {
http.Error(w, text+".", http.StatusInternalServerError) http.Error(w, text+".", http.StatusInternalServerError)

View File

@@ -208,7 +208,10 @@ func (p *Proxy) call(user, pass string, passwordBytes []byte) (value interface{}
if err != nil { if err != nil {
return nil, false, err return nil, false, err
} }
pwHash, err := bcrypt.GenerateFromPassword(passwordBytes, bcrypt.DefaultCost) // The bcrypt cost is a compromise between security and speed. The password is looked up on every
// transaction for WebDAV so we store it lightly hashed. An attacker would most likely find it
// easier to go after the unencrypted password in memory.
pwHash, err := bcrypt.GenerateFromPassword(passwordBytes, bcrypt.MinCost)
if err != nil { if err != nil {
return nil, false, err return nil, false, err
} }
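
The hunk above lowers the bcrypt cost to `bcrypt.MinCost` because the hash is verified on every WebDAV transaction. Below is a minimal sketch of that trade-off, assuming the standard `golang.org/x/crypto/bcrypt` package; the password value is arbitrary.

```go
// Sketch only: a light bcrypt hash keeps per-request verification cheap
// while still avoiding a plain-text copy of the password.
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	password := []byte("example-password")

	// Light hash: cheap enough to verify on every request.
	pwHash, err := bcrypt.GenerateFromPassword(password, bcrypt.MinCost)
	if err != nil {
		panic(err)
	}

	// Later lookups compare the presented password against the stored hash.
	if err := bcrypt.CompareHashAndPassword(pwHash, password); err == nil {
		fmt.Println("password accepted")
	}
}
```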

View File

@@ -271,7 +271,7 @@ func (s *server) postObject(w http.ResponseWriter, r *http.Request, remote strin
_, err := operations.RcatSize(r.Context(), s.f, remote, r.Body, r.ContentLength, time.Now()) _, err := operations.RcatSize(r.Context(), s.f, remote, r.Body, r.ContentLength, time.Now())
if err != nil { if err != nil {
accounting.Stats(r.Context()).Error(err) err = accounting.Stats(r.Context()).Error(err)
fs.Errorf(remote, "Post request rcat error: %v", err) fs.Errorf(remote, "Post request rcat error: %v", err)
http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError) http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)

View File

@@ -192,7 +192,7 @@ Contributors
* Sheldon Rupp <me@shel.io> * Sheldon Rupp <me@shel.io>
* albertony <12441419+albertony@users.noreply.github.com> * albertony <12441419+albertony@users.noreply.github.com>
* cron410 <cron410@gmail.com> * cron410 <cron410@gmail.com>
* Anagh Kumar Baranwal <anaghk.dos@gmail.com> * Anagh Kumar Baranwal <anaghk.dos@gmail.com> <6824881+darthShadow@users.noreply.github.com>
* Felix Brucker <felix@felixbrucker.com> * Felix Brucker <felix@felixbrucker.com>
* Santiago Rodríguez <scollazo@users.noreply.github.com> * Santiago Rodríguez <scollazo@users.noreply.github.com>
* Craig Miskell <craig.miskell@fluxfederation.com> * Craig Miskell <craig.miskell@fluxfederation.com>
@@ -263,7 +263,7 @@ Contributors
* garry415 <garry.415@gmail.com> * garry415 <garry.415@gmail.com>
* forgems <forgems@gmail.com> * forgems <forgems@gmail.com>
* Florian Apolloner <florian@apolloner.eu> * Florian Apolloner <florian@apolloner.eu>
* Aleksandar Jankovic <office@ajankovic.com> * Aleksandar Janković <office@ajankovic.com> <ajankovic@users.noreply.github.com>
* Maran <maran@protonmail.com> * Maran <maran@protonmail.com>
* nguyenhuuluan434 <nguyenhuuluan434@gmail.com> * nguyenhuuluan434 <nguyenhuuluan434@gmail.com>
* Laura Hausmann <zotan@zotan.pw> <laura@hausmann.dev> * Laura Hausmann <zotan@zotan.pw> <laura@hausmann.dev>
@@ -306,3 +306,18 @@ Contributors
* Carlos Ferreyra <crypticmind@gmail.com> * Carlos Ferreyra <crypticmind@gmail.com>
* Saksham Khanna <sakshamkhanna@outlook.com> * Saksham Khanna <sakshamkhanna@outlook.com>
* dausruddin <5763466+dausruddin@users.noreply.github.com> * dausruddin <5763466+dausruddin@users.noreply.github.com>
* zero-24 <zero-24@users.noreply.github.com>
* Xiaoxing Ye <ye@xiaoxing.us>
* Barry Muldrey <barry@muldrey.net>
* Sebastian Brandt <sebastian.brandt@friday.de>
* Marco Molteni <marco.molteni@mailbox.org>
* Ankur Gupta <ankur0493@gmail.com>
* Maciej Zimnoch <maciej@scylladb.com>
* anuar45 <serdaliyev.anuar@gmail.com>
* Fernando <ferferga@users.noreply.github.com>
* David Cole <david.cole@sohonet.com>
* Wei He <git@weispot.com>
* Outvi V <19144373+outloudvi@users.noreply.github.com>
* Thomas Kriechbaumer <thomas@kriechbaumer.name>
* Tennix <tennix@users.noreply.github.com>
* Ole Schütt <ole@schuett.name>

View File

@@ -130,10 +130,10 @@ error message in such cases.
#### Chunk names #### Chunk names
The default chunk name format is `*.rclone-chunk.###`, hence by default The default chunk name format is `*.rclone_chunk.###`, hence by default
chunk names are `BIG_FILE_NAME.rclone-chunk.001`, chunk names are `BIG_FILE_NAME.rclone_chunk.001`,
`BIG_FILE_NAME.rclone-chunk.002` etc. You can configure a different name `BIG_FILE_NAME.rclone_chunk.002` etc. You can configure another name format
format using the `--chunker-name-format` option. The format uses asterisk using the `name_format` configuration file option. The format uses asterisk
`*` as a placeholder for the base file name and one or more consecutive `*` as a placeholder for the base file name and one or more consecutive
hash characters `#` as a placeholder for sequential chunk number. hash characters `#` as a placeholder for sequential chunk number.
There must be one and only one asterisk. The number of consecutive hash There must be one and only one asterisk. The number of consecutive hash
@@ -211,6 +211,9 @@ file hashing, configure chunker with `md5all` or `sha1all`. These two modes
guarantee given hash for all files. If wrapped remote doesn't support it, guarantee given hash for all files. If wrapped remote doesn't support it,
chunker will then add metadata to all files, even small. However, this can chunker will then add metadata to all files, even small. However, this can
double the amount of small files in storage and incur additional service charges. double the amount of small files in storage and incur additional service charges.
You can even use chunker to force md5/sha1 support in any other remote
at the expense of sidecar meta objects by setting e.g. `chunk_type=sha1all`
to force hashsums and `chunk_size=1P` to effectively disable chunking.
Normally, when a file is copied to chunker controlled remote, chunker Normally, when a file is copied to chunker controlled remote, chunker
will ask the file source for compatible file hash and revert to on-the-fly will ask the file source for compatible file hash and revert to on-the-fly
@@ -274,6 +277,14 @@ Chunker requires wrapped remote to support server side `move` (or `copy` +
This is because it internally renames temporary chunk files to their final This is because it internally renames temporary chunk files to their final
names when an operation completes successfully. names when an operation completes successfully.
Chunker encodes the chunk number in the file name, so with the default
`name_format` setting it adds 17 characters. Chunker also adds a 7 character
temporary suffix during operations. Many file systems limit a base file name
(without path) to 255 characters, and using rclone's crypt remote as the base
file system limits file names to 143 characters. Thus the maximum name length
is 231 for most files and 119 for chunker-over-crypt. If needed, the name
format can be changed to e.g. `*.rcc##` to save 10 characters (provided at
most 99 chunks per file).
Note that a move implemented using the copy-and-delete method may incur Note that a move implemented using the copy-and-delete method may incur
double charging with some cloud storage providers. double charging with some cloud storage providers.
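
As a quick check of the name-length arithmetic in the chunker notes above, here is a minimal sketch; the 255/143/17/7 figures are taken from that paragraph and the helper name is illustrative.

```go
// Back-of-the-envelope check of the limits quoted above: a 255-character
// base-name limit (143 when wrapping crypt), 17 characters for the default
// "*.rclone_chunk.###" suffix and 7 for the temporary suffix.
package main

import "fmt"

func maxBaseNameLen(limit, chunkSuffix, tempSuffix int) int {
	return limit - chunkSuffix - tempSuffix
}

func main() {
	fmt.Println(maxBaseNameLen(255, 17, 7)) // 231 for most file systems
	fmt.Println(maxBaseNameLen(143, 17, 7)) // 119 for chunker over crypt
}
```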

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone" title: "rclone"
slug: rclone slug: rclone
url: /commands/rclone/ url: /commands/rclone/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone about" title: "rclone about"
slug: rclone_about slug: rclone_about
url: /commands/rclone_about/ url: /commands/rclone_about/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone authorize" title: "rclone authorize"
slug: rclone_authorize slug: rclone_authorize
url: /commands/rclone_authorize/ url: /commands/rclone_authorize/
@@ -22,7 +22,8 @@ rclone authorize [flags]
### Options ### Options
``` ```
-h, --help help for authorize --auth-no-open-browser Do not automatically open auth link in default browser
-h, --help help for authorize
``` ```
See the [global flags page](/flags/) for global options not listed here. See the [global flags page](/flags/) for global options not listed here.

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone cachestats" title: "rclone cachestats"
slug: rclone_cachestats slug: rclone_cachestats
url: /commands/rclone_cachestats/ url: /commands/rclone_cachestats/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone cat" title: "rclone cat"
slug: rclone_cat slug: rclone_cat
url: /commands/rclone_cat/ url: /commands/rclone_cat/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone check" title: "rclone check"
slug: rclone_check slug: rclone_check
url: /commands/rclone_check/ url: /commands/rclone_check/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone cleanup" title: "rclone cleanup"
slug: rclone_cleanup slug: rclone_cleanup
url: /commands/rclone_cleanup/ url: /commands/rclone_cleanup/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config" title: "rclone config"
slug: rclone_config slug: rclone_config
url: /commands/rclone_config/ url: /commands/rclone_config/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config create" title: "rclone config create"
slug: rclone_config_create slug: rclone_config_create
url: /commands/rclone_config_create/ url: /commands/rclone_config_create/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config delete" title: "rclone config delete"
slug: rclone_config_delete slug: rclone_config_delete
url: /commands/rclone_config_delete/ url: /commands/rclone_config_delete/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config disconnect" title: "rclone config disconnect"
slug: rclone_config_disconnect slug: rclone_config_disconnect
url: /commands/rclone_config_disconnect/ url: /commands/rclone_config_disconnect/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config dump" title: "rclone config dump"
slug: rclone_config_dump slug: rclone_config_dump
url: /commands/rclone_config_dump/ url: /commands/rclone_config_dump/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config edit" title: "rclone config edit"
slug: rclone_config_edit slug: rclone_config_edit
url: /commands/rclone_config_edit/ url: /commands/rclone_config_edit/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config file" title: "rclone config file"
slug: rclone_config_file slug: rclone_config_file
url: /commands/rclone_config_file/ url: /commands/rclone_config_file/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config password" title: "rclone config password"
slug: rclone_config_password slug: rclone_config_password
url: /commands/rclone_config_password/ url: /commands/rclone_config_password/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config providers" title: "rclone config providers"
slug: rclone_config_providers slug: rclone_config_providers
url: /commands/rclone_config_providers/ url: /commands/rclone_config_providers/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config reconnect" title: "rclone config reconnect"
slug: rclone_config_reconnect slug: rclone_config_reconnect
url: /commands/rclone_config_reconnect/ url: /commands/rclone_config_reconnect/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config show" title: "rclone config show"
slug: rclone_config_show slug: rclone_config_show
url: /commands/rclone_config_show/ url: /commands/rclone_config_show/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config update" title: "rclone config update"
slug: rclone_config_update slug: rclone_config_update
url: /commands/rclone_config_update/ url: /commands/rclone_config_update/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone config userinfo" title: "rclone config userinfo"
slug: rclone_config_userinfo slug: rclone_config_userinfo
url: /commands/rclone_config_userinfo/ url: /commands/rclone_config_userinfo/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone copy" title: "rclone copy"
slug: rclone_copy slug: rclone_copy
url: /commands/rclone_copy/ url: /commands/rclone_copy/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone copyto" title: "rclone copyto"
slug: rclone_copyto slug: rclone_copyto
url: /commands/rclone_copyto/ url: /commands/rclone_copyto/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone copyurl" title: "rclone copyurl"
slug: rclone_copyurl slug: rclone_copyurl
url: /commands/rclone_copyurl/ url: /commands/rclone_copyurl/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone cryptcheck" title: "rclone cryptcheck"
slug: rclone_cryptcheck slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/ url: /commands/rclone_cryptcheck/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone cryptdecode" title: "rclone cryptdecode"
slug: rclone_cryptdecode slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/ url: /commands/rclone_cryptdecode/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone dbhashsum" title: "rclone dbhashsum"
slug: rclone_dbhashsum slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/ url: /commands/rclone_dbhashsum/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone dedupe" title: "rclone dedupe"
slug: rclone_dedupe slug: rclone_dedupe
url: /commands/rclone_dedupe/ url: /commands/rclone_dedupe/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone delete" title: "rclone delete"
slug: rclone_delete slug: rclone_delete
url: /commands/rclone_delete/ url: /commands/rclone_delete/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone deletefile" title: "rclone deletefile"
slug: rclone_deletefile slug: rclone_deletefile
url: /commands/rclone_deletefile/ url: /commands/rclone_deletefile/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone genautocomplete" title: "rclone genautocomplete"
slug: rclone_genautocomplete slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/ url: /commands/rclone_genautocomplete/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone genautocomplete bash" title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/ url: /commands/rclone_genautocomplete_bash/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone genautocomplete zsh" title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/ url: /commands/rclone_genautocomplete_zsh/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone gendocs" title: "rclone gendocs"
slug: rclone_gendocs slug: rclone_gendocs
url: /commands/rclone_gendocs/ url: /commands/rclone_gendocs/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone hashsum" title: "rclone hashsum"
slug: rclone_hashsum slug: rclone_hashsum
url: /commands/rclone_hashsum/ url: /commands/rclone_hashsum/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone link" title: "rclone link"
slug: rclone_link slug: rclone_link
url: /commands/rclone_link/ url: /commands/rclone_link/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone listremotes" title: "rclone listremotes"
slug: rclone_listremotes slug: rclone_listremotes
url: /commands/rclone_listremotes/ url: /commands/rclone_listremotes/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone ls" title: "rclone ls"
slug: rclone_ls slug: rclone_ls
url: /commands/rclone_ls/ url: /commands/rclone_ls/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone lsd" title: "rclone lsd"
slug: rclone_lsd slug: rclone_lsd
url: /commands/rclone_lsd/ url: /commands/rclone_lsd/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone lsf" title: "rclone lsf"
slug: rclone_lsf slug: rclone_lsf
url: /commands/rclone_lsf/ url: /commands/rclone_lsf/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone lsjson" title: "rclone lsjson"
slug: rclone_lsjson slug: rclone_lsjson
url: /commands/rclone_lsjson/ url: /commands/rclone_lsjson/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone lsl" title: "rclone lsl"
slug: rclone_lsl slug: rclone_lsl
url: /commands/rclone_lsl/ url: /commands/rclone_lsl/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone md5sum" title: "rclone md5sum"
slug: rclone_md5sum slug: rclone_md5sum
url: /commands/rclone_md5sum/ url: /commands/rclone_md5sum/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone mkdir" title: "rclone mkdir"
slug: rclone_mkdir slug: rclone_mkdir
url: /commands/rclone_mkdir/ url: /commands/rclone_mkdir/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone mount" title: "rclone mount"
slug: rclone_mount slug: rclone_mount
url: /commands/rclone_mount/ url: /commands/rclone_mount/
@@ -65,6 +65,28 @@ infrastructure](https://github.com/billziss-gh/winfsp/wiki/WinFsp-Service-Archit
which creates drives accessible for everyone on the system or which creates drives accessible for everyone on the system or
alternatively using [the nssm service manager](https://nssm.cc/usage). alternatively using [the nssm service manager](https://nssm.cc/usage).
#### Mount as a network drive
By default, rclone will mount the remote as a normal drive. However, you can also mount it as a **Network Drive**
(or **Network Share**, as mentioned in some places).
Unlike other systems, Windows provides a different filesystem type for network drives.
Windows and other programs treat network drives and fixed/removable drives differently:
for network drives, many I/O operations are optimized, as the higher latency and lower reliability
of a network (compared to a normal drive) are expected.
Although many people prefer network shares to be mounted as normal system drives, this might cause
some issues, such as programs not working as expected or freezes and errors while operating with the
mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares,
as Windows expects normal drives to be fast and reliable, while cloud storage is far from that.
See also the [Limitations](#limitations) section below for more information.
Add `--fuse-flag --VolumePrefix=\server\share` to your `mount` command, **replacing `share` with any other
name of your choice if you are mounting more than one remote**. Otherwise, the mountpoints will conflict and
your mounted filesystems will overlap.
[Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping)
### Limitations ### Limitations
Without the use of "--vfs-cache-mode" this can only write files Without the use of "--vfs-cache-mode" this can only write files

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone move" title: "rclone move"
slug: rclone_move slug: rclone_move
url: /commands/rclone_move/ url: /commands/rclone_move/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone moveto" title: "rclone moveto"
slug: rclone_moveto slug: rclone_moveto
url: /commands/rclone_moveto/ url: /commands/rclone_moveto/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone ncdu" title: "rclone ncdu"
slug: rclone_ncdu slug: rclone_ncdu
url: /commands/rclone_ncdu/ url: /commands/rclone_ncdu/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone obscure" title: "rclone obscure"
slug: rclone_obscure slug: rclone_obscure
url: /commands/rclone_obscure/ url: /commands/rclone_obscure/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone purge" title: "rclone purge"
slug: rclone_purge slug: rclone_purge
url: /commands/rclone_purge/ url: /commands/rclone_purge/

View File

@@ -1,5 +1,5 @@
--- ---
date: 2019-11-19T16:02:36Z date: 2019-10-26T11:04:03+01:00
title: "rclone rc" title: "rclone rc"
slug: rclone_rc slug: rclone_rc
url: /commands/rclone_rc/ url: /commands/rclone_rc/

Some files were not shown because too many files have changed in this diff.