mirror of https://github.com/rclone/rclone.git
synced 2026-01-21 11:53:17 +00:00

Compare commits: v1.59.1 ... v1.59-stab (5 commits)
| Author | SHA1 | Date |
|---|---|---|
| | ea73ac75ba | |
| | 50657752fd | |
| | 603efbfe76 | |
| | 831c79b11d | |
| | acec3dbf11 | |
MANUAL.html (generated, 28 lines changed)
@@ -19,7 +19,7 @@
 <header id="title-block-header">
 <h1 class="title">rclone(1) User Manual</h1>
 <p class="author">Nick Craig-Wood</p>
-<p class="date">Aug 08, 2022</p>
+<p class="date">Sep 15, 2022</p>
 </header>
 <h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
 <p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@@ -8379,7 +8379,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
       --use-json-log        Use json log format
       --use-mmap            Use mmap allocator (see docs)
       --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.1")
+      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.2")
   -v, --verbose count       Print lots more stuff (repeat for more)</code></pre>
 <h2 id="backend-flags">Backend Flags</h2>
 <p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@@ -27765,6 +27765,30 @@ $ tree /tmp/b
 <li>"error": return an error based on option value</li>
 </ul>
 <h1 id="changelog">Changelog</h1>
+<h2 id="v1.59.2---2022-09-15">v1.59.2 - 2022-09-15</h2>
+<p><a href="https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2">See commits</a></p>
+<ul>
+<li>Bug Fixes
+<ul>
+<li>config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)</li>
+</ul></li>
+<li>Local
+<ul>
+<li>Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)</li>
+</ul></li>
+<li>Azure Blob
+<ul>
+<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
+</ul></li>
+<li>B2
+<ul>
+<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
+</ul></li>
+<li>S3
+<ul>
+<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
+</ul></li>
+</ul>
 <h2 id="v1.59.1---2022-08-08">v1.59.1 - 2022-08-08</h2>
 <p><a href="https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1">See commits</a></p>
 <ul>
MANUAL.md (generated, 19 lines changed)
@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Aug 08, 2022
+% Sep 15, 2022
 
 # Rclone syncs your files to cloud storage
 
@@ -14342,7 +14342,7 @@ These flags are available for every command.
       --use-json-log        Use json log format
       --use-mmap            Use mmap allocator (see docs)
       --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.1")
+      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.2")
   -v, --verbose count       Print lots more stuff (repeat for more)
 ```
 
@@ -39420,6 +39420,21 @@ Options:
 
 # Changelog
 
+## v1.59.2 - 2022-09-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
+
+* Bug Fixes
+    * config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
+* Local
+    * Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)
+* Azure Blob
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* B2
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* S3
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
 ## v1.59.1 - 2022-08-08
 
 [See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
MANUAL.txt (generated, 24 lines changed)
@@ -1,6 +1,6 @@
 rclone(1) User Manual
 Nick Craig-Wood
-Aug 08, 2022
+Sep 15, 2022
 
 Rclone syncs your files to cloud storage
 
@@ -13893,7 +13893,7 @@ These flags are available for every command.
       --use-json-log        Use json log format
       --use-mmap            Use mmap allocator (see docs)
       --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.1")
+      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.2")
   -v, --verbose count       Print lots more stuff (repeat for more)
 
 Backend Flags
 
@@ -38931,6 +38931,26 @@ Options:
 
 Changelog
 
+v1.59.2 - 2022-09-15
+
+See commits
+
+-   Bug Fixes
+    -   config: Move locking to fix fatal error: concurrent map read and
+        map write (Nick Craig-Wood)
+-   Local
+    -   Disable xattr support if the filesystems indicates it is not
+        supported (Nick Craig-Wood)
+-   Azure Blob
+    -   Fix chunksize calculations producing too many parts (Nick
+        Craig-Wood)
+-   B2
+    -   Fix chunksize calculations producing too many parts (Nick
+        Craig-Wood)
+-   S3
+    -   Fix chunksize calculations producing too many parts (Nick
+        Craig-Wood)
+
 v1.59.1 - 2022-08-08
 
 See commits
@@ -1676,14 +1676,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		}
 	}
 
-	uploadParts := int64(maxUploadParts)
+	uploadParts := maxUploadParts
 	if uploadParts < 1 {
 		uploadParts = 1
 	} else if uploadParts > maxUploadParts {
 		uploadParts = maxUploadParts
 	}
 	// calculate size of parts/blocks
-	partSize := chunksize.Calculator(o, int(uploadParts), o.fs.opt.ChunkSize)
+	partSize := chunksize.Calculator(o, src.Size(), uploadParts, o.fs.opt.ChunkSize)
 
 	putBlobOptions := azblob.UploadStreamToBlockBlobOptions{
 		BufferSize: int(partSize),
@@ -97,7 +97,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
 	if size == -1 {
 		fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
 	} else {
-		chunkSize = chunksize.Calculator(src, maxParts, defaultChunkSize)
+		chunkSize = chunksize.Calculator(o, size, maxParts, defaultChunkSize)
 		parts = size / int64(chunkSize)
 		if size%int64(chunkSize) != 0 {
 			parts++
@@ -234,15 +234,16 @@ type Options struct {
 
 // Fs represents a local filesystem rooted at root
 type Fs struct {
-	name        string              // the name of the remote
-	root        string              // The root directory (OS path)
-	opt         Options             // parsed config options
-	features    *fs.Features        // optional features
-	dev         uint64              // device number of root node
-	precisionOk sync.Once           // Whether we need to read the precision
-	precision   time.Duration       // precision of local filesystem
-	warnedMu    sync.Mutex          // used for locking access to 'warned'.
-	warned      map[string]struct{} // whether we have warned about this string
+	name           string              // the name of the remote
+	root           string              // The root directory (OS path)
+	opt            Options             // parsed config options
+	features       *fs.Features        // optional features
+	dev            uint64              // device number of root node
+	precisionOk    sync.Once           // Whether we need to read the precision
+	precision      time.Duration       // precision of local filesystem
+	warnedMu       sync.Mutex          // used for locking access to 'warned'.
+	warned         map[string]struct{} // whether we have warned about this string
+	xattrSupported int32               // whether xattrs are supported (atomic access)
 
 	// do os.Lstat or os.Stat
 	lstat func(name string) (os.FileInfo, error)
@@ -286,6 +287,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		dev:   devUnset,
 		lstat: os.Lstat,
 	}
+	if xattrSupported {
+		f.xattrSupported = 1
+	}
 	f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc)
 	f.features = (&fs.Features{
 		CaseInsensitive: f.caseInsensitive(),
@@ -6,6 +6,8 @@ package local
 import (
 	"fmt"
 	"strings"
+	"sync/atomic"
+	"syscall"
 
 	"github.com/pkg/xattr"
 	"github.com/rclone/rclone/fs"
@@ -16,12 +18,30 @@ const (
 	xattrSupported = xattr.XATTR_SUPPORTED
 )
 
+// Check to see if the error supplied is a not supported error, and if
+// so, disable xattrs
+func (f *Fs) xattrIsNotSupported(err error) bool {
+	xattrErr, ok := err.(*xattr.Error)
+	if !ok {
+		return false
+	}
+	// Xattrs not supported can be ENOTSUP or ENOATTR or EINVAL (on Solaris)
+	if xattrErr.Err == syscall.EINVAL || xattrErr.Err == syscall.ENOTSUP || xattrErr.Err == xattr.ENOATTR {
+		// Show xattrs not supported
+		if atomic.CompareAndSwapInt32(&f.xattrSupported, 1, 0) {
+			fs.Errorf(f, "xattrs not supported - disabling: %v", err)
+		}
+		return true
+	}
+	return false
+}
+
 // getXattr returns the extended attributes for an object
 //
 // It doesn't return any attributes owned by this backend in
 // metadataKeys
 func (o *Object) getXattr() (metadata fs.Metadata, err error) {
-	if !xattrSupported {
+	if !xattrSupported || atomic.LoadInt32(&o.fs.xattrSupported) == 0 {
 		return nil, nil
 	}
 	var list []string
@@ -31,6 +51,9 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
 		list, err = xattr.LList(o.path)
 	}
 	if err != nil {
+		if o.fs.xattrIsNotSupported(err) {
+			return nil, nil
+		}
 		return nil, fmt.Errorf("failed to read xattr: %w", err)
 	}
 	if len(list) == 0 {
@@ -45,6 +68,9 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
 		v, err = xattr.LGet(o.path, k)
 	}
 	if err != nil {
+		if o.fs.xattrIsNotSupported(err) {
+			return nil, nil
+		}
 		return nil, fmt.Errorf("failed to read xattr key %q: %w", k, err)
 	}
 	k = strings.ToLower(k)
@@ -64,7 +90,7 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
 //
 // It doesn't set any attributes owned by this backend in metadataKeys
 func (o *Object) setXattr(metadata fs.Metadata) (err error) {
-	if !xattrSupported {
+	if !xattrSupported || atomic.LoadInt32(&o.fs.xattrSupported) == 0 {
 		return nil
 	}
 	for k, value := range metadata {
@@ -80,6 +106,9 @@ func (o *Object) setXattr(metadata fs.Metadata) (err error) {
 		err = xattr.LSet(o.path, k, v)
 	}
 	if err != nil {
+		if o.fs.xattrIsNotSupported(err) {
+			return nil
+		}
 		return fmt.Errorf("failed to set xattr key %q: %w", k, err)
 	}
 }
@@ -2076,7 +2076,7 @@ type Options struct {
 	UploadCutoff          fs.SizeSuffix `config:"upload_cutoff"`
 	CopyCutoff            fs.SizeSuffix `config:"copy_cutoff"`
 	ChunkSize             fs.SizeSuffix `config:"chunk_size"`
-	MaxUploadParts        int64         `config:"max_upload_parts"`
+	MaxUploadParts        int           `config:"max_upload_parts"`
 	DisableChecksum       bool          `config:"disable_checksum"`
 	SharedCredentialsFile string        `config:"shared_credentials_file"`
 	Profile               string        `config:"profile"`
@@ -4108,10 +4108,10 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
 	if size == -1 {
 		warnStreamUpload.Do(func() {
 			fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
-				f.opt.ChunkSize, fs.SizeSuffix(int64(partSize)*uploadParts))
+				f.opt.ChunkSize, fs.SizeSuffix(int64(partSize)*int64(uploadParts)))
 		})
 	} else {
-		partSize = chunksize.Calculator(o, int(uploadParts), f.opt.ChunkSize)
+		partSize = chunksize.Calculator(o, size, uploadParts, f.opt.ChunkSize)
 	}
 
 	memPool := f.getMemoryPool(int64(partSize))
@@ -5,6 +5,21 @@ description: "Rclone Changelog"
 
 # Changelog
 
+## v1.59.2 - 2022-09-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
+
+* Bug Fixes
+    * config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
+* Local
+    * Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)
+* Azure Blob
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* B2
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+* S3
+    * Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+
 ## v1.59.1 - 2022-08-08
 
 [See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
@@ -160,7 +160,7 @@ These flags are available for every command.
       --use-json-log        Use json log format
       --use-mmap            Use mmap allocator (see docs)
       --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.1")
+      --user-agent string   Set the user-agent to a specified string (default "rclone/v1.59.2")
   -v, --verbose count       Print lots more stuff (repeat for more)
 ```
@@ -1 +1 @@
-v1.59.1
+v1.59.2
@@ -5,18 +5,26 @@ import (
 	"github.com/rclone/rclone/fs"
 )
 
-/*
- Calculator calculates the minimum chunk size needed to fit within the maximum number of parts, rounded up to the nearest fs.Mebi
-
- For most backends, (chunk_size) * (concurrent_upload_routines) memory will be required so we want to use the smallest
- possible chunk size that's going to allow the upload to proceed. Rounding up to the nearest fs.Mebi on the assumption
- that some backends may only allow integer type parameters when specifying the chunk size.
-
- Returns the default chunk size if it is sufficiently large enough to support the given file size otherwise returns the
- smallest chunk size necessary to allow the upload to proceed.
-*/
-func Calculator(objInfo fs.ObjectInfo, maxParts int, defaultChunkSize fs.SizeSuffix) fs.SizeSuffix {
-	fileSize := fs.SizeSuffix(objInfo.Size())
+// Calculator calculates the minimum chunk size needed to fit within
+// the maximum number of parts, rounded up to the nearest fs.Mebi.
+//
+// For most backends, (chunk_size) * (concurrent_upload_routines)
+// memory will be required so we want to use the smallest possible
+// chunk size that's going to allow the upload to proceed. Rounding up
+// to the nearest fs.Mebi on the assumption that some backends may
+// only allow integer type parameters when specifying the chunk size.
+//
+// Returns the default chunk size if it is sufficiently large enough
+// to support the given file size otherwise returns the smallest chunk
+// size necessary to allow the upload to proceed.
+func Calculator(o interface{}, size int64, maxParts int, defaultChunkSize fs.SizeSuffix) fs.SizeSuffix {
+	// If streaming then use default chunk size
+	if size < 0 {
+		fs.Debugf(o, "Streaming upload with chunk_size %s allows uploads of up to %s and will fail only when that limit is reached.", defaultChunkSize, fs.SizeSuffix(maxParts)*defaultChunkSize)
+		return defaultChunkSize
+	}
+	fileSize := fs.SizeSuffix(size)
 	requiredChunks := fileSize / defaultChunkSize
 	if requiredChunks < fs.SizeSuffix(maxParts) || (requiredChunks == fs.SizeSuffix(maxParts) && fileSize%defaultChunkSize == 0) {
 		return defaultChunkSize
@@ -31,6 +39,6 @@ func Calculator(o interface{}, size int64, maxParts int, defaultChunkSize fs.Siz
 		minChunk += fs.Mebi
 	}
 
-	fs.Debugf(objInfo, "size: %v, parts: %v, default: %v, new: %v; default chunk size insufficient, returned new chunk size", fileSize, maxParts, defaultChunkSize, minChunk)
+	fs.Debugf(o, "size: %v, parts: %v, default: %v, new: %v; default chunk size insufficient, returned new chunk size", fileSize, maxParts, defaultChunkSize, minChunk)
 	return minChunk
 }
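The rewritten Calculator now takes the size explicitly instead of reading it from an ObjectInfo, which is what lets the backends pass `size == -1` for streaming uploads. A standalone sketch of the same arithmetic with plain `int64` instead of `fs.SizeSuffix` (names are mine, not rclone's): keep the default chunk size while it fits, otherwise use ceiling division by `maxParts` and round up to a whole MiB.

```go
package main

import "fmt"

const mebi = int64(1 << 20)

// calculator returns the smallest chunk size (rounded up to a whole
// MiB) that keeps an upload of `size` bytes within `maxParts` parts,
// or the default chunk size when that already suffices.
func calculator(size int64, maxParts int, defaultChunkSize int64) int64 {
	// Streaming uploads (unknown size) keep the default chunk size.
	if size < 0 {
		return defaultChunkSize
	}
	requiredChunks := size / defaultChunkSize
	if requiredChunks < int64(maxParts) || (requiredChunks == int64(maxParts) && size%defaultChunkSize == 0) {
		return defaultChunkSize
	}
	// Ceiling division: the smallest chunk that fits in maxParts parts.
	minChunk := size / int64(maxParts)
	if size%int64(maxParts) != 0 {
		minChunk++
	}
	// Round up to a whole MiB, since some backends only accept
	// integer MiB chunk sizes.
	if r := minChunk % mebi; r != 0 {
		minChunk += mebi - r
	}
	return minChunk
}

func main() {
	// The "issue from forum #1" case from the new test table:
	// 120864818840 bytes with at most 10000 parts needs 12 MiB chunks.
	fmt.Println(calculator(120864818840, 10000, 5*mebi) / mebi) // 12
}
```

Smaller chunks mean less memory per concurrent upload routine, which is why the code searches for the minimum rather than simply doubling the chunk size until it fits.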
@@ -2,34 +2,100 @@ package chunksize
 
 import (
 	"testing"
-	"time"
 
 	"github.com/rclone/rclone/fs"
-	"github.com/rclone/rclone/fs/object"
 )
 
 func TestComputeChunkSize(t *testing.T) {
-	tests := map[string]struct {
-		fileSize         fs.SizeSuffix
-		maxParts         int
-		defaultChunkSize fs.SizeSuffix
-		expected         fs.SizeSuffix
-	}{
-		"default size returned when file size is small enough":             {fileSize: 1000, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(10), expected: toSizeSuffixMiB(10)},
-		"default size returned when file size is just 1 byte small enough": {fileSize: toSizeSuffixMiB(100000) - 1, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(10), expected: toSizeSuffixMiB(10)},
-		"no rounding up when everything divides evenly":                    {fileSize: toSizeSuffixMiB(1000000), maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(100)},
-		"rounding up to nearest MiB when not quite enough parts":           {fileSize: toSizeSuffixMiB(1000000), maxParts: 9999, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(101)},
-		"rounding up to nearest MiB when one extra byte":                   {fileSize: toSizeSuffixMiB(1000000) + 1, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(101)},
-		"expected MiB value when rounding sets to absolute minimum":        {fileSize: toSizeSuffixMiB(1) - 1, maxParts: 1, defaultChunkSize: toSizeSuffixMiB(1), expected: toSizeSuffixMiB(1)},
-		"expected MiB value when rounding to absolute min with extra":      {fileSize: toSizeSuffixMiB(1) + 1, maxParts: 1, defaultChunkSize: toSizeSuffixMiB(1), expected: toSizeSuffixMiB(2)},
-	}
-
-	for name, tc := range tests {
-		t.Run(name, func(t *testing.T) {
-			src := object.NewStaticObjectInfo("mock", time.Now(), int64(tc.fileSize), true, nil, nil)
-			result := Calculator(src, tc.maxParts, tc.defaultChunkSize)
-			if result != tc.expected {
-				t.Fatalf("expected: %v, got: %v", tc.expected, result)
-			}
-		})
-	}
+	for _, test := range []struct {
+		name             string
+		size             fs.SizeSuffix
+		maxParts         int
+		defaultChunkSize fs.SizeSuffix
+		want             fs.SizeSuffix
+	}{
+		{
+			name:             "streaming file",
+			size:             -1,
+			maxParts:         10000,
+			defaultChunkSize: toSizeSuffixMiB(10),
+			want:             toSizeSuffixMiB(10),
+		}, {
+			name:             "default size returned when file size is small enough",
+			size:             1000,
+			maxParts:         10000,
+			defaultChunkSize: toSizeSuffixMiB(10),
+			want:             toSizeSuffixMiB(10),
+		}, {
+			name:             "default size returned when file size is just 1 byte small enough",
+			size:             toSizeSuffixMiB(100000) - 1,
+			maxParts:         10000,
+			defaultChunkSize: toSizeSuffixMiB(10),
+			want:             toSizeSuffixMiB(10),
+		}, {
+			name:             "no rounding up when everything divides evenly",
+			size:             toSizeSuffixMiB(1000000),
+			maxParts:         10000,
+			defaultChunkSize: toSizeSuffixMiB(100),
+			want:             toSizeSuffixMiB(100),
+		}, {
+			name:             "rounding up to nearest MiB when not quite enough parts",
+			size:             toSizeSuffixMiB(1000000),
+			maxParts:         9999,
+			defaultChunkSize: toSizeSuffixMiB(100),
+			want:             toSizeSuffixMiB(101),
+		}, {
+			name:             "rounding up to nearest MiB when one extra byte",
+			size:             toSizeSuffixMiB(1000000) + 1,
+			maxParts:         10000,
+			defaultChunkSize: toSizeSuffixMiB(100),
+			want:             toSizeSuffixMiB(101),
+		}, {
+			name:             "expected MiB value when rounding sets to absolute minimum",
+			size:             toSizeSuffixMiB(1) - 1,
+			maxParts:         1,
+			defaultChunkSize: toSizeSuffixMiB(1),
+			want:             toSizeSuffixMiB(1),
+		}, {
+			name:             "expected MiB value when rounding to absolute min with extra",
+			size:             toSizeSuffixMiB(1) + 1,
+			maxParts:         1,
+			defaultChunkSize: toSizeSuffixMiB(1),
+			want:             toSizeSuffixMiB(2),
+		}, {
+			name:             "issue from forum #1",
+			size:             120864818840,
+			maxParts:         10000,
+			defaultChunkSize: 5 * 1024 * 1024,
+			want:             toSizeSuffixMiB(12),
+		},
+	} {
+		t.Run(test.name, func(t *testing.T) {
+			got := Calculator(test.name, int64(test.size), test.maxParts, test.defaultChunkSize)
+			if got != test.want {
+				t.Fatalf("expected: %v, got: %v", test.want, got)
+			}
+			if test.size < 0 {
+				return
+			}
+			parts := func(result fs.SizeSuffix) int {
+				n := test.size / result
+				r := test.size % result
+				if r != 0 {
+					n++
+				}
+				return int(n)
+			}
+			// Check this gives the parts in range
+			if parts(got) > test.maxParts {
+				t.Fatalf("too many parts %d", parts(got))
+			}
+			// Check that setting chunk size smaller gave too many parts
+			if got > test.defaultChunkSize {
+				if parts(got-toSizeSuffixMiB(1)) <= test.maxParts {
+					t.Fatalf("chunk size %v too big as %v only gives %d parts", got, got-toSizeSuffixMiB(1), parts(got-toSizeSuffixMiB(1)))
+				}
+			}
+		})
+	}
@@ -24,16 +24,15 @@ func Install() {
 // Storage implements config.Storage for saving and loading config
 // data in a simple INI based file.
 type Storage struct {
-	gc *goconfig.ConfigFile // config file loaded - thread safe
+	mu sync.Mutex           // to protect the following variables
+	gc *goconfig.ConfigFile // config file loaded - not thread safe
 	fi os.FileInfo          // stat of the file when last loaded
 }
 
 // Check to see if we need to reload the config
-func (s *Storage) check() {
-	s.mu.Lock()
-	defer s.mu.Unlock()
-
+//
+// mu must be held when calling this
+func (s *Storage) _check() {
 	if configPath := config.GetConfigPath(); configPath != "" {
 		// Check to see if config file has changed since it was last loaded
 		fi, err := os.Stat(configPath)
@@ -174,7 +173,10 @@ func (s *Storage) Save() error {
 
 // Serialize the config into a string
 func (s *Storage) Serialize() (string, error) {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	var buf bytes.Buffer
 	if err := goconfig.SaveConfigData(s.gc, &buf); err != nil {
 		return "", fmt.Errorf("failed to save config file: %w", err)
@@ -185,7 +187,10 @@ func (s *Storage) Serialize() (string, error) {
 
 // HasSection returns true if section exists in the config file
 func (s *Storage) HasSection(section string) bool {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	_, err := s.gc.GetSection(section)
 	return err == nil
 }
@@ -193,26 +198,38 @@ func (s *Storage) HasSection(section string) bool {
 // DeleteSection removes the named section and all config from the
 // config file
 func (s *Storage) DeleteSection(section string) {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	s.gc.DeleteSection(section)
 }
 
 // GetSectionList returns a slice of strings with names for all the
 // sections
 func (s *Storage) GetSectionList() []string {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	return s.gc.GetSectionList()
 }
 
 // GetKeyList returns the keys in this section
 func (s *Storage) GetKeyList(section string) []string {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	return s.gc.GetKeyList(section)
 }
 
 // GetValue returns the key in section with a found flag
 func (s *Storage) GetValue(section string, key string) (value string, found bool) {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	value, err := s.gc.GetValue(section, key)
 	if err != nil {
 		return "", false
@@ -222,7 +239,10 @@ func (s *Storage) GetValue(section string, key string) (value string, found bool
 
 // SetValue sets the value under key in section
 func (s *Storage) SetValue(section string, key string, value string) {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	if strings.HasPrefix(section, ":") {
 		fs.Logf(nil, "Can't save config %q for on the fly backend %q", key, section)
 		return
@@ -232,7 +252,10 @@ func (s *Storage) SetValue(section string, key string, value string) {
 
 // DeleteKey removes the key under section
 func (s *Storage) DeleteKey(section string, key string) bool {
-	s.check()
+	s.mu.Lock()
+	defer s.mu.Unlock()
+
+	s._check()
 	return s.gc.DeleteKey(section, key)
 }
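The config fix moves the locking outward: previously only `check()` took the mutex, so after it returned the public methods touched `s.gc` unlocked and concurrent callers could trigger Go's fatal "concurrent map read and map write". Now every public method holds `mu` for its whole body and calls an unexported `_check()` that assumes the lock is held (the underscore naming marks that precondition, and it matters because `sync.Mutex` is not reentrant). A minimal sketch of the convention with a hypothetical `store` type, not rclone's `Storage`:

```go
package main

import (
	"fmt"
	"sync"
)

// store follows the patched convention: mu protects everything below it.
type store struct {
	mu   sync.Mutex        // to protect the following variables
	data map[string]string // not thread safe on its own
}

// _check lazily initialises data. The leading underscore flags that mu
// must already be held by the caller.
func (s *store) _check() {
	if s.data == nil {
		s.data = make(map[string]string)
	}
}

// Get locks once for its whole body, then calls the unexported helper.
// A public method must never call another public method while holding
// mu, since sync.Mutex is not reentrant.
func (s *store) Get(key string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s._check()
	v, ok := s.data[key]
	return v, ok
}

// Set holds the lock across the map write; without it, concurrent
// Get/Set would trip the very fatal error this release fixes.
func (s *store) Set(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()

	s._check()
	s.data[key] = value
}

var cfg = &store{}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cfg.Set("user_agent", "rclone/v1.59.2")
			cfg.Get("user_agent")
		}()
	}
	wg.Wait()
	v, _ := cfg.Get("user_agent")
	fmt.Println(v)
}
```

Running a program like this under `go run -race` is the usual way to confirm the public methods no longer race on the shared map.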
@@ -1,4 +1,4 @@
 package fs
 
 // VersionTag of rclone
-var VersionTag = "v1.59.1"
+var VersionTag = "v1.59.2"
rclone.1 (generated, 39 lines changed)
@@ -1,7 +1,7 @@
 .\"t
 .\" Automatically generated by Pandoc 2.9.2.1
 .\"
-.TH "rclone" "1" "Aug 08, 2022" "User Manual" ""
+.TH "rclone" "1" "Sep 15, 2022" "User Manual" ""
 .hy
 .SH Rclone syncs your files to cloud storage
 .PP
@@ -19713,7 +19713,7 @@ These flags are available for every command.
       --use-json-log        Use json log format
       --use-mmap            Use mmap allocator (see docs)
       --use-server-modtime  Use server modified time instead of object metadata
-      --user-agent string   Set the user-agent to a specified string (default \[dq]rclone/v1.59.1\[dq])
+      --user-agent string   Set the user-agent to a specified string (default \[dq]rclone/v1.59.2\[dq])
   -v, --verbose count       Print lots more stuff (repeat for more)
 \f[R]
 .fi
@@ -53981,6 +53981,41 @@ Options:
 .IP \[bu] 2
 \[dq]error\[dq]: return an error based on option value
 .SH Changelog
+.SS v1.59.2 - 2022-09-15
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+config: Move locking to fix fatal error: concurrent map read and map
+write (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Disable xattr support if the filesystems indicates it is not supported
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix chunksize calculations producing too many parts (Nick Craig-Wood)
+.RE
 .SS v1.59.1 - 2022-08-08
 .PP
 See commits (https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)