Mirror of https://github.com/rclone/rclone.git, synced 2025-12-06 00:03:32 +00:00

Compare commits: fix-vfs-em ... fix-7103-n (163 commits)
Commit SHA1s:

705572d400 00512e1303 fcfbd3153b 9a8075b682 996037bee9 e90537b2e9
42c211c6b2 3d4f127b33 ff966b37af 3b6effa81a 8308d5d640 14024936a8
9065e921c1 99788b605e d4cc3760e6 a6acbd1844 389565f5e2 4b4198522d
f7665300c0 73beae147f 92f8e476b7 5849148d51 37853ec412 ae7ff28714
9873f4bc74 1b200bf69a e3fa6fe3cc 9e1b3861e7 e9a753f678 708391a5bf
1cfed18aa7 7751d5a00b 8274712c2c 625a564ba3 2dd2072cdb 998d1d1727
fcb912a664 5f938fb9ed 72b79504ea 3e2a606adb 95a6e3e338 d06bb55f3f
9f3694cea3 2c50f26c36 22d6c8d30d 96fb75c5a7 acd67edf9a b26db8e640
da955e5d4f 4f8dab8bce 000ddc4951 3faa84b47c e1162ec440 30cccc7101
1f5a29209e 45255bccb3 055206c4ee f3070b82bc 7e2deffc62 ae3ff50580
6486ba6344 7842000f8a 1f9c962183 279d9ecc56 31773ecfbf 666e34cf69
5a84a08b3f 51a468b2ba fc798d800c 3115ede1d8 7a5491ba7b a6cf4989b6
f489b54fa0 6244d1729b e97c2a2832 56bf9b4a10 ceb9406c2f 1f887f7ba0
7db26b6b34 37a3309438 97be9015a4 487e4f09b3 09a408664d 43fa256d56
6859c04772 38a0539096 2cd85813b4 e6e6069ecf fcf47a8393 46a323ae14
72be80ddca a9e7e7bcc2 925c4382e2 08c60c3091 5c594fea90 cc01223535
aaacfa51a0 c18c66f167 d6667d34e7 e649cf4d50 f080ec437c 4023eaebe0
baf16a65f0 70fe2ac852 41cf7faea4 f226f2dfb1 31caa019fa 0468375054
6001f05a12 f7b87a8049 d379641021 84281c9089 8e2dc069d2 61d6f538b3
65b2e378e0 dea6bdf3df 27eb8c7f45 1607344613 5f138dd822 2520c05c4b
f7f5e87632 a7e6806f26 d0eb884262 ae6874170f f5bab284c3 c75dfa6436
56eb82bdfc 066e00b470 e0c445d36e 74652bf318 b6a95c70e9 aca7d0fd22
12761b3058 3567a47258 6b670bd439 335ca6d572 c4a9e480c9 232d304c13
44ac79e357 0487e465ee bb6cfe109d 864eb89a67 4471e6f258 e82db0b7d5
72e624c5e4 6092fa57c3 3e15a594b7 db8c007983 5836da14c2 8ed07d11a0
1f2ee44c20 32798dca25 075f98551f 963ab220f6 281a007b1a 589b7b4873
04d2781fda 5b95fd9588 a42643101e bcca67efd5 7771aaacf6 fda06fc17d
2faa4758e4
.github/FUNDING.yml (vendored, 4 lines changed)

@@ -1,4 +0,0 @@
-github: [ncw]
-patreon: njcw
-liberapay: ncw
-custom: ["https://rclone.org/donate/"]
.github/dependabot.yml (vendored, 16 lines changed)

@@ -1,10 +1,6 @@
-version: 2
-updates:
-  - package-ecosystem: "github-actions"
-    directory: "/"
-    schedule:
-      interval: "daily"
-  - package-ecosystem: "gomod"
-    directory: "/"
-    schedule:
-      interval: "daily"
+version: 2
+updates:
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    schedule:
+      interval: "daily"
@@ -17,6 +17,7 @@ Current active maintainers of rclone are:
 | Fred | @creativeprojects | seafile backend |
 | Caleb Case | @calebcase | storj backend |
+| wiserain | @wiserain | pikpak backend |
 | albertony | @albertony | |

 **This is a work in progress Draft**
MANUAL.html (generated, 3146 lines changed): file diff suppressed because it is too large

MANUAL.txt (generated, 3307 lines changed): file diff suppressed because it is too large
Makefile (2 lines changed)

@@ -96,7 +96,7 @@ build_dep:

 # Get the release dependencies we only install on linux
 release_dep_linux:
-	go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz'
+	go install github.com/goreleaser/nfpm/v2/cmd/nfpm@latest

 # Get the release dependencies we only install on Windows
 release_dep_windows:
@@ -25,12 +25,12 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
 * Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
 * Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
 * Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
+* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
 * Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
 * Box [:page_facing_up:](https://rclone.org/box/)
 * Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
 * China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
 * Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
-* Arvan Cloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
 * Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
 * DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
 * Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
@@ -61,12 +61,14 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
 * Minio [:page_facing_up:](https://rclone.org/s3/#minio)
 * Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
 * OVH [:page_facing_up:](https://rclone.org/swift/)
+* Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
 * OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
 * OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
 * Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
 * Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
 * ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
 * pCloud [:page_facing_up:](https://rclone.org/pcloud/)
+* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
 * PikPak [:page_facing_up:](https://rclone.org/pikpak/)
 * premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
 * put.io [:page_facing_up:](https://rclone.org/putio/)
@@ -58,6 +58,8 @@ const (
 	decayConstant         = 1    // bigger for slower decay, exponential
 	maxListChunkSize      = 5000 // number of items to read at once
 	modTimeKey            = "mtime"
+	dirMetaKey            = "hdi_isfolder"
+	dirMetaValue          = "true"
 	timeFormatIn          = time.RFC3339
 	timeFormatOut         = "2006-01-02T15:04:05.000000000Z07:00"
 	storageDefaultBaseURL = "blob.core.windows.net"
@@ -363,6 +365,18 @@ This option controls how often unused buffers will be removed from the pool.`,
 			},
 		},
 		Advanced: true,
+	}, {
+		Name:     "directory_markers",
+		Default:  false,
+		Advanced: true,
+		Help: `Upload an empty object with a trailing slash when a new directory is created
+
+Empty folders are unsupported for bucket based remotes, this option
+creates an empty object ending with "/", to persist the folder.
+
+This object also has the metadata "` + dirMetaKey + ` = ` + dirMetaValue + `" to conform to
+the Microsoft standard.
+`,
 	}, {
 		Name: "no_check_container",
 		Help: `If set, don't attempt to check the container exists or create it.
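The new option's help text describes the directory-marker convention: a zero-length object whose name ends in "/" and which carries `hdi_isfolder = true` metadata. Here is a minimal, self-contained sketch of that convention; the store type and helper names are hypothetical, not rclone or Azure SDK code, only the marker shape mirrors the diff:

```go
// Sketch: a zero-length "marker" object standing in for an empty
// directory in a flat blob namespace.
package main

import (
	"fmt"
	"strings"
)

type blob struct {
	size     int64
	metadata map[string]string
}

// makeDirMarker returns the object a backend would upload for dir.
func makeDirMarker(dir string) (name string, b blob) {
	name = strings.TrimRight(dir, "/") + "/"
	return name, blob{size: 0, metadata: map[string]string{"hdi_isfolder": "true"}}
}

// isDirMarker reports whether a listed object is such a marker.
func isDirMarker(name string, b blob) bool {
	return b.size == 0 && strings.HasSuffix(name, "/") &&
		strings.EqualFold(b.metadata["hdi_isfolder"], "true")
}

func main() {
	store := map[string]blob{}
	name, marker := makeDirMarker("photos/2023")
	store[name] = marker
	for n, b := range store {
		fmt.Printf("%s dir=%v\n", n, isDirMarker(n, b))
	}
}
```

Listing code can then treat such objects as directories instead of skipping them, which is what the hunks below do.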
@@ -412,6 +426,7 @@ type Options struct {
 	MemoryPoolUseMmap bool                 `config:"memory_pool_use_mmap"`
 	Enc               encoder.MultiEncoder `config:"encoding"`
 	PublicAccess      string               `config:"public_access"`
+	DirectoryMarkers  bool                 `config:"directory_markers"`
 	NoCheckContainer  bool                 `config:"no_check_container"`
 	NoHeadObject      bool                 `config:"no_head_object"`
 }
@@ -486,7 +501,7 @@ func parsePath(path string) (root string) {
 // split returns container and containerPath from the rootRelativePath
 // relative to f.root
 func (f *Fs) split(rootRelativePath string) (containerName, containerPath string) {
-	containerName, containerPath = bucket.Split(path.Join(f.root, rootRelativePath))
+	containerName, containerPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
 	return f.opt.Enc.FromStandardName(containerName), f.opt.Enc.FromStandardPath(containerPath)
 }
@@ -664,6 +679,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		SetTier: true,
 		GetTier: true,
 	}).Fill(ctx, f)
+	if opt.DirectoryMarkers {
+		f.features.CanHaveEmptyDirectories = true
+		fs.Debugf(f, "Using directory markers")
+	}

 	// Client options specifying our own transport
 	policyClientOptions := policy.ClientOptions{
@@ -906,7 +925,7 @@ func (f *Fs) cntSVC(containerName string) (containerClient *container.Client) {
 // Return an Object from a path
 //
 // If it can't be found it returns the error fs.ErrorObjectNotFound.
-func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Object, error) {
+func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *container.BlobItem) (fs.Object, error) {
 	o := &Object{
 		fs:     f,
 		remote: remote,
@@ -917,7 +936,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Obje
 			return nil, err
 		}
 	} else if !o.fs.opt.NoHeadObject {
-		err := o.readMetaData() // reads info and headers, returning an error
+		err := o.readMetaData(ctx) // reads info and headers, returning an error
 		if err != nil {
 			return nil, err
 		}
@@ -928,7 +947,7 @@ func (f *Fs) newObjectWithInfo(remote string, info *container.BlobItem) (fs.Obje
 // NewObject finds the Object at remote. If it can't be found
 // it returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
-	return f.newObjectWithInfo(remote, nil)
+	return f.newObjectWithInfo(ctx, remote, nil)
 }

 // getBlobSVC creates a blob client
@@ -964,31 +983,7 @@ func isDirectoryMarker(size int64, metadata map[string]*string, remote string) b
 	// defacto standard for marking blobs as directories.
 	// Note also that the metadata hasn't been normalised to lower case yet
 	for k, v := range metadata {
-		if v != nil && strings.EqualFold(k, "hdi_isfolder") && *v == "true" {
-			return true
-		}
-	}
-	return false
-}
-
-// Returns whether file is a directory marker or not using metadata
-// with pointers to strings as the SDK seems to use both forms rather
-// annoyingly.
-//
-// NB This is a duplicate of isDirectoryMarker
-func isDirectoryMarkerP(size int64, metadata map[string]*string, remote string) bool {
-	// Directory markers are 0 length
-	if size == 0 {
-		endsWithSlash := strings.HasSuffix(remote, "/")
-		if endsWithSlash || remote == "" {
-			return true
-		}
-		// Note that metadata with hdi_isfolder = true seems to be a
-		// defacto standard for marking blobs as directories.
-		// Note also that the metadata hasn't been normalised to lower case yet
-		for k, pv := range metadata {
-			if strings.EqualFold(k, "hdi_isfolder") && pv != nil && *pv == "true" {
+		if v != nil && strings.EqualFold(k, dirMetaKey) && *v == dirMetaValue {
 			return true
 		}
 	}
@@ -1033,6 +1028,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
 		Prefix:     &directory,
 		MaxResults: &maxResults,
 	})
+	foundItems := 0
 	for pager.More() {
 		var response container.ListBlobsHierarchyResponse
 		err := f.pacer.Call(func() (bool, error) {
@@ -1051,6 +1047,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
 		}
 		// Advance marker to next
 		// marker = response.NextMarker
+		foundItems += len(response.Segment.BlobItems)
 		for i := range response.Segment.BlobItems {
 			file := response.Segment.BlobItems[i]
 			// Finish if file name no longer has prefix
@@ -1066,20 +1063,27 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
 				fs.Debugf(f, "Odd name received %q", remote)
 				continue
 			}
-			remote = remote[len(prefix):]
-			if isDirectoryMarkerP(*file.Properties.ContentLength, file.Metadata, remote) {
-				continue // skip directory marker
+			isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
+			if isDirectory {
+				// Don't insert the root directory
+				if remote == directory {
+					continue
+				}
+				// process directory markers as directories
+				remote = strings.TrimRight(remote, "/")
 			}
+			remote = remote[len(prefix):]
 			if addContainer {
 				remote = path.Join(containerName, remote)
 			}
 			// Send object
-			err = fn(remote, file, false)
+			err = fn(remote, file, isDirectory)
 			if err != nil {
 				return err
 			}
 		}
 		// Send the subdirectories
+		foundItems += len(response.Segment.BlobPrefixes)
 		for _, remote := range response.Segment.BlobPrefixes {
 			if remote.Name == nil {
 				fs.Debugf(f, "Nil prefix received")
@@ -1102,16 +1106,26 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
 			}
 		}
 	}
+	if f.opt.DirectoryMarkers && foundItems == 0 && directory != "" {
+		// Determine whether the directory exists or not by whether it has a marker
+		_, err := f.readMetaData(ctx, containerName, directory)
+		if err != nil {
+			if err == fs.ErrorObjectNotFound {
+				return fs.ErrorDirNotFound
+			}
+			return err
+		}
+	}
 	return nil
 }

 // Convert a list item into a DirEntry
-func (f *Fs) itemToDirEntry(remote string, object *container.BlobItem, isDirectory bool) (fs.DirEntry, error) {
+func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *container.BlobItem, isDirectory bool) (fs.DirEntry, error) {
 	if isDirectory {
 		d := fs.NewDir(remote, time.Time{})
 		return d, nil
 	}
-	o, err := f.newObjectWithInfo(remote, object)
+	o, err := f.newObjectWithInfo(ctx, remote, object)
 	if err != nil {
 		return nil, err
 	}
@@ -1139,7 +1153,7 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
 		return nil, fs.ErrorDirNotFound
 	}
 	err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
-		entry, err := f.itemToDirEntry(remote, object, isDirectory)
+		entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
 		if err != nil {
 			return err
 		}
@@ -1220,7 +1234,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
 	list := walk.NewListRHelper(callback)
 	listR := func(containerName, directory, prefix string, addContainer bool) error {
 		return f.list(ctx, containerName, directory, prefix, addContainer, true, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
			entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
-			entry, err := f.itemToDirEntry(remote, object, isDirectory)
+			entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
 			if err != nil {
 				return err
 			}
@@ -1314,10 +1328,71 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
 	return f.Put(ctx, in, src, options...)
 }

+// Create directory marker file and parents
+func (f *Fs) createDirectoryMarker(ctx context.Context, container, dir string) error {
+	if !f.opt.DirectoryMarkers || container == "" {
+		return nil
+	}
+
+	// Object to be uploaded
+	o := &Object{
+		fs:      f,
+		modTime: time.Now(),
+		meta: map[string]string{
+			dirMetaKey: dirMetaValue,
+		},
+	}
+
+	for {
+		_, containerPath := f.split(dir)
+		// Don't create the directory marker if it is the bucket or at the very root
+		if containerPath == "" {
+			break
+		}
+		o.remote = dir + "/"
+
+		// Check to see if object already exists
+		_, err := f.readMetaData(ctx, container, containerPath+"/")
+		if err == nil {
+			return nil
+		}
+
+		// Upload it if not
+		fs.Debugf(o, "Creating directory marker")
+		content := io.Reader(strings.NewReader(""))
+		err = o.Update(ctx, content, o)
+		if err != nil {
+			return fmt.Errorf("creating directory marker failed: %w", err)
+		}
+
+		// Now check parent directory exists
+		dir = path.Dir(dir)
+		if dir == "/" || dir == "." {
+			break
+		}
+	}
+
+	return nil
+}
+
 // Mkdir creates the container if it doesn't exist
 func (f *Fs) Mkdir(ctx context.Context, dir string) error {
 	container, _ := f.split(dir)
-	return f.makeContainer(ctx, container)
+	e := f.makeContainer(ctx, container)
+	if e != nil {
+		return e
+	}
+	return f.createDirectoryMarker(ctx, container, dir)
+}
+
+// mkdirParent creates the parent bucket/directory if it doesn't exist
+func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
+	remote = strings.TrimRight(remote, "/")
+	dir := path.Dir(remote)
+	if dir == "/" || dir == "." {
+		dir = ""
+	}
+	return f.Mkdir(ctx, dir)
 }

 // makeContainer creates the container if it doesn't exist
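The Mkdir/mkdirParent pair above walks up the path one level at a time. A minimal stand-alone sketch of the parent-derivation rule (not rclone code, just the same stdlib logic): trim any trailing "/", take path.Dir, and map "/" or "." to the root, represented as an empty string.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// parentDir mirrors the rule in mkdirParent above.
func parentDir(remote string) string {
	remote = strings.TrimRight(remote, "/")
	dir := path.Dir(remote)
	if dir == "/" || dir == "." {
		dir = "" // root of the remote
	}
	return dir
}

func main() {
	for _, r := range []string{"a/b/c", "a/b/", "a", "/", ""} {
		fmt.Printf("%-8q -> %q\n", r, parentDir(r))
	}
}
```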
@@ -1417,6 +1492,18 @@ func (f *Fs) deleteContainer(ctx context.Context, containerName string) error {
 // Returns an error if it isn't empty
 func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 	container, directory := f.split(dir)
+	// Remove directory marker file
+	if f.opt.DirectoryMarkers && container != "" && dir != "" {
+		o := &Object{
+			fs:     f,
+			remote: dir + "/",
+		}
+		fs.Debugf(o, "Removing directory marker")
+		err := o.Remove(ctx)
+		if err != nil {
+			return fmt.Errorf("removing directory marker failed: %w", err)
+		}
+	}
 	if container == "" || directory != "" {
 		return nil
 	}
@@ -1458,7 +1545,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
 // If it isn't possible then return fs.ErrorCantCopy
 func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
 	dstContainer, dstPath := f.split(remote)
-	err := f.makeContainer(ctx, dstContainer)
+	err := f.mkdirParent(ctx, remote)
 	if err != nil {
 		return nil, err
 	}
@@ -1582,6 +1669,7 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
 	}
 	metadata = make(map[string]*string, len(o.meta))
 	for k, v := range o.meta {
+		v := v
 		metadata[k] = &v
 	}
 	return metadata
@@ -1694,7 +1782,7 @@ func (o *Object) decodeMetaDataFromBlob(info *container.BlobItem) (err error) {
 	} else {
 		size = *info.Properties.ContentLength
 	}
-	if isDirectoryMarkerP(size, metadata, o.remote) {
+	if isDirectoryMarker(size, metadata, o.remote) {
 		return fs.ErrorNotAFile
 	}
 	// NOTE - Client library always returns MD5 as base64 decoded string, Object needs to maintain
@@ -1732,6 +1820,29 @@ func (o *Object) clearMetaData() {
 	o.modTime = time.Time{}
 }

+// readMetaData gets the metadata if it hasn't already been fetched
+func (f *Fs) readMetaData(ctx context.Context, container, containerPath string) (blobProperties blob.GetPropertiesResponse, err error) {
+	if !f.containerOK(container) {
+		return blobProperties, fs.ErrorObjectNotFound
+	}
+	blb := f.getBlobSVC(container, containerPath)
+
+	// Read metadata (this includes metadata)
+	options := blob.GetPropertiesOptions{}
+	err = f.pacer.Call(func() (bool, error) {
+		blobProperties, err = blb.GetProperties(ctx, &options)
+		return f.shouldRetry(ctx, err)
+	})
+	if err != nil {
+		// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
+		if storageErr, ok := err.(*azcore.ResponseError); ok && (storageErr.ErrorCode == string(bloberror.BlobNotFound) || storageErr.StatusCode == http.StatusNotFound) {
+			return blobProperties, fs.ErrorObjectNotFound
+		}
+		return blobProperties, err
+	}
+	return blobProperties, nil
+}
+
 // readMetaData gets the metadata if it hasn't already been fetched
 //
 // Sets
@@ -1740,33 +1851,15 @@ func (o *Object) clearMetaData() {
 // o.modTime
 // o.size
 // o.md5
-func (o *Object) readMetaData() (err error) {
-	container, _ := o.split()
-	if !o.fs.containerOK(container) {
-		return fs.ErrorObjectNotFound
-	}
+func (o *Object) readMetaData(ctx context.Context) (err error) {
 	if !o.modTime.IsZero() {
 		return nil
 	}
-	blb := o.getBlobSVC()
-	// fs.Debugf(o, "Blob URL = %q", blb.URL())
-
-	// Read metadata (this includes metadata)
-	options := blob.GetPropertiesOptions{}
-	ctx := context.Background()
-	var blobProperties blob.GetPropertiesResponse
-	err = o.fs.pacer.Call(func() (bool, error) {
-		blobProperties, err = blb.GetProperties(ctx, &options)
-		return o.fs.shouldRetry(ctx, err)
-	})
+	container, containerPath := o.split()
+	blobProperties, err := o.fs.readMetaData(ctx, container, containerPath)
 	if err != nil {
-		// On directories - GetProperties does not work and current SDK does not populate service code correctly hence check regular http response as well
-		if storageErr, ok := err.(*azcore.ResponseError); ok && (storageErr.ErrorCode == string(bloberror.BlobNotFound) || storageErr.StatusCode == http.StatusNotFound) {
-			return fs.ErrorObjectNotFound
-		}
 		return err
 	}

 	return o.decodeMetaDataFromPropertiesResponse(&blobProperties)
 }
@@ -1776,7 +1869,7 @@ func (o *Object) readMetaData() (err error) {
 // LastModified returned in the http headers
 func (o *Object) ModTime(ctx context.Context) (result time.Time) {
 	// The error is logged in readMetaData
-	_ = o.readMetaData()
+	_ = o.readMetaData(ctx)
 	return o.modTime
 }
@@ -2122,12 +2215,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	if container == "" || containerPath == "" {
 		return fmt.Errorf("can't upload to root - need a container")
 	}
-	err = o.fs.makeContainer(ctx, container)
-	if err != nil {
-		return err
+	// Create parent dir/bucket if not saving directory marker
+	_, isDirMarker := o.meta[dirMetaKey]
+	if !isDirMarker {
+		err = o.fs.mkdirParent(ctx, o.remote)
+		if err != nil {
+			return err
+		}
 	}

 	// Update Mod time
+	fs.Debugf(nil, "o.meta = %+v", o.meta)
 	o.updateMetadataWithModTime(src.ModTime(ctx))
 	if err != nil {
 		return err
@@ -2175,6 +2273,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	size := src.Size()
 	multipartUpload := size < 0 || size > o.fs.poolSize

+	fs.Debugf(nil, "o.meta = %+v", o.meta)
 	if multipartUpload {
 		err = o.uploadMultipart(ctx, in, size, blb, &httpHeaders)
 	} else {
@@ -2185,10 +2284,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}

 	// Refresh metadata on object
-	o.clearMetaData()
-	err = o.readMetaData()
-	if err != nil {
-		return err
+	if !isDirMarker {
+		o.clearMetaData()
+		err = o.readMetaData(ctx)
+		if err != nil {
+			return err
+		}
 	}

 	// If tier is not changed or not specified, do not attempt to invoke `SetBlobTier` operation
@@ -9,6 +9,7 @@ import (
 	"testing"

 	"github.com/rclone/rclone/fs"
+	"github.com/rclone/rclone/fstest"
 	"github.com/rclone/rclone/fstest/fstests"
 	"github.com/stretchr/testify/assert"
 )
@@ -25,6 +26,25 @@ func TestIntegration(t *testing.T) {
 	})
 }

+// TestIntegration2 runs integration tests against the remote
+func TestIntegration2(t *testing.T) {
+	if *fstest.RemoteName != "" {
+		t.Skip("Skipping as -remote set")
+	}
+	name := "TestAzureBlob:"
+	fstests.Run(t, &fstests.Opt{
+		RemoteName:  name,
+		NilObject:   (*Object)(nil),
+		TiersToTest: []string{"Hot", "Cool"},
+		ChunkedUpload: fstests.ChunkedUploadConfig{
+			MinChunkSize: defaultChunkSize,
+		},
+		ExtraConfig: []fstests.ExtraConfigItem{
+			{Name: name, Key: "directory_markers", Value: "true"},
+		},
+	})
+}
+
 func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
 	return f.setUploadChunkSize(cs)
 }
@@ -233,6 +233,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 		ReadMetadata:  true,
 		WriteMetadata: true,
 		UserMetadata:  true,
+		PartialUploads: true,
 	}).Fill(ctx, f)
 	canMove := true
 	for _, u := range f.upstreams {
@@ -289,6 +290,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 		}
 	}

+	// Enable CleanUp when any upstreams support it
+	if features.CleanUp == nil {
+		for _, u := range f.upstreams {
+			if u.f.Features().CleanUp != nil {
+				features.CleanUp = f.CleanUp
+				break
+			}
+		}
+	}
+
 	// Enable ChangeNotify when any upstreams support it
 	if features.ChangeNotify == nil {
 		for _, u := range f.upstreams {
@@ -299,6 +310,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
 		}
 	}

+	// show that we wrap other backends
+	features.Overlay = true
+
 	f.features = features

 	// Get common intersection of hashes
@@ -887,6 +901,100 @@ func (f *Fs) Shutdown(ctx context.Context) error {
 	})
 }

+// PublicLink generates a public link to the remote path (usually readable by anyone)
+func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
+	u, uRemote, err := f.findUpstream(remote)
+	if err != nil {
+		return "", err
+	}
+	do := u.f.Features().PublicLink
+	if do == nil {
+		return "", fs.ErrorNotImplemented
+	}
+	return do(ctx, uRemote, expire, unlink)
+}
+
+// Put in to the remote path with the modTime given of the given size
+//
+// May create the object even if it returns an error - if so
+// will return the object and the error, otherwise will return
+// nil and the error
+//
+// May create duplicates or return errors if src already
+// exists.
+func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
+	srcPath := src.Remote()
+	u, uRemote, err := f.findUpstream(srcPath)
+	if err != nil {
+		return nil, err
+	}
+	do := u.f.Features().PutUnchecked
+	if do == nil {
+		return nil, fs.ErrorNotImplemented
+	}
+	uSrc := fs.NewOverrideRemote(src, uRemote)
+	return do(ctx, in, uSrc, options...)
+}
+
+// MergeDirs merges the contents of all the directories passed
+// in into the first one and rmdirs the other directories.
+func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
+	if len(dirs) == 0 {
+		return nil
+	}
+	var (
+		u     *upstream
+		uDirs []fs.Directory
+	)
+	for _, dir := range dirs {
+		uNew, uDir, err := f.findUpstream(dir.Remote())
+		if err != nil {
+			return err
+		}
+		if u == nil {
+			u = uNew
+		} else if u != uNew {
+			return fmt.Errorf("can't merge directories from different upstreams")
+		}
+		uDirs = append(uDirs, fs.NewOverrideDirectory(dir, uDir))
+	}
+	do := u.f.Features().MergeDirs
+	if do == nil {
+		return fs.ErrorNotImplemented
+	}
+	return do(ctx, uDirs)
+}
+
+// CleanUp the trash in the Fs
+//
+// Implement this if you have a way of emptying the trash or
+// otherwise cleaning up old versions of files.
+func (f *Fs) CleanUp(ctx context.Context) error {
+	return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
+		if do := u.f.Features().CleanUp; do != nil {
+			return do(ctx)
+		}
+		return nil
+	})
+}
+
+// OpenWriterAt opens with a handle for random access writes
+//
+// Pass in the remote desired and the size if known.
+//
+// It truncates any existing object
+func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
+	u, uRemote, err := f.findUpstream(remote)
+	if err != nil {
+		return nil, err
+	}
+	do := u.f.Features().OpenWriterAt
+	if do == nil {
+		return nil, fs.ErrorNotImplemented
+	}
+	return do(ctx, uRemote, size)
+}
+
 // Object describes a wrapped Object
 //
 // This is a wrapped Object which knows its path prefix
@@ -916,7 +1024,7 @@ func (o *Object) String() string {
 func (o *Object) Remote() string {
 	newPath, err := o.u.pathAdjustment.do(o.Object.String())
 	if err != nil {
-		fs.Errorf(o, "Bad object: %v", err)
+		fs.Errorf(o.Object, "Bad object: %v", err)
 		return err.Error()
 	}
 	return newPath
@@ -988,5 +1096,10 @@ var (
 	_ fs.Abouter        = (*Fs)(nil)
 	_ fs.ListRer        = (*Fs)(nil)
 	_ fs.Shutdowner     = (*Fs)(nil)
+	_ fs.PublicLinker   = (*Fs)(nil)
+	_ fs.PutUncheckeder = (*Fs)(nil)
+	_ fs.MergeDirser    = (*Fs)(nil)
+	_ fs.CleanUpper     = (*Fs)(nil)
+	_ fs.OpenWriterAter = (*Fs)(nil)
 	_ fs.FullObject     = (*Object)(nil)
 )
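The combine changes above follow one pattern throughout: a wrapper advertises an optional capability only if at least one wrapped backend implements it, then delegates per call. A self-contained sketch of that pattern under hypothetical types (not rclone's feature structs, and the fan-out is sequential rather than rclone's multithreaded helper):

```go
package main

import (
	"errors"
	"fmt"
)

var errNotImplemented = errors.New("optional feature not implemented")

// CleanUpFunc is the optional capability; nil means "unsupported".
type CleanUpFunc func() error

type upstream struct {
	name    string
	cleanUp CleanUpFunc
}

type combined struct {
	upstreams []upstream
	cleanUp   CleanUpFunc // set only when some upstream supports it
}

func newCombined(ups []upstream) *combined {
	c := &combined{upstreams: ups}
	for _, u := range ups {
		if u.cleanUp != nil {
			// Advertise the feature; fan out to every upstream that has it.
			c.cleanUp = func() error {
				for _, u := range c.upstreams {
					if u.cleanUp != nil {
						if err := u.cleanUp(); err != nil {
							return err
						}
					}
				}
				return nil
			}
			break
		}
	}
	return c
}

func (c *combined) CleanUp() error {
	if c.cleanUp == nil {
		return errNotImplemented
	}
	return c.cleanUp()
}

func main() {
	c := newCombined([]upstream{
		{name: "a"},
		{name: "b", cleanUp: func() error { fmt.Println("cleaning b"); return nil }},
	})
	fmt.Println(c.CleanUp())
}
```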
@@ -10,6 +10,11 @@ import (
 	"github.com/rclone/rclone/fstest/fstests"
 )

+var (
+	unimplementableFsMethods     = []string{"UnWrap", "WrapFs", "SetWrapper", "UserInfo", "Disconnect"}
+	unimplementableObjectMethods = []string{}
+)
+
 // TestIntegration runs integration tests against the remote
 func TestIntegration(t *testing.T) {
 	if *fstest.RemoteName == "" {
@@ -17,8 +22,8 @@ func TestIntegration(t *testing.T) {
 	}
 	fstests.Run(t, &fstests.Opt{
 		RemoteName:                   *fstest.RemoteName,
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 	})
 }
@@ -35,7 +40,9 @@ func TestLocal(t *testing.T) {
 			{Name: name, Key: "type", Value: "combine"},
 			{Name: name, Key: "upstreams", Value: upstreams},
 		},
-		QuickTestOK: true,
+		QuickTestOK:                  true,
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 	})
 }
@@ -51,7 +58,9 @@ func TestMemory(t *testing.T) {
 			{Name: name, Key: "type", Value: "combine"},
 			{Name: name, Key: "upstreams", Value: upstreams},
 		},
-		QuickTestOK: true,
+		QuickTestOK:                  true,
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 	})
 }
@@ -68,6 +77,8 @@ func TestMixed(t *testing.T) {
 			{Name: name, Key: "type", Value: "combine"},
 			{Name: name, Key: "upstreams", Value: upstreams},
 		},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 	})
 }
@@ -186,6 +186,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
 		ReadMetadata:  true,
 		WriteMetadata: true,
 		UserMetadata:  true,
+		PartialUploads: true,
 	}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
 	// We support reading MIME types no matter the wrapped fs
 	f.features.ReadMimeType = true
@@ -38,7 +38,6 @@ const (
 	blockHeaderSize = secretbox.Overhead
 	blockDataSize   = 64 * 1024
 	blockSize       = blockHeaderSize + blockDataSize
-	encryptedSuffix = ".bin" // when file name encryption is off we add this suffix to make sure the cloud provider doesn't process the file
 )

 // Errors returned by cipher
@@ -54,8 +53,9 @@ var (
 	ErrorEncryptedBadBlock  = errors.New("failed to authenticate decrypted block - bad password?")
 	ErrorBadBase32Encoding  = errors.New("bad base32 filename encoding")
 	ErrorFileClosed         = errors.New("file already closed")
-	ErrorNotAnEncryptedFile = errors.New("not an encrypted file - no \"" + encryptedSuffix + "\" suffix")
+	ErrorNotAnEncryptedFile = errors.New("not an encrypted file - does not match suffix")
 	ErrorBadSeek            = errors.New("Seek beyond end of file")
+	ErrorSuffixMissingDot   = errors.New("suffix config setting should include a '.'")
 	defaultSalt             = []byte{0xA8, 0x0D, 0xF4, 0x3A, 0x8F, 0xBD, 0x03, 0x08, 0xA7, 0xCA, 0xB8, 0x3E, 0x58, 0x1F, 0x86, 0xB1}
 	obfuscQuoteRune         = '!'
 )
@@ -170,25 +170,27 @@ func NewNameEncoding(s string) (enc fileNameEncoding, err error) {

 // Cipher defines an encoding and decoding cipher for the crypt backend
 type Cipher struct {
-	dataKey        [32]byte                  // Key for secretbox
-	nameKey        [32]byte                  // 16,24 or 32 bytes
-	nameTweak      [nameCipherBlockSize]byte // used to tweak the name crypto
-	block          gocipher.Block
-	mode           NameEncryptionMode
-	fileNameEnc    fileNameEncoding
-	buffers        sync.Pool // encrypt/decrypt buffers
-	cryptoRand     io.Reader // read crypto random numbers from here
-	dirNameEncrypt bool
-	passBadBlocks  bool // if set passed bad blocks as zeroed blocks
+	dataKey         [32]byte                  // Key for secretbox
+	nameKey         [32]byte                  // 16,24 or 32 bytes
+	nameTweak       [nameCipherBlockSize]byte // used to tweak the name crypto
+	block           gocipher.Block
+	mode            NameEncryptionMode
+	fileNameEnc     fileNameEncoding
+	buffers         sync.Pool // encrypt/decrypt buffers
+	cryptoRand      io.Reader // read crypto random numbers from here
+	dirNameEncrypt  bool
+	passBadBlocks   bool // if set passed bad blocks as zeroed blocks
+	encryptedSuffix string
 }

 // newCipher initialises the cipher. If salt is "" then it uses a built in salt val
 func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool, enc fileNameEncoding) (*Cipher, error) {
 	c := &Cipher{
-		mode:           mode,
-		fileNameEnc:    enc,
-		cryptoRand:     rand.Reader,
-		dirNameEncrypt: dirNameEncrypt,
+		mode:            mode,
+		fileNameEnc:     enc,
+		cryptoRand:      rand.Reader,
+		dirNameEncrypt:  dirNameEncrypt,
+		encryptedSuffix: ".bin",
 	}
 	c.buffers.New = func() interface{} {
 		return new([blockSize]byte)
@@ -200,6 +202,19 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
 	return c, nil
 }

+// setEncryptedSuffix set suffix, or an empty string
+func (c *Cipher) setEncryptedSuffix(suffix string) {
+	if strings.EqualFold(suffix, "none") {
+		c.encryptedSuffix = ""
+		return
+	}
+	if !strings.HasPrefix(suffix, ".") {
+		fs.Errorf(nil, "crypt: bad suffix: %v", ErrorSuffixMissingDot)
+		suffix = "." + suffix
+	}
+	c.encryptedSuffix = suffix
+}
+
 // Call to set bad block pass through
 func (c *Cipher) setPassBadBlocks(passBadBlocks bool) {
 	c.passBadBlocks = passBadBlocks
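In "off" name-encryption mode the cipher only appends the configurable suffix on encrypt and strips it on decrypt, as the hunks below show. A stand-alone sketch of just those rules ("none" disables the suffix, a missing leading dot is added); the suffixer type is hypothetical, not the real Cipher:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errNotEncrypted = errors.New("not an encrypted file - does not match suffix")

type suffixer struct{ suffix string }

func newSuffixer(config string) *suffixer {
	if strings.EqualFold(config, "none") {
		return &suffixer{""}
	}
	if !strings.HasPrefix(config, ".") {
		config = "." + config // the real code also logs ErrorSuffixMissingDot
	}
	return &suffixer{config}
}

func (s *suffixer) encryptName(in string) string { return in + s.suffix }

func (s *suffixer) decryptName(in string) (string, error) {
	remaining := len(in) - len(s.suffix)
	if remaining == 0 || !strings.HasSuffix(in, s.suffix) {
		return "", errNotEncrypted
	}
	return in[:remaining], nil
}

func main() {
	s := newSuffixer(".jpg")
	fmt.Println(s.encryptName("1/12/123")) // 1/12/123.jpg
	fmt.Println(s.decryptName("1/12/123.jpg"))
	fmt.Println(s.decryptName("1/12/123.bin")) // error: wrong suffix
}
```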
@@ -512,7 +527,7 @@ func (c *Cipher) encryptFileName(in string) string {
 // EncryptFileName encrypts a file path
 func (c *Cipher) EncryptFileName(in string) string {
 	if c.mode == NameEncryptionOff {
-		return in + encryptedSuffix
+		return in + c.encryptedSuffix
 	}
 	return c.encryptFileName(in)
 }
@@ -572,8 +587,8 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
 // DecryptFileName decrypts a file path
 func (c *Cipher) DecryptFileName(in string) (string, error) {
 	if c.mode == NameEncryptionOff {
-		remainingLength := len(in) - len(encryptedSuffix)
-		if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) {
+		remainingLength := len(in) - len(c.encryptedSuffix)
+		if remainingLength == 0 || !strings.HasSuffix(in, c.encryptedSuffix) {
 			return "", ErrorNotAnEncryptedFile
 		}
 		decrypted := in[:remainingLength]
@@ -789,7 +804,7 @@ func (c *Cipher) newDecrypter(rc io.ReadCloser) (*decrypter, error) {
 	if n < fileHeaderSize && err == io.EOF {
 		// This read from 0..fileHeaderSize-1 bytes
 		return nil, fh.finishAndClose(ErrorEncryptedFileTooShort)
-	} else if err != nil {
+	} else if err != io.EOF && err != nil {
 		return nil, fh.finishAndClose(err)
 	}
 	// check the magic
@@ -405,6 +405,13 @@ func TestNonStandardEncryptFileName(t *testing.T) {
 	// Off mode
 	c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
 	assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
+	// Off mode with custom suffix
+	c, _ = newCipher(NameEncryptionOff, "", "", true, nil)
+	c.setEncryptedSuffix(".jpg")
+	assert.Equal(t, "1/12/123.jpg", c.EncryptFileName("1/12/123"))
+	// Off mode with empty suffix
+	c.setEncryptedSuffix("none")
+	assert.Equal(t, "1/12/123", c.EncryptFileName("1/12/123"))
 	// Obfuscation mode
 	c, _ = newCipher(NameEncryptionObfuscated, "", "", true, nil)
 	assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
@@ -483,21 +490,27 @@ func TestNonStandardDecryptFileName(t *testing.T) {
 		in           string
 		expected     string
 		expectedErr  error
+		customSuffix string
 	}{
-		{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
-		{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
-		{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
-		{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
-		{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
-		{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
-		{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
-		{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
-		{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
-		{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
-		{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
-		{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
+		{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil, ""},
+		{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile, ""},
+		{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile, ""},
+		{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil, ""},
+		{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil, ""},
+		{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil, ""},
+		{NameEncryptionOff, true, "1/12/123.jpg", "1/12/123", nil, ".jpg"},
+		{NameEncryptionOff, true, "1/12/123", "1/12/123", nil, "none"},
+		{NameEncryptionObfuscated, true, "!.hello", "hello", nil, ""},
+		{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile, ""},
+		{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil, ""},
+		{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil, ""},
+		{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil, ""},
+		{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil, ""},
 	} {
 		c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, enc)
+		if test.customSuffix != "" {
+			c.setEncryptedSuffix(test.customSuffix)
+		}
 		actual, actualErr := c.DecryptFileName(test.in)
 		what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
 		assert.Equal(t, test.expected, actual, what)
@@ -48,7 +48,7 @@ func init() {
 			Help:  "Very simple filename obfuscation.",
 		}, {
 			Value: "off",
-			Help:  "Don't encrypt the file names.\nAdds a \".bin\" extension only.",
+			Help:  "Don't encrypt the file names.\nAdds a \".bin\", or \"suffix\" extension only.",
 		},
 	},
 }, {
@@ -79,7 +79,9 @@ NB If filename_encryption is "off" then this option will do nothing.`,
 }, {
 	Name:    "server_side_across_configs",
 	Default: false,
-	Help: `Allow server-side operations (e.g. copy) to work across different crypt configs.
+	Help: `Deprecated: use --server-side-across-configs instead.
+
+Allow server-side operations (e.g. copy) to work across different crypt configs.

 Normally this option is not what you want, but if you have two crypts
 pointing to the same backend you can use it.
@@ -124,7 +126,7 @@ names, or for debugging purposes.`,
 	Help: `If set this will pass bad blocks through as all 0.

 This should not be set in normal operation, it should only be set if
-trying to recover a crypted file with errors and it is desired to
+trying to recover an encrypted file with errors and it is desired to
 recover as much of the file as possible.`,
 	Default:  false,
 	Advanced: true,
@@ -151,6 +153,14 @@ length and if it's case sensitive.`,
 		},
 	},
 	Advanced: true,
+}, {
+	Name: "suffix",
+	Help: `If this is set it will override the default suffix of ".bin".
+
+Setting suffix to "none" will result in an empty suffix. This may be useful
+when the path length is critical.`,
+	Default:  ".bin",
+	Advanced: true,
 }},
@@ -183,6 +193,7 @@ func newCipherForConfig(opt *Options) (*Cipher, error) {
 	if err != nil {
 		return nil, fmt.Errorf("failed to make cipher: %w", err)
 	}
+	cipher.setEncryptedSuffix(opt.Suffix)
 	cipher.setPassBadBlocks(opt.PassBadBlocks)
 	return cipher, nil
 }
@@ -257,6 +268,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
 	ReadMetadata:  true,
 	WriteMetadata: true,
 	UserMetadata:  true,
+	PartialUploads: true,
 }).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)

 	return f, err
@@ -274,6 +286,7 @@ type Options struct {
 	ShowMapping      bool   `config:"show_mapping"`
 	PassBadBlocks    bool   `config:"pass_bad_blocks"`
 	FilenameEncoding string `config:"filename_encoding"`
+	Suffix           string `config:"suffix"`
 }

 // Fs represents a wrapped fs.Fs
@@ -499,7 +499,9 @@ need to use --ignore size also.`,
 }, {
 	Name:    "server_side_across_configs",
 	Default: false,
-	Help: `Allow server-side operations (e.g. copy) to work across different drive configs.
+	Help: `Deprecated: use --server-side-across-configs instead.
+
+Allow server-side operations (e.g. copy) to work across different drive configs.

 This can be useful if you wish to do a server-side copy between two
 different Google drives. Note that this isn't enabled by default
@@ -1512,6 +1514,9 @@ func (f *Fs) newObjectWithExportInfo(
 // NewObject finds the Object at remote. If it can't be found
 // it returns the error fs.ErrorObjectNotFound.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
+	if strings.HasSuffix(remote, "/") {
+		return nil, fs.ErrorIsDir
+	}
 	info, extension, exportName, exportMimeType, isDocument, err := f.getRemoteInfoWithExport(ctx, remote)
 	if err != nil {
 		return nil, err
@@ -3881,7 +3886,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	if err != nil {
 		return err
 	}
-	newO, err := o.fs.newObjectWithInfo(ctx, src.Remote(), info)
+	newO, err := o.fs.newObjectWithInfo(ctx, o.remote, info)
 	if err != nil {
 		return err
 	}
@@ -144,7 +144,7 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
 	// If commit fails then signal clients if sync
 	var signalled = b.async
 	defer func() {
-		if err != nil && signalled {
+		if err != nil && !signalled {
 			// Signal to clients that there was an error
 			for _, result := range results {
 				result <- batcherResponse{err: err}
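The one-character fix above inverts the condition: waiting clients must be told about a failed commit precisely when they have not been signalled yet. A self-contained sketch of that control flow under hypothetical types (only the defer/flag pattern mirrors the diff):

```go
package main

import (
	"errors"
	"fmt"
)

type response struct{ err error }

func commitBatch(async bool, results []chan response, commit func() error) (err error) {
	// Async clients were already answered when they enqueued, so treat
	// them as signalled; sync clients still wait on their channel.
	signalled := async
	defer func() {
		if err != nil && !signalled {
			// Tell every waiting client the commit failed.
			for _, result := range results {
				result <- response{err: err}
			}
		}
	}()
	if err = commit(); err != nil {
		return err
	}
	// Success: answer the sync clients normally.
	for _, result := range results {
		result <- response{}
	}
	signalled = true
	return nil
}

func main() {
	results := []chan response{make(chan response, 1)}
	err := commitBatch(false, results, func() error { return errors.New("commit failed") })
	fmt.Println("commit:", err, "client got:", (<-results[0]).err)
}
```

With the old condition (`signalled` instead of `!signalled`), a failed sync commit would leave every waiting client blocked forever.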
@@ -58,7 +58,7 @@ import (
 const (
 	rcloneClientID              = "5jcck7diasz0rqy"
 	rcloneEncryptedClientSecret = "fRS5vVLr2v6FbyXYnIgjwBuUAt0osq_QZTXAEcmZ7g"
-	minSleep                    = 10 * time.Millisecond
+	defaultMinSleep             = fs.Duration(10 * time.Millisecond)
 	maxSleep                    = 2 * time.Second
 	decayConstant               = 2 // bigger for slower decay, exponential
 	// Upload chunk size - setting too small makes uploads slow.
@@ -260,8 +260,8 @@ uploaded.
 The default for this is 0 which means rclone will choose a sensible
 default based on the batch_mode in use.

-- batch_mode: async - default batch_timeout is 500ms
-- batch_mode: sync - default batch_timeout is 10s
+- batch_mode: async - default batch_timeout is 10s
+- batch_mode: sync - default batch_timeout is 500ms
 - batch_mode: off - not in use
 `,
 	Default: fs.Duration(0),
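The documentation fix above swaps the two defaults: async batching gets the long timeout, sync the short one. A stand-alone sketch of how such a mode-dependent default could be computed (not rclone's actual code, just the corrected values):

```go
package main

import (
	"fmt"
	"time"
)

func defaultBatchTimeout(mode string, configured time.Duration) time.Duration {
	if configured != 0 {
		return configured // an explicit setting wins
	}
	switch mode {
	case "async":
		return 10 * time.Second
	case "sync":
		return 500 * time.Millisecond
	default: // "off" - batching not in use
		return 0
	}
}

func main() {
	fmt.Println(defaultBatchTimeout("async", 0)) // 10s
	fmt.Println(defaultBatchTimeout("sync", 0))  // 500ms
	fmt.Println(defaultBatchTimeout("sync", 2*time.Second))
}
```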
@@ -271,6 +271,11 @@ default based on the batch_mode in use.
 	Help:     `Max time to wait for a batch to finish committing`,
 	Default:  fs.Duration(10 * time.Minute),
 	Advanced: true,
+}, {
+	Name:     "pacer_min_sleep",
+	Default:  defaultMinSleep,
+	Help:     "Minimum time to sleep between API calls.",
+	Advanced: true,
 }, {
 	Name:     config.ConfigEncoding,
 	Help:     config.ConfigEncodingHelp,
@@ -299,6 +304,7 @@ type Options struct {
 	BatchTimeout       fs.Duration          `config:"batch_timeout"`
 	BatchCommitTimeout fs.Duration          `config:"batch_commit_timeout"`
 	AsyncBatch         bool                 `config:"async_batch"`
+	PacerMinSleep      fs.Duration          `config:"pacer_min_sleep"`
 	Enc                encoder.MultiEncoder `config:"encoding"`
 }
@@ -442,7 +448,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		name:  name,
 		opt:   *opt,
 		ci:    ci,
-		pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
+		pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(opt.PacerMinSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
 	}
 	f.batcher, err = newBatcher(ctx, f, f.opt.BatchMode, f.opt.BatchSize, time.Duration(f.opt.BatchTimeout))
 	if err != nil {
@@ -719,7 +725,7 @@ func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err
 	}
 	for _, entry := range res.Entries {
 		leaf := f.opt.Enc.ToStandardName(entry.Name)
-		d := fs.NewDir(leaf, time.Now()).SetID(entry.SharedFolderId)
+		d := fs.NewDir(leaf, time.Time{}).SetID(entry.SharedFolderId)
 		entries = append(entries, d)
 		if err != nil {
 			return nil, err
@@ -906,7 +912,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
 	leaf := f.opt.Enc.ToStandardName(path.Base(entryPath))
 	remote := path.Join(dir, leaf)
 	if folderInfo != nil {
-		d := fs.NewDir(remote, time.Now()).SetID(folderInfo.Id)
+		d := fs.NewDir(remote, time.Time{}).SetID(folderInfo.Id)
 		entries = append(entries, d)
 	} else if fileInfo != nil {
 		o, err := f.newObjectWithInfo(ctx, remote, fileInfo)
@@ -118,6 +118,9 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
 		Single: 1,
 		Pass:   f.opt.FilePassword,
 	}
+	if f.opt.CDN {
+		request.CDN = 1
+	}
 	opts := rest.Opts{
 		Method: "POST",
 		Path:   "/download/get_token.cgi",
@@ -54,6 +54,11 @@ func init() {
 	Name:       "folder_password",
 	Advanced:   true,
 	IsPassword: true,
+}, {
+	Help:     "Set if you wish to use CDN download links.",
+	Name:     "cdn",
+	Default:  false,
+	Advanced: true,
 }, {
 	Name:     config.ConfigEncoding,
 	Help:     config.ConfigEncodingHelp,
@@ -89,6 +94,7 @@ type Options struct {
 	SharedFolder   string               `config:"shared_folder"`
 	FilePassword   string               `config:"file_password"`
 	FolderPassword string               `config:"folder_password"`
+	CDN            bool                 `config:"cdn"`
 	Enc            encoder.MultiEncoder `config:"encoding"`
 }
@@ -20,6 +20,7 @@ type DownloadRequest struct {
 	URL    string `json:"url"`
 	Single int    `json:"single"`
 	Pass   string `json:"pass,omitempty"`
+	CDN    int    `json:"cdn,omitempty"`
 }

 // RemoveFolderRequest is the request structure of the corresponding request
@@ -15,7 +15,7 @@ import (
 	"sync"
 	"time"

-	"github.com/jlaffaye/ftp"
+	"github.com/rclone/ftp"
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/fs/accounting"
 	"github.com/rclone/rclone/fs/config"
@@ -580,6 +580,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
 	}
 	f.features = (&fs.Features{
 		CanHaveEmptyDirectories: true,
+		PartialUploads:          true,
 	}).Fill(ctx, f)
 	// set the pool drainer timer going
 	if f.opt.IdleTimeout > 0 {
@@ -692,6 +693,12 @@ func (f *Fs) findItem(ctx context.Context, remote string) (entry *ftp.Entry, err
 		if err == fs.ErrorObjectNotFound {
 			return nil, nil
 		}
+		if errX := textprotoError(err); errX != nil {
+			switch errX.Code {
+			case ftp.StatusBadArguments:
+				err = nil
+			}
+		}
 		return nil, err
 	}
 	if entry != nil {
@@ -1098,7 +1105,7 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
 // SetModTime sets the modification time of the object
 func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
 	if !o.fs.fSetTime {
-		fs.Errorf(o.fs, "SetModTime is not supported")
+		fs.Debugf(o.fs, "SetModTime is not supported")
 		return nil
 	}
 	c, err := o.fs.getFtpConnection(ctx)
@@ -301,6 +301,15 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
 		Value: "DURABLE_REDUCED_AVAILABILITY",
 		Help:  "Durable reduced availability storage class",
 	}},
+}, {
+	Name:     "directory_markers",
+	Default:  false,
+	Advanced: true,
+	Help: `Upload an empty object with a trailing slash when a new directory is created
+
+Empty folders are unsupported for bucket based remotes, this option creates an empty
+object ending with "/", to persist the folder.
+`,
 }, {
 	Name: "no_check_bucket",
 	Help: `If set, don't attempt to check the bucket exists or create it.
@@ -366,6 +375,7 @@ type Options struct {
 	Endpoint         string               `config:"endpoint"`
 	Enc              encoder.MultiEncoder `config:"encoding"`
 	EnvAuth          bool                 `config:"env_auth"`
+	DirectoryMarkers bool                 `config:"directory_markers"`
 }

 // Fs represents a remote storage server
@@ -461,7 +471,7 @@ func parsePath(path string) (root string) {
 // split returns bucket and bucketPath from the rootRelativePath
 // relative to f.root
 func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
-	bucketName, bucketPath = bucket.Split(path.Join(f.root, rootRelativePath))
+	bucketName, bucketPath = bucket.Split(bucket.Join(f.root, rootRelativePath))
 	return f.opt.Enc.FromStandardName(bucketName), f.opt.Enc.FromStandardPath(bucketPath)
 }
@@ -547,6 +557,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		BucketBased:       true,
 		BucketBasedRootOK: true,
 	}).Fill(ctx, f)
+	if opt.DirectoryMarkers {
+		f.features.CanHaveEmptyDirectories = true
+	}

 	// Create a new authorized Drive client.
 	f.client = oAuthClient
@@ -633,6 +646,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 	if !recurse {
 		list = list.Delimiter("/")
 	}
+	foundItems := 0
 	for {
 		var objects *storage.Objects
 		err = f.pacer.Call(func() (bool, error) {
@@ -648,6 +662,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 			return err
 		}
 		if !recurse {
+			foundItems += len(objects.Prefixes)
 			var object storage.Object
 			for _, remote := range objects.Prefixes {
 				if !strings.HasSuffix(remote, "/") {
@@ -668,22 +683,29 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
 				}
 			}
 		}
+		foundItems += len(objects.Items)
 		for _, object := range objects.Items {
 			remote := f.opt.Enc.ToStandardPath(object.Name)
 			if !strings.HasPrefix(remote, prefix) {
 				fs.Logf(f, "Odd name received %q", object.Name)
 				continue
 			}
-			remote = remote[len(prefix):]
 			isDirectory := remote == "" || strings.HasSuffix(remote, "/")
+			// is this a directory marker?
+			if isDirectory {
+				// Don't insert the root directory
+				if remote == directory {
+					continue
+				}
+				// process directory markers as directories
+				remote = strings.TrimRight(remote, "/")
+			}
+			remote = remote[len(prefix):]
 			if addBucket {
 				remote = path.Join(bucket, remote)
 			}
-			// is this a directory marker?
-			if isDirectory {
-				continue // skip directory marker
-			}
-			err = fn(remote, object, false)
+			err = fn(remote, object, isDirectory)
 			if err != nil {
 				return err
 			}
@@ -693,6 +715,17 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
|
||||
}
|
||||
list.PageToken(objects.NextPageToken)
|
||||
}
|
||||
if f.opt.DirectoryMarkers && foundItems == 0 && directory != "" {
|
||||
// Determine whether the directory exists or not by whether it has a marker
|
||||
_, err := f.readObjectInfo(ctx, bucket, directory)
|
||||
if err != nil {
|
||||
if err == fs.ErrorObjectNotFound {
|
||||
return fs.ErrorDirNotFound
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -856,10 +889,69 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
|
||||
return f.Put(ctx, in, src, options...)
|
||||
}
|
||||
|
||||
// Create directory marker file and parents
|
||||
func (f *Fs) createDirectoryMarker(ctx context.Context, bucket, dir string) error {
|
||||
if !f.opt.DirectoryMarkers || bucket == "" {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Object to be uploaded
|
||||
o := &Object{
|
||||
fs: f,
|
||||
modTime: time.Now(),
|
||||
}
|
||||
|
||||
for {
|
||||
_, bucketPath := f.split(dir)
|
||||
// Don't create the directory marker if it is the bucket or at the very root
|
||||
if bucketPath == "" {
|
||||
break
|
||||
}
|
||||
o.remote = dir + "/"
|
||||
|
||||
// Check to see if object already exists
|
||||
_, err := o.readObjectInfo(ctx)
|
||||
if err == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Upload it if not
|
||||
fs.Debugf(o, "Creating directory marker")
|
||||
content := io.Reader(strings.NewReader(""))
|
||||
err = o.Update(ctx, content, o)
|
||||
if err != nil {
|
||||
return fmt.Errorf("creating directory marker failed: %w", err)
|
||||
}
|
||||
|
||||
// Now check parent directory exists
|
||||
dir = path.Dir(dir)
|
||||
if dir == "/" || dir == "." {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Mkdir creates the bucket if it doesn't exist
|
||||
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
|
||||
bucket, _ := f.split(dir)
|
||||
return f.makeBucket(ctx, bucket)
|
||||
e := f.checkBucket(ctx, bucket)
|
||||
if e != nil {
|
||||
return e
|
||||
}
|
||||
return f.createDirectoryMarker(ctx, bucket, dir)
|
||||
|
||||
}
|
||||
|
||||
// mkdirParent creates the parent bucket/directory if it doesn't exist
|
||||
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
|
||||
remote = strings.TrimRight(remote, "/")
|
||||
dir := path.Dir(remote)
|
||||
if dir == "/" || dir == "." {
|
||||
dir = ""
|
||||
}
|
||||
return f.Mkdir(ctx, dir)
|
||||
}
|
||||
|
||||
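To make the parent-walking loop above concrete, here is a sketch of what Mkdir produces with directory_markers enabled (the path is a made-up example):

// Mkdir("a/b/c") uploads one zero-byte marker per level, walking up the
// path until it reaches the bucket root:
//
//   a/b/c/   <- created first
//   a/b/     <- then its parent
//   a/       <- then the grandparent; the loop stops at the bucket root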
// makeBucket creates the bucket if it doesn't exist
@@ -931,6 +1023,18 @@ func (f *Fs) checkBucket(ctx context.Context, bucket string) error {
// to delete was not empty.
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
bucket, directory := f.split(dir)
// Remove directory marker file
if f.opt.DirectoryMarkers && bucket != "" && dir != "" {
o := &Object{
fs: f,
remote: dir + "/",
}
fs.Debugf(o, "Removing directory marker")
err := o.Remove(ctx)
if err != nil {
return fmt.Errorf("removing directory marker failed: %w", err)
}
}
if bucket == "" || directory != "" {
return nil
}
@@ -962,7 +1066,7 @@ func (f *Fs) Precision() time.Duration {
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote)
err := f.checkBucket(ctx, dstBucket)
err := f.mkdirParent(ctx, remote)
if err != nil {
return nil, err
}
@@ -1100,10 +1204,15 @@ func (o *Object) setMetaData(info *storage.Object) {
// readObjectInfo reads the definition for an object
func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, err error) {
bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) {
get := o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx)
if o.fs.opt.UserProject != "" {
get = get.UserProject(o.fs.opt.UserProject)
return o.fs.readObjectInfo(ctx, bucket, bucketPath)
}

// readObjectInfo reads the definition for an object
func (f *Fs) readObjectInfo(ctx context.Context, bucket, bucketPath string) (object *storage.Object, err error) {
err = f.pacer.Call(func() (bool, error) {
get := f.svc.Objects.Get(bucket, bucketPath).Context(ctx)
if f.opt.UserProject != "" {
get = get.UserProject(f.opt.UserProject)
}
object, err = get.Do()
return shouldRetry(ctx, err)
@@ -1244,11 +1353,14 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
bucket, bucketPath := o.split()
err := o.fs.checkBucket(ctx, bucket)
if err != nil {
return err
// Create parent dir/bucket if not saving directory marker
if !strings.HasSuffix(o.remote, "/") {
err = o.fs.mkdirParent(ctx, o.remote)
if err != nil {
return err
}
}
modTime := src.ModTime(ctx)

@@ -6,6 +6,7 @@ import (
"testing"

"github.com/rclone/rclone/backend/googlecloudstorage"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)

@@ -16,3 +17,17 @@ func TestIntegration(t *testing.T) {
NilObject: (*googlecloudstorage.Object)(nil),
})
}

func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
name := "TestGoogleCloudStorage"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*googlecloudstorage.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
},
})
}

@@ -166,6 +166,7 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)

@@ -495,7 +495,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
add(file)
case fs.ErrorNotAFile:
// ...found a directory not a file
add(fs.NewDir(remote, timeUnset))
add(fs.NewDir(remote, time.Time{}))
default:
fs.Debugf(remote, "skipping because of error: %v", err)
}
@@ -507,7 +507,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
name = strings.TrimRight(name, "/")
remote := path.Join(dir, name)
if isDir {
add(fs.NewDir(remote, timeUnset))
add(fs.NewDir(remote, time.Time{}))
} else {
in <- remote
}

@@ -376,7 +376,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
for i, file := range files {
remote := path.Join(dir, f.opt.Enc.ToStandardName(file.Name))
if file.Type == "dir" {
entries[i] = fs.NewDir(remote, time.Unix(0, 0))
entries[i] = fs.NewDir(remote, time.Time{})
} else {
entries[i] = &Object{
fs: f,

@@ -303,6 +303,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
WriteMetadata: true,
UserMetadata: xattrSupported, // can only R/W general purpose metadata if xattrs are supported
FilterAware: true,
PartialUploads: true,
}).Fill(ctx, f)
if opt.FollowSymlinks {
f.lstat = os.Stat

@@ -5,6 +5,7 @@ package local

import (
"fmt"
"runtime"
"sync"
"time"

@@ -23,7 +24,7 @@ func (o *Object) readMetadataFromFile(m *fs.Metadata) (err error) {
// Check statx() is available as it was only introduced in kernel 4.11
// If not, fall back to fstatat() which was introduced in 2.6.16 which is guaranteed for all Go versions
var stat unix.Statx_t
if unix.Statx(unix.AT_FDCWD, ".", 0, unix.STATX_ALL, &stat) != unix.ENOSYS {
if runtime.GOOS != "android" && unix.Statx(unix.AT_FDCWD, ".", 0, unix.STATX_ALL, &stat) != unix.ENOSYS {
readMetadataFromFileFn = readMetadataFromFileStatx
} else {
readMetadataFromFileFn = readMetadataFromFileFstatat

@@ -196,7 +196,9 @@ listing, set this option.`,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different onedrive configs.
Help: `Deprecated: use --server-side-across-configs instead.

Allow server-side operations (e.g. copy) to work across different onedrive configs.

This will only work if you are copying between two OneDrive *Personal* drives AND
the files to copy are already shared between them. In other cases, rclone will
@@ -301,6 +303,24 @@ rclone.
Help: "None - don't use any hashes",
}},
Advanced: true,
}, {
Name: "av_override",
Default: false,
Help: `Allows download of files the server thinks have a virus.

The onedrive/sharepoint server may check files uploaded with an Anti
Virus checker. If it detects any potential viruses or malware it will
block download of the file.

In this case you will see a message like this:

server reports this file is infected with a virus - use --onedrive-av-override to download anyway: Infected (name of virus): 403 Forbidden:

If you are 100% sure you want to download this file anyway then use
the --onedrive-av-override flag, or av_override = true in the config
file.
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -640,6 +660,7 @@ type Options struct {
LinkType string `config:"link_type"`
LinkPassword string `config:"link_password"`
HashType string `config:"hash_type"`
AVOverride bool `config:"av_override"`
Enc encoder.MultiEncoder `config:"encoding"`
}

@@ -1966,12 +1987,20 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var resp *http.Response
opts := o.fs.newOptsCall(o.id, "GET", "/content")
opts.Options = options
if o.fs.opt.AVOverride {
opts.Parameters = url.Values{"AVOverride": {"1"}}
}

err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if resp != nil {
if virus := resp.Header.Get("X-Virus-Infected"); virus != "" {
err = fmt.Errorf("server reports this file is infected with a virus - use --onedrive-av-override to download anyway: %s: %w", virus, err)
}
}
return nil, err
}

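For reference, a minimal sketch of the request shape the AVOverride option produces: the normal content download URL with AVOverride=1 appended as a query parameter. The Graph item URL here is a made-up example and authentication is omitted.

package sketch

import (
	"context"
	"net/http"
	"net/url"
)

func avOverrideSketch(ctx context.Context, client *http.Client) (*http.Response, error) {
	u, err := url.Parse("https://graph.microsoft.com/v1.0/me/drive/items/ITEM-ID/content")
	if err != nil {
		return nil, err
	}
	q := u.Query()
	q.Set("AVOverride", "1") // ask the server to serve the file despite the AV verdict
	u.RawQuery = q.Encode()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
	if err != nil {
		return nil, err
	}
	return client.Do(req) // auth headers omitted in this sketch
}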
@@ -227,9 +227,10 @@ type Media struct {
Duration int64 `json:"duration,omitempty"`
BitRate int `json:"bit_rate,omitempty"`
FrameRate int `json:"frame_rate,omitempty"`
VideoCodec string `json:"video_codec,omitempty"`
AudioCodec string `json:"audio_codec,omitempty"`
VideoType string `json:"video_type,omitempty"`
VideoCodec string `json:"video_codec,omitempty"` // "h264", "hevc"
AudioCodec string `json:"audio_codec,omitempty"` // "pcm_bluray", "aac"
VideoType string `json:"video_type,omitempty"` // "mpegts"
HdrType string `json:"hdr_type,omitempty"`
} `json:"video,omitempty"`
Link *Link `json:"link,omitempty"`
NeedMoreQuota bool `json:"need_more_quota,omitempty"`

@@ -189,11 +189,6 @@ Fill in for rclone to use a non root folder as its starting point.
Help: "Files bigger than this will be cached on disk to calculate hash if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true,
}, {
Name: "multi_thread_streams",
Help: "Max number of streams to use for multi-thread downloads.\n\nThis will override global flag `--multi-thread-streams` and defaults to 1 to avoid rate limiting.",
Default: 1,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -224,7 +219,6 @@ type Options struct {
UseTrash bool `config:"use_trash"`
TrashedOnly bool `config:"trashed_only"`
HashMemoryThreshold fs.SizeSuffix `config:"hash_memory_limit"`
MultiThreadStreams int `config:"multi_thread_streams"`
Enc encoder.MultiEncoder `config:"encoding"`
}

@@ -437,10 +431,6 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err

root := parsePath(path)

// overrides global `--multi-thread-streams` by local one
ci := fs.GetConfig(ctx)
ci.MultiThreadStreams = opt.MultiThreadStreams

f := &Fs{
name: name,
root: root,
@@ -451,6 +441,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
f.features = (&fs.Features{
ReadMimeType: true, // can read the mime type of objects
CanHaveEmptyDirectories: true, // can have empty directories
NoMultiThreading: true, // can't have multiple threads downloading
}).Fill(ctx, f)

if err := f.newClientWithPacer(ctx); err != nil {
@@ -1420,6 +1411,16 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str

// ------------------------------------------------------------

// parseFileID gets fid parameter from url query
func parseFileID(s string) string {
if u, err := url.Parse(s); err == nil {
if q, err := url.ParseQuery(u.RawQuery); err == nil {
return q.Get("fid")
}
}
return ""
}

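A quick illustration of what parseFileID extracts (the URLs are made up):

// parseFileID("https://example.com/download?fid=abc123&expire=3600") == "abc123"
// parseFileID("https://example.com/download")                        == ""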
// setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.File) (err error) {
if info.Kind == api.KindOfFolder {
@@ -1441,10 +1442,18 @@ func (o *Object) setMetaData(info *api.File) (err error) {
o.md5sum = info.Md5Checksum
if info.Links.ApplicationOctetStream != nil {
o.link = info.Links.ApplicationOctetStream
}
if len(info.Medias) > 0 && info.Medias[0].Link != nil {
fs.Debugf(o, "Using a media link")
o.link = info.Medias[0].Link
if fid := parseFileID(o.link.URL); fid != "" {
for mid, media := range info.Medias {
if media.Link == nil {
continue
}
if mfid := parseFileID(media.Link.URL); fid == mfid {
fs.Debugf(o, "Using a media link from Medias[%d]", mid)
o.link = media.Link
break
}
}
}
}
return nil
}

@@ -23,6 +23,7 @@ import (
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/readers"
)

@@ -252,9 +253,12 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (o fs.Object, err error) {
return f.putUnchecked(ctx, in, src, src.Remote(), options...)
}

func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options ...fs.OpenOption) (o fs.Object, err error) {
// defer log.Trace(f, "src=%+v", src)("o=%+v, err=%v", &o, &err)
size := src.Size()
remote := src.Remote()
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
@@ -540,24 +544,59 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
modTime := src.ModTime(ctx)
var resp struct {
File putio.File `json:"file"`
}
// For some unknown reason the API sometimes returns a "Name already
// exist" error unless we copy to a temporary name and then rename,
// e.g.:
//
// {"error_id":null,"error_message":"Name already exist","error_type":"NAME_ALREADY_EXIST","error_uri":"http://api.put.io/v2/docs","extra":{},"status":"ERROR","status_code":400}
suffix := "." + random.String(8)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
params.Set("parent_id", directoryID)
params.Set("name", f.opt.Enc.FromStandardName(leaf))
params.Set("name", f.opt.Enc.FromStandardName(leaf+suffix))

req, err := f.client.NewRequest(ctx, "POST", "/v2/files/copy", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
// fs.Debugf(f, "copying file (%d) to parent_id: %s", srcObj.file.ID, directoryID)
_, err = f.client.Do(req, nil)
_, err = f.client.Do(req, &resp)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(resp.File.ID, 10))
params.Set("name", f.opt.Enc.FromStandardName(leaf))

req, err := f.client.NewRequest(ctx, "POST", "/v2/files/rename", strings.NewReader(params.Encode()))
if err != nil {
return false, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
_, err = f.client.Do(req, &resp)
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
o, err = f.newObjectWithInfo(ctx, remote, resp.File)
if err != nil {
return nil, err
}
err = o.SetModTime(ctx, modTime)
if err != nil {
return nil, err
}
return o, nil
}

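The workaround pattern used above, shown in isolation against a hypothetical client (the interface, method names and signatures are made up for illustration): copy to a unique temporary name first, then rename to the real target.

package sketch

import (
	"context"
	"fmt"
	"math/rand"
)

// fileClient is a stand-in for the remote's API client.
type fileClient interface {
	Copy(ctx context.Context, srcID int64, parentID, name string) (newID int64, err error)
	Rename(ctx context.Context, id int64, name string) error
}

// copyViaTempName sidesteps spurious "name already exists" errors by copying
// to a throwaway name and swapping it for the real one afterwards.
func copyViaTempName(ctx context.Context, c fileClient, srcID int64, parentID, name string) error {
	tmp := fmt.Sprintf("%s.%08x", name, rand.Uint32()) // e.g. "report.pdf.9f3a2c01"
	id, err := c.Copy(ctx, srcID, parentID, tmp)
	if err != nil {
		return err
	}
	return c.Rename(ctx, id, name)
}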
// Move src to this remote using server-side move operations.
@@ -579,6 +618,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
modTime := src.ModTime(ctx)
err = f.pacer.Call(func() (bool, error) {
params := url.Values{}
params.Set("file_id", strconv.FormatInt(srcObj.file.ID, 10))
@@ -596,7 +636,15 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (o fs.Objec
if err != nil {
return nil, err
}
return f.NewObject(ctx, remote)
o, err = f.NewObject(ctx, remote)
if err != nil {
return nil, err
}
err = o.SetModTime(ctx, modTime)
if err != nil {
return nil, err
}
return o, nil
}

// DirMove moves src, srcRemote to this remote at dstRemote

@@ -275,7 +275,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return err
}
newObj, err := o.fs.PutUnchecked(ctx, in, src, options...)
newObj, err := o.fs.putUnchecked(ctx, in, src, o.remote, options...)
if err != nil {
return err
}

backend/s3/s3.go (263 lines changed)
@@ -66,7 +66,7 @@ import (
func init() {
fs.Register(&fs.RegInfo{
Name: "s3",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
Description: "Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Tencent COS, Qiniu and Wasabi",
NewFs: NewFs,
CommandHelp: commandHelp,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
@@ -91,6 +91,9 @@ func init() {
}, {
Value: "Alibaba",
Help: "Alibaba Cloud Object Storage System (OSS) formerly Aliyun",
}, {
Value: "ArvanCloud",
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "Ceph",
Help: "Ceph Object Storage",
@@ -100,9 +103,6 @@ func init() {
}, {
Value: "Cloudflare",
Help: "Cloudflare R2 Storage",
}, {
Value: "ArvanCloud",
Help: "Arvan Cloud Object Storage (AOS)",
}, {
Value: "DigitalOcean",
Help: "DigitalOcean Spaces",
@@ -136,6 +136,9 @@ func init() {
}, {
Value: "Netease",
Help: "Netease Object Storage (NOS)",
}, {
Value: "Petabox",
Help: "Petabox Object Storage",
}, {
Value: "RackCorp",
Help: "RackCorp Object Storage",
@@ -440,10 +443,30 @@ func init() {
Value: "eu-south-2",
Help: "Logrono, Spain",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "Petabox",
Examples: []fs.OptionExample{{
Value: "us-east-1",
Help: "US East (N. Virginia)",
}, {
Value: "eu-central-1",
Help: "Europe (Frankfurt)",
}, {
Value: "ap-southeast-1",
Help: "Asia Pacific (Singapore)",
}, {
Value: "me-south-1",
Help: "Middle East (Bahrain)",
}, {
Value: "sa-east-1",
Help: "South America (São Paulo)",
}},
}, {
Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ChinaMobile,Cloudflare,IONOS,ArvanCloud,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,TencentCOS,HuaweiOBS,IDrive",
Examples: []fs.OptionExample{{
Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -552,15 +575,15 @@ func init() {
Help: "Anhui China (Huainan)",
}},
}, {
// ArvanCloud endpoints: https://www.arvancloud.com/en/products/cloud-storage
// ArvanCloud endpoints: https://www.arvancloud.ir/en/products/cloud-storage
Name: "endpoint",
Help: "Endpoint for Arvan Cloud Object Storage (AOS) API.",
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "The default endpoint - a good choice if you are unsure.\nTehran Iran (Asiatech)",
Value: "s3.ir-thr-at1.arvanstorage.ir",
Help: "The default endpoint - a good choice if you are unsure.\nTehran Iran (Simin)",
}, {
Value: "s3.ir-tbz-sh1.arvanstorage.com",
Value: "s3.ir-tbz-sh1.arvanstorage.ir",
Help: "Tabriz Iran (Shahriar)",
}},
}, {
@@ -768,6 +791,30 @@ func init() {
Value: "s3-eu-south-2.ionoscloud.com",
Help: "Logrono, Spain",
}},
}, {
Name: "endpoint",
Help: "Endpoint for Petabox S3 Object Storage.\n\nSpecify the endpoint from the same region.",
Provider: "Petabox",
Required: true,
Examples: []fs.OptionExample{{
Value: "s3.petabox.io",
Help: "US East (N. Virginia)",
}, {
Value: "s3.us-east-1.petabox.io",
Help: "US East (N. Virginia)",
}, {
Value: "s3.eu-central-1.petabox.io",
Help: "Europe (Frankfurt)",
}, {
Value: "s3.ap-southeast-1.petabox.io",
Help: "Asia Pacific (Singapore)",
}, {
Value: "s3.me-south-1.petabox.io",
Help: "Middle East (Bahrain)",
}, {
Value: "s3.sa-east-1.petabox.io",
Help: "South America (São Paulo)",
}},
}, {
// Liara endpoints: https://liara.ir/landing/object-storage
Name: "endpoint",
@@ -1109,7 +1156,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
Provider: "!AWS,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,ArvanCloud,Scaleway,StackPath,Storj,RackCorp,Qiniu",
Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,RackCorp,Qiniu,Petabox",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -1211,8 +1258,12 @@ func init() {
Help: "Liara Iran endpoint",
Provider: "Liara",
}, {
Value: "s3.ir-thr-at1.arvanstorage.com",
Help: "ArvanCloud Tehran Iran (Asiatech) endpoint",
Value: "s3.ir-thr-at1.arvanstorage.ir",
Help: "ArvanCloud Tehran Iran (Simin) endpoint",
Provider: "ArvanCloud",
}, {
Value: "s3.ir-tbz-sh1.arvanstorage.ir",
Help: "ArvanCloud Tabriz Iran (Shahriar) endpoint",
Provider: "ArvanCloud",
}},
}, {
@@ -1396,7 +1447,7 @@ func init() {
Provider: "ArvanCloud",
Examples: []fs.OptionExample{{
Value: "ir-thr-at1",
Help: "Tehran Iran (Asiatech)",
Help: "Tehran Iran (Simin)",
}, {
Value: "ir-tbz-sh1",
Help: "Tabriz Iran (Shahriar)",
@@ -1593,7 +1644,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
Provider: "!AWS,Alibaba,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,ArvanCloud,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS",
Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -1836,7 +1887,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Standard storage class",
}},
}, {
// Mapping from here: https://www.arvancloud.com/en/products/cloud-storage
// Mapping from here: https://www.arvancloud.ir/en/products/cloud-storage
Name: "storage_class",
Help: "The storage class to use when storing new objects in ArvanCloud.",
Provider: "ArvanCloud",
@@ -1863,7 +1914,7 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Infrequent access storage mode",
}},
}, {
// Mapping from here: https://www.scaleway.com/en/docs/object-storage-glacier/#-Scaleway-Storage-Classes
// Mapping from here: https://www.scaleway.com/en/docs/storage/object/quickstart/
Name: "storage_class",
Help: "The storage class to use when storing new objects in S3.",
Provider: "Scaleway",
@@ -1872,10 +1923,13 @@ If you leave it blank, this is calculated automatically from the sse_customer_ke
Help: "Default.",
}, {
Value: "STANDARD",
Help: "The Standard class for any upload.\nSuitable for on-demand content like streaming or CDN.",
Help: "The Standard class for any upload.\nSuitable for on-demand content like streaming or CDN.\nAvailable in all regions.",
}, {
Value: "GLACIER",
Help: "Archived storage.\nPrices are lower, but it needs to be restored first to be accessed.",
Help: "Archived storage.\nPrices are lower, but it needs to be restored first to be accessed.\nAvailable in FR-PAR and NL-AMS regions.",
}, {
Value: "ONEZONE_IA",
Help: "One Zone - Infrequent Access.\nA good choice for storing secondary backup copies or easily re-creatable data.\nAvailable in the FR-PAR region only.",
}},
}, {
// Mapping from here: https://developer.qiniu.com/kodo/5906/storage-type
@@ -2193,6 +2247,15 @@ See: https://github.com/rclone/rclone/issues/4673, https://github.com/rclone/rcl
This is usually set to a CloudFront CDN URL as AWS S3 offers
cheaper egress for data downloaded through the CloudFront network.`,
Advanced: true,
}, {
Name: "directory_markers",
Default: false,
Advanced: true,
Help: `Upload an empty object with a trailing slash when a new directory is created

Empty folders are unsupported for bucket-based remotes; this option creates an empty
object ending with "/" to persist the folder.
`,
}, {
Name: "use_multipart_etag",
Help: `Whether to use ETag in multipart uploads for verification
@@ -2422,6 +2485,7 @@ type Options struct {
MemoryPoolUseMmap bool `config:"memory_pool_use_mmap"`
DisableHTTP2 bool `config:"disable_http2"`
DownloadURL string `config:"download_url"`
DirectoryMarkers bool `config:"directory_markers"`
UseMultipartEtag fs.Tristate `config:"use_multipart_etag"`
UsePresignedRequest bool `config:"use_presigned_request"`
Versions bool `config:"versions"`
@@ -2868,6 +2932,8 @@ func setQuirks(opt *Options) {
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false
urlEncodeListings = false
case "Petabox":
// No quirks
case "Liara":
virtualHostStyle = false
urlEncodeListings = false
@@ -2911,6 +2977,7 @@ func setQuirks(opt *Options) {
case "Qiniu":
useMultipartEtag = false
urlEncodeListings = false
virtualHostStyle = false
case "GCS":
// Google breaks the request Signature by mutating the accept-encoding HTTP header
// https://github.com/rclone/rclone/issues/6670
@@ -2989,6 +3056,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, err
}
fs.Debugf(nil, "name = %q, root = %q, opt = %#v", name, root, opt)
err = checkUploadChunkSize(opt.ChunkSize)
if err != nil {
return nil, fmt.Errorf("s3: chunk size: %w", err)
@@ -3079,6 +3147,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.Provider == "IDrive" {
f.features.SetTier = false
}
if opt.DirectoryMarkers {
f.features.CanHaveEmptyDirectories = true
}
// f.listMultipartUploads()

if f.rootBucket != "" && f.rootDirectory != "" && !opt.NoHeadObject && !strings.HasSuffix(root, "/") {
@@ -3571,6 +3642,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
default:
listBucket = f.newV2List(&req)
}
foundItems := 0
for {
var resp *s3.ListObjectsV2Output
var err error
@@ -3612,6 +3684,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
return err
}
if !opt.recurse {
foundItems += len(resp.CommonPrefixes)
for _, commonPrefix := range resp.CommonPrefixes {
if commonPrefix.Prefix == nil {
fs.Logf(f, "Nil common prefix received")
@@ -3644,6 +3717,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
}
}
}
foundItems += len(resp.Contents)
for i, object := range resp.Contents {
remote := aws.StringValue(object.Key)
if urlEncodeListings {
@@ -3658,19 +3732,29 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
fs.Logf(f, "Odd name received %q", remote)
continue
}
isDirectory := (remote == "" || strings.HasSuffix(remote, "/")) && object.Size != nil && *object.Size == 0
// is this a directory marker?
if isDirectory {
if opt.noSkipMarkers {
// process directory markers as files
isDirectory = false
} else {
// Don't insert the root directory
if remote == opt.directory {
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
}
remote = remote[len(opt.prefix):]
isDirectory := remote == "" || strings.HasSuffix(remote, "/")
if opt.addBucket {
remote = bucket.Join(opt.bucket, remote)
}
// is this a directory marker?
if isDirectory && object.Size != nil && *object.Size == 0 && !opt.noSkipMarkers {
continue // skip directory marker
}
if versionIDs != nil {
err = fn(remote, object, versionIDs[i], false)
err = fn(remote, object, versionIDs[i], isDirectory)
} else {
err = fn(remote, object, nil, false)
err = fn(remote, object, nil, isDirectory)
}
if err != nil {
if err == errEndList {
@@ -3683,6 +3767,20 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
break
}
}
if f.opt.DirectoryMarkers && foundItems == 0 && opt.directory != "" {
// Determine whether the directory exists or not by whether it has a marker
req := s3.HeadObjectInput{
Bucket: &opt.bucket,
Key: &opt.directory,
}
_, err := f.headObject(ctx, &req)
if err != nil {
if err == fs.ErrorObjectNotFound {
return fs.ErrorDirNotFound
}
return err
}
}
return nil
}

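The existence probe above, shown in isolation as a minimal sketch (bucket and key names are made up; real credentials and region configuration are assumed): an empty listing plus a successful HEAD on "<dir>/" means the directory exists but is empty, while a not-found error means it does not exist.

package sketch

import (
	"context"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func probeDirMarker(ctx context.Context) error {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)
	_, err := svc.HeadObjectWithContext(ctx, &s3.HeadObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("path/to/empty-dir/"), // the directory marker object
	})
	return err // nil: marker exists; NotFound: no such directory
}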
@@ -3873,10 +3971,70 @@ func (f *Fs) bucketExists(ctx context.Context, bucket string) (bool, error) {
return false, err
}

// Create directory marker file and parents
func (f *Fs) createDirectoryMarker(ctx context.Context, bucket, dir string) error {
if !f.opt.DirectoryMarkers || bucket == "" {
return nil
}

// Object to be uploaded
o := &Object{
fs: f,
meta: map[string]string{
metaMtime: swift.TimeToFloatString(time.Now()),
},
}

for {
_, bucketPath := f.split(dir)
// Don't create the directory marker if it is the bucket or at the very root
if bucketPath == "" {
break
}
o.remote = dir + "/"

// Check to see if object already exists
_, err := o.headObject(ctx)
if err == nil {
return nil
}

// Upload it if not
fs.Debugf(o, "Creating directory marker")
content := io.Reader(strings.NewReader(""))
err = o.Update(ctx, content, o)
if err != nil {
return fmt.Errorf("creating directory marker failed: %w", err)
}

// Now check parent directory exists
dir = path.Dir(dir)
if dir == "/" || dir == "." {
break
}
}

return nil
}

// Mkdir creates the bucket if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
bucket, _ := f.split(dir)
return f.makeBucket(ctx, bucket)
e := f.makeBucket(ctx, bucket)
if e != nil {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}

// mkdirParent creates the parent bucket/directory if it doesn't exist
func (f *Fs) mkdirParent(ctx context.Context, remote string) error {
remote = strings.TrimRight(remote, "/")
dir := path.Dir(remote)
if dir == "/" || dir == "." {
dir = ""
}
return f.Mkdir(ctx, dir)
}

// makeBucket creates the bucket if it doesn't exist
@@ -3917,6 +4075,18 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
bucket, directory := f.split(dir)
// Remove directory marker file
if f.opt.DirectoryMarkers && bucket != "" && dir != "" {
o := &Object{
fs: f,
remote: dir + "/",
}
fs.Debugf(o, "Removing directory marker")
err := o.Remove(ctx)
if err != nil {
return fmt.Errorf("removing directory marker failed: %w", err)
}
}
if bucket == "" || directory != "" {
return nil
}
@@ -4115,7 +4285,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, errNotWithVersionAt
}
dstBucket, dstPath := f.split(remote)
err := f.makeBucket(ctx, dstBucket)
err := f.mkdirParent(ctx, remote)
if err != nil {
return nil, err
}
@@ -4742,22 +4912,26 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
Key: &bucketPath,
VersionId: o.versionID,
}
if o.fs.opt.RequesterPays {
return o.fs.headObject(ctx, &req)
}

func (f *Fs) headObject(ctx context.Context, req *s3.HeadObjectInput) (resp *s3.HeadObjectOutput, err error) {
if f.opt.RequesterPays {
req.RequestPayer = aws.String(s3.RequestPayerRequester)
}
if o.fs.opt.SSECustomerAlgorithm != "" {
req.SSECustomerAlgorithm = &o.fs.opt.SSECustomerAlgorithm
if f.opt.SSECustomerAlgorithm != "" {
req.SSECustomerAlgorithm = &f.opt.SSECustomerAlgorithm
}
if o.fs.opt.SSECustomerKey != "" {
req.SSECustomerKey = &o.fs.opt.SSECustomerKey
if f.opt.SSECustomerKey != "" {
req.SSECustomerKey = &f.opt.SSECustomerKey
}
if o.fs.opt.SSECustomerKeyMD5 != "" {
req.SSECustomerKeyMD5 = &o.fs.opt.SSECustomerKeyMD5
if f.opt.SSECustomerKeyMD5 != "" {
req.SSECustomerKeyMD5 = &f.opt.SSECustomerKeyMD5
}
err = o.fs.pacer.Call(func() (bool, error) {
err = f.pacer.Call(func() (bool, error) {
var err error
resp, err = o.fs.c.HeadObjectWithContext(ctx, &req)
return o.fs.shouldRetry(ctx, err)
resp, err = f.c.HeadObjectWithContext(ctx, req)
return f.shouldRetry(ctx, err)
})
if err != nil {
if awsErr, ok := err.(awserr.RequestFailure); ok {
@@ -4767,7 +4941,9 @@ func (o *Object) headObject(ctx context.Context) (resp *s3.HeadObjectOutput, err
}
return nil, err
}
o.fs.cache.MarkOK(bucket)
if req.Bucket != nil {
f.cache.MarkOK(*req.Bucket)
}
return resp, nil
}

@@ -5416,9 +5592,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errNotWithVersionAt
}
bucket, bucketPath := o.split()
err := o.fs.makeBucket(ctx, bucket)
if err != nil {
return err
// Create parent dir/bucket if not saving directory marker
if !strings.HasSuffix(o.remote, "/") {
err := o.fs.mkdirParent(ctx, o.remote)
if err != nil {
return err
}
}
modTime := src.ModTime(ctx)
size := src.Size()
@@ -5741,7 +5920,7 @@ func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error)
setMetadata("content-disposition", o.contentDisposition)
setMetadata("content-encoding", o.contentEncoding)
setMetadata("content-language", o.contentLanguage)
setMetadata("tier", o.storageClass)
metadata["tier"] = o.GetTier()

return metadata, nil
}

@@ -18,6 +18,7 @@ import (
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
@@ -317,14 +318,19 @@ func (f *Fs) InternalTestVersions(t *testing.T) {

// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := path.Join(fs.ConfigString(f), fileNameVersion)
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --s3-versions is set in the config of the new remote
confPath := strings.Replace(newPath, ":", ",versions:", 1)
fNew, err := cache.Get(ctx, confPath)
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
assert.Equal(t, fs.ErrorIsFile, err)
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory being the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigString(fNew)))
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
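A standalone illustration of the ",versions" insertion done in the test above (the remote string is a made-up example):

package main

import (
	"fmt"
	"strings"
)

func main() {
	newPath := "s3,env_auth:bucket/file.txt-v2023-01-02-030405-000"
	lastColon := strings.LastIndex(newPath, ":")
	// insert the ",versions" option just before the final colon of the remote
	withVersions := newPath[:lastColon] + ",versions" + newPath[lastColon:]
	fmt.Println(withVersions) // s3,env_auth,versions:bucket/file.txt-v2023-01-02-030405-000
}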
@@ -5,6 +5,7 @@ import (
"testing"

"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)

@@ -20,6 +21,24 @@ func TestIntegration(t *testing.T) {
})
}

func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
name := "TestS3"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "directory_markers", Value: "true"},
},
})
}

func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}

@@ -14,6 +14,7 @@ import (

// URL parameters that need to be added to the signature
var s3ParamsToSign = map[string]struct{}{
"delete": {},
"acl": {},
"location": {},
"logging": {},

@@ -994,6 +994,7 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
SlowHash: true,
PartialUploads: true,
}).Fill(ctx, f)
// Make a connection and pool it to return errors early
c, err := f.getSftpConnection(ctx)
@@ -1065,7 +1066,7 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
}
}
f.putSftpConnection(&c, err)
if root != "" {
if root != "" && !strings.HasSuffix(root, "/") {
// Check to see if the root is actually an existing file,
// and if so change the filesystem root to its parent directory.
oldAbsRoot := f.absRoot
@@ -1168,13 +1169,6 @@ func (f *Fs) dirExists(ctx context.Context, dir string) (bool, error) {
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
root := path.Join(f.absRoot, dir)
ok, err := f.dirExists(ctx, root)
if err != nil {
return nil, fmt.Errorf("List failed: %w", err)
}
if !ok {
return nil, fs.ErrorDirNotFound
}
sftpDir := root
if sftpDir == "" {
sftpDir = "."
@@ -1186,6 +1180,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
infos, err := c.sftpClient.ReadDir(sftpDir)
f.putSftpConnection(&c, err)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
return nil, fs.ErrorDirNotFound
}
return nil, fmt.Errorf("error listing %q: %w", dir, err)
}
for _, info := range infos {
@@ -1329,10 +1326,17 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil {
return nil, fmt.Errorf("Move: %w", err)
}
err = c.sftpClient.Rename(
srcObj.path(),
path.Join(f.absRoot, remote),
)
srcPath, dstPath := srcObj.path(), path.Join(f.absRoot, remote)
if _, ok := c.sftpClient.HasExtension("posix-rename@openssh.com"); ok {
err = c.sftpClient.PosixRename(srcPath, dstPath)
} else {
// If we haven't got PosixRename then remove any existing destination first before renaming
err = c.sftpClient.Remove(dstPath)
if err != nil && !errors.Is(err, iofs.ErrNotExist) {
fs.Errorf(f, "Move: Failed to remove existing file %q: %v", dstPath, err)
}
err = c.sftpClient.Rename(srcPath, dstPath)
}
f.putSftpConnection(&c, err)
if err != nil {
return nil, fmt.Errorf("Move Rename failed: %w", err)

@@ -775,8 +775,13 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}

// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// FIXMEPutStream uploads to the remote path with the modTime given of indeterminate size
//
// PutStream no longer appears to work - the streamed uploads need the
// size specified at the start otherwise we get this error:
//
// upload failed: file size does not match (-2)
func (f *Fs) FIXMEPutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}

@@ -1453,12 +1458,12 @@ func (o *Object) ID() string {

// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
// _ fs.PutStreamer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)

@@ -528,7 +528,11 @@ func (f *Fs) NewObject(ctx context.Context, relative string) (_ fs.Object, err e
// May create the object even if it returns an error - if so will return the
// object and the error, otherwise will return nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (_ fs.Object, err error) {
fs.Debugf(f, "cp input ./%s # %+v %d", src.Remote(), options, src.Size())
return f.put(ctx, in, src, src.Remote(), options...)
}

func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options ...fs.OpenOption) (_ fs.Object, err error) {
fs.Debugf(f, "cp input ./%s # %+v %d", remote, options, src.Size())

// Reject options we don't support.
for _, option := range options {
@@ -539,7 +543,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}
}

bucketName, bucketPath := f.absolute(src.Remote())
bucketName, bucketPath := f.absolute(remote)

upload, err := f.project.UploadObject(ctx, bucketName, bucketPath, nil)
if err != nil {
@@ -549,7 +553,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
if err != nil {
aerr := upload.Abort()
if aerr != nil && !errors.Is(aerr, uplink.ErrUploadDone) {
fs.Errorf(f, "cp input ./%s %+v: %+v", src.Remote(), options, aerr)
fs.Errorf(f, "cp input ./%s %+v: %+v", remote, options, aerr)
}
}
}()
@@ -574,7 +578,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
}

err = fserrors.RetryError(err)
fs.Errorf(f, "cp input ./%s %+v: %+v\n", src.Remote(), options, err)
fs.Errorf(f, "cp input ./%s %+v: %+v\n", remote, options, err)

return nil, err
}
@@ -589,11 +593,19 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
return nil, err
}
err = fserrors.RetryError(errors.New("bucket was not available, now created, the upload must be retried"))
} else if errors.Is(err, uplink.ErrTooManyRequests) {
// Storj has a rate limit of one upload per second to the same file.
// This produces ErrTooManyRequests here, so we wait 1 second and retry.
//
// See: https://github.com/storj/uplink/issues/149
fs.Debugf(f, "uploading too fast - sleeping for 1 second: %v", err)
time.Sleep(time.Second)
err = fserrors.RetryError(err)
}
return nil, err
}

return newObjectFromUplink(f, src.Remote(), upload.Info()), nil
return newObjectFromUplink(f, remote, upload.Info()), nil
}
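The sleep-then-retry pattern above, shown in isolation as a minimal sketch (isRateLimited is a made-up predicate standing in for the uplink error check):

package sketch

import (
	"time"

	"github.com/rclone/rclone/fs/fserrors"
)

// retryAfterRateLimit sleeps briefly on a rate-limit error and wraps it so
// rclone's retry machinery attempts the upload again.
func retryAfterRateLimit(err error, isRateLimited func(error) bool) error {
	if err == nil {
		return nil
	}
	if isRateLimited(err) {
		time.Sleep(time.Second)         // respect the one-upload-per-second limit
		return fserrors.RetryError(err) // marks the error as retriable
	}
	return err
}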
// PutStream uploads to the remote path with the modTime given of indeterminate

@@ -176,9 +176,9 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (_ io.ReadC
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
fs.Debugf(o, "cp input ./%s %+v", src.Remote(), options)
fs.Debugf(o, "cp input ./%s %+v", o.Remote(), options)

oNew, err := o.fs.Put(ctx, in, src, options...)
oNew, err := o.fs.put(ctx, in, src, o.Remote(), options...)

if err == nil {
*o = *(oNew.(*Object))

@@ -100,7 +100,7 @@ but other operations such as Remove and Copy will fail.
func init() {
fs.Register(&fs.RegInfo{
Name: "swift",
Description: "OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)",
Description: "OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)",
NewFs: NewFs,
Options: append([]fs.Option{{
Name: "env_auth",
@@ -142,6 +142,9 @@ func init() {
}, {
Value: "https://auth.cloud.ovh.net/v3",
Help: "OVH",
}, {
Value: "https://authenticate.ain.net",
Help: "Blomp Cloud Storage",
}},
}, {
Name: "user_id",
@@ -1558,6 +1561,10 @@ func (o *Object) Remove(ctx context.Context) (err error) {
// Remove file/manifest first
err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.c.ObjectDelete(ctx, container, containerPath)
if err == swift.ObjectNotFound {
fs.Errorf(o, "Dangling object - ignoring: %v", err)
err = nil
}
return shouldRetry(ctx, err)
})
if err != nil {

@@ -49,8 +49,7 @@ func (e Errors) Error() string {

if len(e) == 0 {
buf.WriteString("no error")
}
if len(e) == 1 {
} else if len(e) == 1 {
buf.WriteString("1 error: ")
} else {
fmt.Fprintf(&buf, "%d errors: ", len(e))
@@ -61,8 +60,17 @@ func (e Errors) Error() string {
buf.WriteString("; ")
}

buf.WriteString(err.Error())
if err != nil {
buf.WriteString(err.Error())
} else {
buf.WriteString("nil error")
}
}

return buf.String()
}

// Unwrap returns the wrapped errors
func (e Errors) Unwrap() []error {
return e
}

backend/union/errors_test.go (new file, 94 lines)
//go:build go1.20
// +build go1.20

package union

import (
"errors"
"testing"

"github.com/stretchr/testify/assert"
)

var (
err1 = errors.New("Error 1")
err2 = errors.New("Error 2")
err3 = errors.New("Error 3")
)

func TestErrorsMap(t *testing.T) {
es := Errors{
nil,
err1,
err2,
}
want := Errors{
err2,
}
got := es.Map(func(e error) error {
if e == err1 {
return nil
}
return e
})
assert.Equal(t, want, got)
}

func TestErrorsFilterNil(t *testing.T) {
es := Errors{
nil,
err1,
nil,
err2,
nil,
}
want := Errors{
err1,
err2,
}
got := es.FilterNil()
assert.Equal(t, want, got)
}

func TestErrorsErr(t *testing.T) {
// Check not all nil case
es := Errors{
nil,
err1,
nil,
err2,
nil,
}
want := Errors{
err1,
err2,
}
got := es.Err()
assert.Equal(t, want, got)

// Check all nil case
es = Errors{
nil,
nil,
nil,
}
assert.Nil(t, es.Err())
}

func TestErrorsError(t *testing.T) {
assert.Equal(t, "no error", Errors{}.Error())
assert.Equal(t, "1 error: Error 1", Errors{err1}.Error())
assert.Equal(t, "1 error: nil error", Errors{nil}.Error())
assert.Equal(t, "2 errors: Error 1; Error 2", Errors{err1, err2}.Error())
}

func TestErrorsUnwrap(t *testing.T) {
es := Errors{
err1,
err2,
}
assert.Equal(t, []error{err1, err2}, es.Unwrap())
assert.True(t, errors.Is(es, err1))
assert.True(t, errors.Is(es, err2))
assert.False(t, errors.Is(es, err3))
}
@@ -3,7 +3,6 @@ package policy
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"path"
|
||||
"strings"
|
||||
"time"
|
||||
@@ -109,9 +108,7 @@ func findEntry(ctx context.Context, f fs.Fs, remote string) fs.DirEntry {
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
// random modtime for root
|
||||
randomNow := time.Unix(time.Now().Unix()-rand.Int63n(10000), 0)
|
||||
return fs.NewDir("", randomNow)
|
||||
return fs.NewDir("", time.Time{})
|
||||
}
|
||||
found := false
|
||||
for _, e := range entries {
|
||||
|
||||
@@ -801,6 +801,24 @@ func (f *Fs) Shutdown(ctx context.Context) error {
 	return errs.Err()
 }
 
+// CleanUp the trash in the Fs
+//
+// Implement this if you have a way of emptying the trash or
+// otherwise cleaning up old versions of files.
+func (f *Fs) CleanUp(ctx context.Context) error {
+	errs := Errors(make([]error, len(f.upstreams)))
+	multithread(len(f.upstreams), func(i int) {
+		u := f.upstreams[i]
+		if do := u.Features().CleanUp; do != nil {
+			err := do(ctx)
+			if err != nil {
+				errs[i] = fmt.Errorf("%s: %w", u.Name(), err)
+			}
+		}
+	})
+	return errs.Err()
+}
+
 // NewFs constructs an Fs from the path.
 //
 // The returned Fs is the actual Fs, referenced by remote in the config
@@ -884,6 +902,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		ReadMetadata:    true,
 		WriteMetadata:   true,
 		UserMetadata:    true,
+		PartialUploads:  true,
 	}).Fill(ctx, f)
 	canMove, slowHash := true, false
 	for _, f := range upstreams {
@@ -914,6 +933,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 		}
 	}
 
+	// show that we wrap other backends
+	features.Overlay = true
+
 	f.features = features
 
 	// Get common intersection of hashes
@@ -960,4 +982,5 @@ var (
 	_ fs.Abouter    = (*Fs)(nil)
 	_ fs.ListRer    = (*Fs)(nil)
 	_ fs.Shutdowner = (*Fs)(nil)
+	_ fs.CleanUpper = (*Fs)(nil)
)
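CleanUp above fans one call out per upstream and collects per-upstream errors into the `Errors` slice, one slot per upstream so no locking is needed. The `multithread` helper itself is not shown in this diff, so the following is only a sketch of the fan-out pattern it is assumed to implement, using sync.WaitGroup:

    package main

    import (
    	"fmt"
    	"sync"
    )

    // multithread runs fn(i) for i in [0, n) concurrently and waits for all.
    func multithread(n int, fn func(i int)) {
    	var wg sync.WaitGroup
    	for i := 0; i < n; i++ {
    		wg.Add(1)
    		go func(i int) {
    			defer wg.Done()
    			fn(i)
    		}(i)
    	}
    	wg.Wait()
    }

    func main() {
    	// One error slot per worker avoids any need for a mutex.
    	errs := make([]error, 3)
    	multithread(len(errs), func(i int) {
    		if i == 1 {
    			errs[i] = fmt.Errorf("upstream %d: cleanup failed", i)
    		}
    	})
    	fmt.Println(errs) // [<nil> upstream 1: cleanup failed <nil>]
    }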
@@ -11,6 +11,11 @@ import (
 	"github.com/rclone/rclone/fstest/fstests"
 )
 
+var (
+	unimplementableFsMethods     = []string{"UnWrap", "WrapFs", "SetWrapper", "UserInfo", "Disconnect", "PublicLink", "PutUnchecked", "MergeDirs", "OpenWriterAt"}
+	unimplementableObjectMethods = []string{}
+)
+
 // TestIntegration runs integration tests against the remote
 func TestIntegration(t *testing.T) {
 	if *fstest.RemoteName == "" {
@@ -18,8 +23,8 @@ func TestIntegration(t *testing.T) {
 	}
 	fstests.Run(t, &fstests.Opt{
 		RemoteName:                   *fstest.RemoteName,
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 	})
 }
@@ -39,8 +44,8 @@ func TestStandard(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "epmfs"},
 			{Name: name, Key: "search_policy", Value: "ff"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -61,8 +66,8 @@ func TestRO(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "epmfs"},
 			{Name: name, Key: "search_policy", Value: "ff"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -83,8 +88,8 @@ func TestNC(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "epmfs"},
 			{Name: name, Key: "search_policy", Value: "ff"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -105,8 +110,8 @@ func TestPolicy1(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "lus"},
 			{Name: name, Key: "search_policy", Value: "all"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -127,8 +132,8 @@ func TestPolicy2(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "rand"},
 			{Name: name, Key: "search_policy", Value: "ff"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -149,8 +154,8 @@ func TestPolicy3(t *testing.T) {
 			{Name: name, Key: "create_policy", Value: "all"},
 			{Name: name, Key: "search_policy", Value: "all"},
 		},
-		UnimplementableFsMethods:     []string{"OpenWriterAt", "DuplicateFiles"},
-		UnimplementableObjectMethods: []string{"MimeType"},
+		UnimplementableFsMethods:     unimplementableFsMethods,
+		UnimplementableObjectMethods: unimplementableObjectMethods,
 		QuickTestOK:                  true,
 	})
 }
@@ -45,6 +45,11 @@ func init() {
 		Options: []fs.Option{{
 			Help: "Your access token.\n\nGet it from https://uptobox.com/my_account.",
 			Name: "access_token",
+		}, {
+			Help:     "Set to make uploaded files private",
+			Name:     "private",
+			Advanced: true,
+			Default:  false,
 		}, {
 			Name: config.ConfigEncoding,
 			Help: config.ConfigEncodingHelp,
@@ -63,6 +68,7 @@ func init() {
 // Options defines the configuration for this backend
 type Options struct {
 	AccessToken string               `config:"access_token"`
+	Private     bool                 `config:"private"`
 	Enc         encoder.MultiEncoder `config:"encoding"`
 }
@@ -75,6 +81,7 @@ type Fs struct {
 	srv      *rest.Client
 	pacer    *fs.Pacer
 	IDRegexp *regexp.Regexp
+	public   string // "0" to make objects private
 }
 
 // Object represents an Uptobox object
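The new `private` option surfaces on the command line as `--uptobox-private` (confirmed by the changelog at the end of this document). An illustrative invocation, with a hypothetical remote name:

    rclone copy --uptobox-private /path/to/files uptobox:backup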
@@ -211,6 +218,9 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
 		CanHaveEmptyDirectories: true,
 		ReadMimeType:            false,
 	}).Fill(ctx, f)
+	if f.opt.Private {
+		f.public = "0"
+	}
 
 	client := fshttp.NewClient(ctx)
 	f.srv = rest.NewClient(client).SetRoot(apiBaseURL)
@@ -472,11 +482,11 @@ func (f *Fs) updateFileInformation(ctx context.Context, update *api.UpdateFileIn
 	return err
 }
 
-func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
+func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) error {
 	if size > int64(200e9) { // max size 200GB
-		return nil, errors.New("file too big, can't upload")
+		return errors.New("file too big, can't upload")
 	} else if size == 0 {
-		return nil, fs.ErrorCantUploadEmptyFiles
+		return fs.ErrorCantUploadEmptyFiles
 	}
 	// yes it does take 4 requests if we're uploading to root and 6+ if we're uploading to any subdir :(
@@ -494,19 +504,19 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
 		return shouldRetry(ctx, resp, err)
 	})
 	if err != nil {
-		return nil, err
+		return err
 	}
 	if info.StatusCode != 0 {
-		return nil, fmt.Errorf("putUnchecked api error: %d - %s", info.StatusCode, info.Message)
+		return fmt.Errorf("putUnchecked api error: %d - %s", info.StatusCode, info.Message)
 	}
 	// we need to have a safe name for the upload to work
 	tmpName := "rcloneTemp" + random.String(8)
 	upload, err := f.uploadFile(ctx, in, size, tmpName, info.Data.UploadLink, options...)
 	if err != nil {
-		return nil, err
+		return err
 	}
 	if len(upload.Files) != 1 {
-		return nil, errors.New("upload unexpected response")
+		return errors.New("upload unexpected response")
 	}
 	match := f.IDRegexp.FindStringSubmatch(upload.Files[0].URL)
@@ -521,23 +531,27 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
 		// this might need some more error handling. if any of the following requests fail
 		// we'll leave an orphaned temporary file floating around somewhere
 		// they rarely fail though
-		return nil, err
+		return err
 		}
 
 		err = f.move(ctx, fullBase, match[1])
 		if err != nil {
-			return nil, err
+			return err
 		}
 	}
 
 	// rename file to final name
-	err = f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: match[1], NewName: f.opt.Enc.FromStandardName(leaf)})
+	err = f.updateFileInformation(ctx, &api.UpdateFileInformation{
+		Token:    f.opt.AccessToken,
+		FileCode: match[1],
+		NewName:  f.opt.Enc.FromStandardName(leaf),
+		Public:   f.public,
+	})
 	if err != nil {
-		return nil, err
+		return err
 	}
 
 	// finally fetch the file object.
-	return f.NewObject(ctx, remote)
+	return nil
 }
 // Put in to the remote path with the modTime given of the given size
@@ -567,7 +581,11 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
 // This will create a duplicate if we upload a new file without
 // checking to see if there is one already - use Put() for that.
 func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
-	return f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
+	err := f.putUnchecked(ctx, in, src.Remote(), src.Size(), options...)
+	if err != nil {
+		return nil, err
+	}
+	return f.NewObject(ctx, src.Remote())
 }
 
 // CreateDir dir creates a directory with the given parent path
@@ -660,7 +678,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
 	if err != nil {
 		return err
 	}
-	if info.Data.CurrentFolder.FileCount > 0 {
+	if len(info.Data.Folders) > 0 || len(info.Data.Files) > 0 {
 		return fs.ErrorDirectoryNotEmpty
 	}
@@ -696,7 +714,12 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
 
 	// rename to final name if we need to
 	if needRename {
-		err := f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: srcObj.code, NewName: f.opt.Enc.FromStandardName(dstLeaf)})
+		err := f.updateFileInformation(ctx, &api.UpdateFileInformation{
+			Token:    f.opt.AccessToken,
+			FileCode: srcObj.code,
+			NewName:  f.opt.Enc.FromStandardName(dstLeaf),
+			Public:   f.public,
+		})
 		if err != nil {
 			return nil, fmt.Errorf("move: failed final rename: %w", err)
 		}
@@ -888,7 +911,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 	}
 
 	if needRename {
-		err := f.updateFileInformation(ctx, &api.UpdateFileInformation{Token: f.opt.AccessToken, FileCode: newObj.(*Object).code, NewName: f.opt.Enc.FromStandardName(dstLeaf)})
+		err := f.updateFileInformation(ctx, &api.UpdateFileInformation{
+			Token:    f.opt.AccessToken,
+			FileCode: newObj.(*Object).code,
+			NewName:  f.opt.Enc.FromStandardName(dstLeaf),
+			Public:   f.public,
+		})
 		if err != nil {
 			return nil, fmt.Errorf("copy: failed final rename: %w", err)
 		}
@@ -923,7 +951,8 @@ func (o *Object) Remote() string {
 // It attempts to read the objects mtime and if that isn't present the
 // LastModified returned in the http headers
 func (o *Object) ModTime(ctx context.Context) time.Time {
-	return time.Now()
+	ci := fs.GetConfig(ctx)
+	return time.Time(ci.DefaultTime)
 }
 
 // Size returns the size of an object in bytes
@@ -1000,7 +1029,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 	}
 
 	// upload with new size but old name
-	info, err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
+	err := o.fs.putUnchecked(ctx, in, o.Remote(), src.Size(), options...)
 	if err != nil {
 		return err
 	}
@@ -1011,6 +1040,12 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
 		return fmt.Errorf("failed to remove old version: %w", err)
 	}
 
+	// Fetch new object after deleting the duplicate
+	info, err := o.fs.NewObject(ctx, o.Remote())
+	if err != nil {
+		return err
+	}
+
 	// Replace guts of old object with new one
 	*o = *info.(*Object)
@@ -175,6 +175,7 @@ type Fs struct {
 	precision          time.Duration // mod time precision
 	canStream          bool          // set if can stream
 	useOCMtime         bool          // set if can use X-OC-Mtime
+	propsetMtime       bool          // set if can use propset
 	retryWithZeroDepth bool          // some vendors (sharepoint) won't list files when Depth is 1 (our default)
 	checkBeforePurge   bool          // enables extra check that directory to purge really exists
 	hasOCMD5           bool          // set if can use owncloud style checksums for MD5
@@ -568,7 +569,7 @@ func (f *Fs) fetchAndSetBearerToken() error {
 	return nil
 }
 
-var validateNextCloudChunkedURL = regexp.MustCompile(`^.*/dav/files/[^/]+/?$`)
+var validateNextCloudChunkedURL = regexp.MustCompile(`^.*/dav/files/`)
 
 // setQuirks adjusts the Fs for the vendor passed in
 func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
@@ -582,11 +583,13 @@ func (f *Fs) setQuirks(ctx context.Context, vendor string) error {
 		f.canStream = true
 		f.precision = time.Second
 		f.useOCMtime = true
+		f.propsetMtime = true
 		f.hasOCMD5 = true
 		f.hasOCSHA1 = true
 	case "nextcloud":
 		f.precision = time.Second
 		f.useOCMtime = true
+		f.propsetMtime = true
 		f.hasOCSHA1 = true
 		f.canChunk = true
 		if err := f.verifyChunkConfig(); err != nil {
@@ -1047,7 +1050,7 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
 		NoResponse: true,
 		ExtraHeaders: map[string]string{
 			"Destination": destinationURL.String(),
-			"Overwrite":   "F",
+			"Overwrite":   "T",
 		},
 	}
 	if f.useOCMtime {
@@ -1065,6 +1068,13 @@ func (f *Fs) copyOrMove(ctx context.Context, src fs.Object, remote string, metho
 	if err != nil {
 		return nil, fmt.Errorf("copy NewObject failed: %w", err)
 	}
+	if f.useOCMtime && resp.Header.Get("X-OC-Mtime") != "accepted" && f.propsetMtime {
+		fs.Debugf(dstObj, "Setting modtime after copy to %v", src.ModTime(ctx))
+		err = dstObj.SetModTime(ctx, src.ModTime(ctx))
+		if err != nil {
+			return nil, fmt.Errorf("failed to set modtime: %w", err)
+		}
+	}
 	return dstObj, nil
 }
 
@@ -1147,7 +1157,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
 		NoResponse: true,
 		ExtraHeaders: map[string]string{
 			"Destination": addSlash(destinationURL.String()),
-			"Overwrite":   "F",
+			"Overwrite":   "T",
 		},
 	}
 	// Direct the MOVE/COPY to the source server
@@ -1299,8 +1309,53 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
 	return o.modTime
 }
 
+// Set modified time using propset
+//
+// <d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns"><d:response><d:href>/ocm/remote.php/webdav/office/wir.jpg</d:href><d:propstat><d:prop><d:lastmodified/></d:prop><d:status>HTTP/1.1 200 OK</d:status></d:propstat></d:response></d:multistatus>
+var owncloudPropset = `<?xml version="1.0" encoding="utf-8" ?>
+<D:propertyupdate xmlns:D="DAV:">
+ <D:set>
+  <D:prop>
+   <lastmodified xmlns="DAV:">%d</lastmodified>
+  </D:prop>
+ </D:set>
+</D:propertyupdate>
+`
+
 // SetModTime sets the modification time of the local fs object
 func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
+	if o.fs.propsetMtime {
+		opts := rest.Opts{
+			Method:     "PROPPATCH",
+			Path:       o.filePath(),
+			NoRedirect: true,
+			Body:       strings.NewReader(fmt.Sprintf(owncloudPropset, modTime.Unix())),
+		}
+		var result api.Multistatus
+		var resp *http.Response
+		var err error
+		err = o.fs.pacer.Call(func() (bool, error) {
+			resp, err = o.fs.srv.CallXML(ctx, &opts, nil, &result)
+			return o.fs.shouldRetry(ctx, resp, err)
+		})
+		if err != nil {
+			if apiErr, ok := err.(*api.Error); ok {
+				// does not exist
+				if apiErr.StatusCode == http.StatusNotFound {
+					return fs.ErrorObjectNotFound
+				}
+			}
+			return fmt.Errorf("couldn't set modified time: %w", err)
+		}
+		// FIXME check if response is valid
+		if len(result.Responses) == 1 && result.Responses[0].Props.StatusOK() {
+			// update cached modtime
+			o.modTime = modTime
+			return nil
+		}
+		// fallback
+		return fs.ErrorCantSetModTime
+	}
 	return fs.ErrorCantSetModTime
 }
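For a concrete value, `fmt.Sprintf(owncloudPropset, modTime.Unix())` above renders a PROPPATCH request body like the following (1680000000 is an illustrative timestamp, not taken from the diff):

    <?xml version="1.0" encoding="utf-8" ?>
    <D:propertyupdate xmlns:D="DAV:">
     <D:set>
      <D:prop>
       <lastmodified xmlns="DAV:">1680000000</lastmodified>
      </D:prop>
     </D:set>
    </D:propertyupdate>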
@@ -1100,7 +1100,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, overwrite bool, mimeT
 		NoResponse: true,
 	}
 
-	err = o.fs.pacer.Call(func() (bool, error) {
+	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
 		resp, err = o.fs.srv.Call(ctx, &opts)
 		return shouldRetry(ctx, resp, err)
 	})
@@ -1206,7 +1206,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
 	if err != nil {
 		return nil, err
 	}
-	if partialContent && resp.StatusCode == 200 {
+	if partialContent && resp.StatusCode == 200 && resp.Header.Get("Content-Range") == "" {
 		if start > 0 {
 			// We need to read and discard the beginning of the data...
 			_, err = io.CopyN(io.Discard, resp.Body, start)
@@ -98,8 +98,14 @@ Note to run these commands on a running backend then see
 		out, err = doCommand(context.Background(), name, arg, opt)
 	}
 	if err != nil {
+		if err == fs.ErrorCommandNotFound {
+			extra := ""
+			if f.Features().Overlay {
+				extra = " (try the underlying remote)"
+			}
+			return fmt.Errorf("%q %w%s", name, err, extra)
+		}
 		return fmt.Errorf("command %q failed: %w", name, err)
 
 	}
 	// Output the result
 	writeJSON := false
@@ -824,8 +824,9 @@ func touchFiles(ctx context.Context, dateStr string, f fs.Fs, dir, glob string)
 	err = nil
 	buf := new(bytes.Buffer)
 	size := obj.Size()
+	separator := ""
 	if size > 0 {
-		err = operations.Cat(ctx, f, buf, 0, size)
+		err = operations.Cat(ctx, f, buf, 0, size, []byte(separator))
 	}
 	info := object.NewStaticObjectInfo(remote, date, size, true, nil, f)
 	if err == nil {
@@ -16,11 +16,12 @@ import (
 
 // Globals
 var (
-	head    = int64(0)
-	tail    = int64(0)
-	offset  = int64(0)
-	count   = int64(-1)
-	discard = false
+	head      = int64(0)
+	tail      = int64(0)
+	offset    = int64(0)
+	count     = int64(-1)
+	discard   = false
+	separator = string("")
 )
 
 func init() {
@@ -31,6 +32,7 @@ func init() {
 	flags.Int64VarP(cmdFlags, &offset, "offset", "", offset, "Start printing at offset N (or from end if -ve)")
 	flags.Int64VarP(cmdFlags, &count, "count", "", count, "Only print N characters")
 	flags.BoolVarP(cmdFlags, &discard, "discard", "", discard, "Discard the output instead of printing")
+	flags.StringVarP(cmdFlags, &separator, "separator", "", separator, "Separator to use between objects when printing multiple files")
 }
 
 var commandDefinition = &cobra.Command{
@@ -56,6 +58,18 @@ Use the |--head| flag to print characters only at the start, |--tail| for
 the end and |--offset| and |--count| to print a section in the middle.
 Note that if offset is negative it will count from the end, so
 |--offset -1 --count 1| is equivalent to |--tail 1|.
+
+Use the |--separator| flag to print a separator value between files. Be sure to
+shell-escape special characters. For example, to print a newline between
+files, use:
+
+* bash:
+
+      rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
+
+* powershell:
+
+      rclone --include "*.txt" --separator "|n" cat remote:path/to/dir
 `, "|", "`"),
 	Annotations: map[string]string{
 		"versionIntroduced": "v1.33",
@@ -82,7 +96,7 @@ Note that if offset is negative it will count from the end, so
 			w = io.Discard
 		}
 		cmd.Run(false, false, command, func() error {
-			return operations.Cat(context.Background(), fsrc, w, offset, count)
+			return operations.Cat(context.Background(), fsrc, w, offset, count, []byte(separator))
 		})
 	},
 }
@@ -11,7 +11,7 @@ func init() {
 }
 
 var completionDefinition = &cobra.Command{
-	Use:   "genautocomplete [shell]",
+	Use:   "completion [shell]",
 	Short: `Output completion script for a given shell.`,
 	Long: `
 Generates a shell completion script for rclone.
@@ -20,4 +20,5 @@ Run with ` + "`--help`" + ` to list the supported shells.
 	Annotations: map[string]string{
 		"versionIntroduced": "v1.33",
 	},
+	Aliases: []string{"genautocomplete"},
 }
@@ -24,7 +24,7 @@ func init() {
 
 var commandDefinition = &cobra.Command{
 	Use:   "listremotes",
-	Short: `List all the remotes in the config file.`,
+	Short: `List all the remotes in the config file and defined in environment variables.`,
 	Long: `
 rclone listremotes lists all the available remotes from the config file.
@@ -25,15 +25,13 @@ var _ fusefs.HandleReader = (*FileHandle)(nil)
 func (fh *FileHandle) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.ReadResponse) (err error) {
 	var n int
 	defer log.Trace(fh, "len=%d, offset=%d", req.Size, req.Offset)("read=%d, err=%v", &n, &err)
-	data := make([]byte, req.Size)
+	data := resp.Data[:req.Size]
 	n, err = fh.Handle.ReadAt(data, req.Offset)
+	resp.Data = data[:n]
 	if err == io.EOF {
 		err = nil
-	} else if err != nil {
-		return translateError(err)
 	}
-	resp.Data = data[:n]
-	return nil
+	return translateError(err)
 }
 
 // Check interface satisfied
@@ -26,12 +26,14 @@ func init() {
 // man mount.fuse for more info and note the -o flag for other options
 func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.MountOptions) {
 	mountOpts = &fuse.MountOptions{
-		AllowOther:    fsys.opt.AllowOther,
-		FsName:        opt.DeviceName,
-		Name:          "rclone",
-		DisableXAttrs: true,
-		Debug:         fsys.opt.DebugFUSE,
-		MaxReadAhead:  int(fsys.opt.MaxReadAhead),
+		AllowOther:         fsys.opt.AllowOther,
+		FsName:             opt.DeviceName,
+		Name:               "rclone",
+		DisableXAttrs:      true,
+		Debug:              fsys.opt.DebugFUSE,
+		MaxReadAhead:       int(fsys.opt.MaxReadAhead),
+		MaxWrite:           1024 * 1024, // Linux v4.20+ caps requests at 1 MiB
+		DisableReadDirPlus: true,
 
 		// RememberInodes: true,
 		// SingleThreaded: true,
@@ -47,12 +49,42 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
 		// async I/O. Concurrency for synchronous I/O is not limited.
 		MaxBackground int
 
-		// Write size to use. If 0, use default. This number is
-		// capped at the kernel maximum.
+		// MaxWrite is the max size for read and write requests. If 0, use
+		// go-fuse default (currently 64 kiB).
+		// This number is internally capped at MAX_KERNEL_WRITE (higher values don't make
+		// sense).
+		//
+		// Non-direct-io reads are mostly served via kernel readahead, which is
+		// additionally subject to the MaxReadAhead limit.
+		//
+		// Implementation notes:
+		//
+		// There's four values the Linux kernel looks at when deciding the request size:
+		// * MaxWrite, passed via InitOut.MaxWrite. Limits the WRITE size.
+		// * max_read, passed via a string mount option. Limits the READ size.
+		//   go-fuse sets max_read equal to MaxWrite.
+		//   You can see the current max_read value in /proc/self/mounts .
+		// * MaxPages, passed via InitOut.MaxPages. In Linux 4.20 and later, the value
+		//   can go up to 1 MiB and go-fuse calculates the MaxPages value acc.
+		//   to MaxWrite, rounding up.
+		//   On older kernels, the value is fixed at 128 kiB and the
+		//   passed value is ignored. No request can be larger than MaxPages, so
+		//   READ and WRITE are effectively capped at MaxPages.
+		// * MaxReadAhead, passed via InitOut.MaxReadAhead.
 		MaxWrite int
 
-		// Max read ahead to use. If 0, use default. This number is
-		// capped at the kernel maximum.
+		// MaxReadAhead is the max read ahead size to use. It controls how much data the
+		// kernel reads in advance to satisfy future read requests from applications.
+		// How much exactly is subject to clever heuristics in the kernel
+		// (see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/readahead.c?h=v6.2-rc5#n375
+		// if you are brave) and hence also depends on the kernel version.
+		//
+		// If 0, use kernel default. This number is capped at the kernel maximum
+		// (128 kiB on Linux) and cannot be larger than MaxWrite.
+		//
+		// MaxReadAhead only affects buffered reads (=non-direct-io), but even then, the
+		// kernel can and does send larger reads to satisfy read reqests from applications
+		// (up to MaxWrite or VM_READAHEAD_PAGES=128 kiB, whichever is less).
 		MaxReadAhead int
 
 		// If IgnoreSecurityLabels is set, all security related xattr
@@ -87,9 +119,19 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
 		// you must implement the GetLk/SetLk/SetLkw methods.
 		EnableLocks bool
 
 		// If set, the kernel caches all Readlink return values. The
 		// filesystem must use content notification to force the
 		// kernel to issue a new Readlink call.
 		EnableSymlinkCaching bool
 
+		// If set, ask kernel not to do automatic data cache invalidation.
+		// The filesystem is fully responsible for invalidating data cache.
+		ExplicitDataCacheControl bool
+
+		// Disable ReadDirPlus capability so ReadDir is used instead. Simple
+		// directory queries (i.e. 'ls' without '-l') can be faster with
+		// ReadDir, as no per-file stat calls are needed
+		DisableReadDirPlus bool
 	*/
 
 }
@@ -176,8 +218,8 @@ func mount(VFS *vfs.VFS, mountpoint string, opt *mountlib.Options) (<-chan error
 		MountOptions: *mountOpts,
 		EntryTimeout: &opt.AttrTimeout,
 		AttrTimeout:  &opt.AttrTimeout,
-		// UID
-		// GID
+		GID:          VFS.Opt.GID,
+		UID:          VFS.Opt.UID,
 	}
 
 	root, err := fsys.Root()
@@ -1,16 +1,14 @@
-//go:build linux || (darwin && amd64)
-// +build linux darwin,amd64
+//go:build linux
+// +build linux
 
 package mount2
 
 import (
 	"testing"
 
-	"github.com/rclone/rclone/fstest/testy"
 	"github.com/rclone/rclone/vfs/vfstest"
 )
 
 func TestMount(t *testing.T) {
-	testy.SkipUnreliable(t)
 	vfstest.RunTests(t, false, mount)
 }
@@ -85,17 +85,16 @@ func (n *Node) lookupVfsNodeInDir(leaf string) (vfsNode vfs.Node, errno syscall.
 // will not work.
 func (n *Node) Statfs(ctx context.Context, out *fuse.StatfsOut) syscall.Errno {
 	defer log.Trace(n, "")("out=%+v", &out)
-	out = new(fuse.StatfsOut)
 	const blockSize = 4096
-	const fsBlocks = (1 << 50) / blockSize
-	out.Blocks = fsBlocks  // Total data blocks in file system.
-	out.Bfree = fsBlocks   // Free blocks in file system.
-	out.Bavail = fsBlocks  // Free blocks in file system if you're not root.
-	out.Files = 1e9        // Total files in file system.
-	out.Ffree = 1e9        // Free files in file system.
-	out.Bsize = blockSize  // Block size
-	out.NameLen = 255      // Maximum file name length?
-	out.Frsize = blockSize // Fragment size, smallest addressable data size in the file system.
+	total, _, free := n.fsys.VFS.Statfs()
+	out.Blocks = uint64(total) / blockSize // Total data blocks in file system.
+	out.Bfree = uint64(free) / blockSize   // Free blocks in file system.
+	out.Bavail = out.Bfree                 // Free blocks in file system if you're not root.
+	out.Files = 1e9                        // Total files in file system.
+	out.Ffree = 1e9                        // Free files in file system.
+	out.Bsize = blockSize                  // Block size
+	out.NameLen = 255                      // Maximum file name length?
+	out.Frsize = blockSize                 // Fragment size, smallest addressable data size in the file system.
 	mountlib.ClipBlocks(&out.Blocks)
 	mountlib.ClipBlocks(&out.Bfree)
 	mountlib.ClipBlocks(&out.Bavail)
@@ -405,3 +404,40 @@ func (n *Node) Rename(ctx context.Context, oldName string, newParent fusefs.Inod
 }
 
 var _ = (fusefs.NodeRenamer)((*Node)(nil))
+
+// Getxattr should read data for the given attribute into
+// `dest` and return the number of bytes. If `dest` is too
+// small, it should return ERANGE and the size of the attribute.
+// If not defined, Getxattr will return ENOATTR.
+func (n *Node) Getxattr(ctx context.Context, attr string, dest []byte) (uint32, syscall.Errno) {
+	return 0, syscall.ENOSYS // we never implement this
+}
+
+var _ fusefs.NodeGetxattrer = (*Node)(nil)
+
+// Setxattr should store data for the given attribute. See
+// setxattr(2) for information about flags.
+// If not defined, Setxattr will return ENOATTR.
+func (n *Node) Setxattr(ctx context.Context, attr string, data []byte, flags uint32) syscall.Errno {
+	return syscall.ENOSYS // we never implement this
+}
+
+var _ fusefs.NodeSetxattrer = (*Node)(nil)
+
+// Removexattr should delete the given attribute.
+// If not defined, Removexattr will return ENOATTR.
+func (n *Node) Removexattr(ctx context.Context, attr string) syscall.Errno {
+	return syscall.ENOSYS // we never implement this
+}
+
+var _ fusefs.NodeRemovexattrer = (*Node)(nil)
+
+// Listxattr should read all attributes (null terminated) into
+// `dest`. If the `dest` buffer is too small, it should return ERANGE
+// and the correct size. If not defined, return an empty list and
+// success.
+func (n *Node) Listxattr(ctx context.Context, dest []byte) (uint32, syscall.Errno) {
+	return 0, syscall.ENOSYS // we never implement this
+}
+
+var _ fusefs.NodeListxattrer = (*Node)(nil)
@@ -261,6 +261,17 @@ Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
 FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
 which "mounts" via an NFSv4 local server.
 
+#### macFUSE Notes
+
+If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
+the website, rclone will locate the macFUSE libraries without any further intervention.
+If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager,
+the following additional steps are required.
+
+    sudo mkdir /usr/local/lib
+    cd /usr/local/lib
+    sudo ln -s /opt/local/lib/libfuse.2.dylib
+
 #### FUSE-T Limitations, Caveats, and Notes
 
 There are some limitations, caveats, and notes about how it works. These are current as
@@ -397,20 +408,19 @@ or create systemd mount units:
 ```
 # /etc/systemd/system/mnt-data.mount
 [Unit]
 After=network-online.target
 Description=Mount for /mnt/data
 [Mount]
 Type=rclone
 What=sftp1:subdir
 Where=/mnt/data
-Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
+Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
 ```
 
 optionally accompanied by systemd automount unit
 
 ```
 # /etc/systemd/system/mnt-data.automount
 [Unit]
 After=network-online.target
 Before=remote-fs.target
 Description=AutoMount for /mnt/data
 [Automount]
 Where=/mnt/data
 TimeoutIdleSec=600
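Once the units are in place, the standard systemd workflow applies (these commands are the usual systemd procedure, not part of the diff):

    sudo systemctl daemon-reload
    sudo systemctl enable --now mnt-data.automount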
@@ -68,13 +68,14 @@ var cmdSelfUpdate = &cobra.Command{
 		"versionIntroduced": "v1.55",
 	},
 	Run: func(command *cobra.Command, args []string) {
+		ctx := context.Background()
 		cmd.CheckArgs(0, 0, command, args)
 		if Opt.Package == "" {
 			Opt.Package = "zip"
 		}
 		gotActionFlags := Opt.Stable || Opt.Beta || Opt.Output != "" || Opt.Version != "" || Opt.Package != "zip"
 		if Opt.Check && !gotActionFlags {
-			versionCmd.CheckVersion()
+			versionCmd.CheckVersion(ctx)
 			return
 		}
 		if Opt.Package != "zip" {
@@ -108,7 +109,7 @@ func GetVersion(ctx context.Context, beta bool, version string) (newVersion, sit
 
 	if version == "" {
 		// Request the latest release number from the download site
-		_, newVersion, _, err = versionCmd.GetVersion(siteURL + "/version.txt")
+		_, newVersion, _, err = versionCmd.GetVersion(ctx, siteURL+"/version.txt")
 		return
 	}
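With this change the version check goes through rclone's configured HTTP client rather than the default one, which is what lets it obey flags such as `--no-check-certificate` (see the changelog later in this section). The command line usage is unchanged:

    rclone selfupdate --check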
@@ -303,7 +303,7 @@ func (s *server) Serve() (err error) {
 	go func() {
 		fs.Logf(s.f, "Serving HTTP on %s", s.HTTPConn.Addr().String())
 
-		err = s.serveHTTP()
+		err := s.serveHTTP()
 		if err != nil {
 			fs.Logf(s.f, "Error on serving HTTP server: %v", err)
 		}
@@ -3,10 +3,12 @@ package webdav
 
 import (
 	"context"
+	"encoding/xml"
 	"errors"
 	"fmt"
 	"net/http"
 	"os"
+	"strconv"
 	"strings"
 	"time"
@@ -255,6 +257,52 @@ func (w *WebDAV) auth(user, pass string) (value interface{}, err error) {
 	return VFS, err
 }
 
+type webdavRW struct {
+	http.ResponseWriter
+	status int
+}
+
+func (rw *webdavRW) WriteHeader(statusCode int) {
+	rw.status = statusCode
+	rw.ResponseWriter.WriteHeader(statusCode)
+}
+
+func (rw *webdavRW) isSuccessfull() bool {
+	return rw.status == 0 || (rw.status >= 200 && rw.status <= 299)
+}
+
+func (w *WebDAV) postprocess(r *http.Request, remote string) {
+	// set modtime from requests, don't write to client because status is already written
+	switch r.Method {
+	case "COPY", "MOVE", "PUT":
+		VFS, err := w.getVFS(r.Context())
+		if err != nil {
+			fs.Errorf(nil, "Failed to get VFS: %v", err)
+			return
+		}
+
+		// Get the node
+		node, err := VFS.Stat(remote)
+		if err != nil {
+			fs.Errorf(nil, "Failed to stat node: %v", err)
+			return
+		}
+
+		mh := r.Header.Get("X-OC-Mtime")
+		if mh != "" {
+			modtimeUnix, err := strconv.ParseInt(mh, 10, 64)
+			if err == nil {
+				err = node.SetModTime(time.Unix(modtimeUnix, 0))
+				if err != nil {
+					fs.Errorf(nil, "Failed to set modtime: %v", err)
+				}
+			} else {
+				fs.Errorf(nil, "Failed to parse modtime: %v", err)
+			}
+		}
+	}
+}
+
 func (w *WebDAV) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
 	urlPath := r.URL.Path
 	isDir := strings.HasSuffix(urlPath, "/")
@@ -266,7 +314,12 @@ func (w *WebDAV) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
 	// Add URL Prefix back to path since webdavhandler needs to
 	// return absolute references.
 	r.URL.Path = w.opt.HTTP.BaseURL + r.URL.Path
-	w.webdavhandler.ServeHTTP(rw, r)
+	wrw := &webdavRW{ResponseWriter: rw}
+	w.webdavhandler.ServeHTTP(wrw, r)
+
+	if wrw.isSuccessfull() {
+		w.postprocess(r, remote)
+	}
 }
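Given the handler above, a client can ask the server to stamp an upload with a specific modification time by sending `X-OC-Mtime` as a Unix timestamp on a PUT, COPY or MOVE. An illustrative request against a hypothetical local `rclone serve webdav` instance (address, credentials and timestamp are assumptions, not from the diff):

    curl -u user:pass -T file.txt -H "X-OC-Mtime: 1680000000" http://127.0.0.1:8080/file.txt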
 // serveDir serves a directory index at dirRemote
@@ -356,7 +409,7 @@ func (w *WebDAV) OpenFile(ctx context.Context, name string, flags int, perm os.F
 	if err != nil {
 		return nil, err
 	}
-	return Handle{Handle: f, w: w}, nil
+	return Handle{Handle: f, w: w, ctx: ctx}, nil
 }
 
 // RemoveAll removes a file or a directory and its contents
@@ -404,7 +457,8 @@ func (w *WebDAV) Stat(ctx context.Context, name string) (fi os.FileInfo, err err
 // Handle represents an open file
 type Handle struct {
 	vfs.Handle
-	w *WebDAV
+	w   *WebDAV
+	ctx context.Context
 }
 
 // Readdir reads directory entries from the handle
@@ -429,6 +483,65 @@ func (h Handle) Stat() (fi os.FileInfo, err error) {
 	return FileInfo{FileInfo: fi, w: h.w}, nil
 }
+// DeadProps returns extra properties about the handle
+func (h Handle) DeadProps() (map[xml.Name]webdav.Property, error) {
+	var (
+		xmlName    xml.Name
+		property   webdav.Property
+		properties = make(map[xml.Name]webdav.Property)
+	)
+	if h.w.opt.HashType != hash.None {
+		entry := h.Handle.Node().DirEntry()
+		if o, ok := entry.(fs.Object); ok {
+			hash, err := o.Hash(h.ctx, h.w.opt.HashType)
+			if err == nil {
+				xmlName.Space = "http://owncloud.org/ns"
+				xmlName.Local = "checksums"
+				property.XMLName = xmlName
+				property.InnerXML = append(property.InnerXML, "<checksum xmlns=\"http://owncloud.org/ns\">"...)
+				property.InnerXML = append(property.InnerXML, strings.ToUpper(h.w.opt.HashType.String())...)
+				property.InnerXML = append(property.InnerXML, ':')
+				property.InnerXML = append(property.InnerXML, hash...)
+				property.InnerXML = append(property.InnerXML, "</checksum>"...)
+				properties[xmlName] = property
+			} else {
+				fs.Errorf(nil, "failed to calculate hash: %v", err)
+			}
+		}
+	}
+
+	xmlName.Space = "DAV:"
+	xmlName.Local = "lastmodified"
+	property.XMLName = xmlName
+	property.InnerXML = strconv.AppendInt(nil, h.Handle.Node().ModTime().Unix(), 10)
+	properties[xmlName] = property
+
+	return properties, nil
+}
+
+// Patch changes modtime of the underlying resources, it returns ok for all properties, the error is from setModtime if any
+// FIXME does not check for invalid property and SetModTime error
+func (h Handle) Patch(proppatches []webdav.Proppatch) ([]webdav.Propstat, error) {
+	var (
+		stat webdav.Propstat
+		err  error
+	)
+	stat.Status = http.StatusOK
+	for _, patch := range proppatches {
+		for _, prop := range patch.Props {
+			stat.Props = append(stat.Props, webdav.Property{XMLName: prop.XMLName})
+			if prop.XMLName.Space == "DAV:" && prop.XMLName.Local == "lastmodified" {
+				var modtimeUnix int64
+				modtimeUnix, err = strconv.ParseInt(string(prop.InnerXML), 10, 64)
+				if err == nil {
+					err = h.Handle.Node().SetModTime(time.Unix(modtimeUnix, 0))
+				}
+			}
+		}
+	}
+	return []webdav.Propstat{stat}, err
+}
+
 // FileInfo represents info about a file satisfying os.FileInfo and
 // also some additional interfaces for webdav for ETag and ContentType
 type FileInfo struct {
@@ -65,7 +65,7 @@ func TestWebDav(t *testing.T) {
 	// Config for the backend we'll use to connect to the server
 	config := configmap.Simple{
 		"type":   "webdav",
-		"vendor": "other",
+		"vendor": "owncloud",
 		"url":    w.Server.URLs()[0],
 		"user":   testUser,
 		"pass":   obscure.MustObscure(testPass),
@@ -2,6 +2,7 @@
 package version
 
 import (
+	"context"
 	"errors"
 	"fmt"
 	"io"
@@ -13,6 +14,7 @@ import (
 	"github.com/rclone/rclone/cmd"
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/fs/config/flags"
+	"github.com/rclone/rclone/fs/fshttp"
 	"github.com/spf13/cobra"
 )
 
@@ -71,9 +73,10 @@ Or
 		"versionIntroduced": "v1.33",
 	},
 	Run: func(command *cobra.Command, args []string) {
+		ctx := context.Background()
 		cmd.CheckArgs(0, 0, command, args)
 		if check {
-			CheckVersion()
+			CheckVersion(ctx)
 		} else {
 			cmd.ShowVersion()
 		}
@@ -89,8 +92,8 @@ func stripV(s string) string {
 }
 
 // GetVersion gets the version available for download
-func GetVersion(url string) (v *semver.Version, vs string, date time.Time, err error) {
-	resp, err := http.Get(url)
+func GetVersion(ctx context.Context, url string) (v *semver.Version, vs string, date time.Time, err error) {
+	resp, err := fshttp.NewClient(ctx).Get(url)
 	if err != nil {
 		return v, vs, date, err
 	}
@@ -114,7 +117,7 @@ func GetVersion(ctx context.Context, url string) (v *semver.Version, vs string, 
 }
 
 // CheckVersion checks the installed version against available downloads
-func CheckVersion() {
+func CheckVersion(ctx context.Context) {
 	vCurrent, err := semver.NewVersion(stripV(fs.Version))
 	if err != nil {
 		fs.Errorf(nil, "Failed to parse version: %v", err)
@@ -122,7 +125,7 @@ func CheckVersion() {
 	const timeFormat = "2006-01-02"
 
 	printVersion := func(what, url string) {
-		v, vs, t, err := GetVersion(url + "version.txt")
+		v, vs, t, err := GetVersion(ctx, url+"version.txt")
 		if err != nil {
 			fs.Errorf(nil, "Failed to get rclone %s version: %v", what, err)
 			return
@@ -113,7 +113,7 @@ WebDAV or S3, that work out of the box.)
 {{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
 {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
 {{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
-{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
+{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.ir/en/products/cloud-storage" config="/s3/#arvan-cloud-object-storage-aos" >}}
 {{< provider name="Citrix ShareFile" home="http://sharefile.com/" config="/sharefile/" >}}
 {{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
 {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
@@ -146,12 +146,14 @@ WebDAV or S3, that work out of the box.)
 {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
 {{< provider name="Nextcloud" home="https://nextcloud.com/" config="/webdav/#nextcloud" >}}
 {{< provider name="OVH" home="https://www.ovh.co.uk/public-cloud/storage/object-storage/" config="/swift/" >}}
+{{< provider name="Blomp Cloud Storage" home="https://rclone.org/swift/" config="/swift/" >}}
 {{< provider name="OpenDrive" home="https://www.opendrive.com/" config="/opendrive/" >}}
 {{< provider name="OpenStack Swift" home="https://docs.openstack.org/swift/latest/" config="/swift/" >}}
 {{< provider name="Oracle Cloud Storage Swift" home="https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html" config="/swift/" >}}
 {{< provider name="Oracle Object Storage" home="https://www.oracle.com/cloud/storage/object-storage" config="/oracleobjectstorage/" >}}
 {{< provider name="ownCloud" home="https://owncloud.org/" config="/webdav/#owncloud" >}}
 {{< provider name="pCloud" home="https://www.pcloud.com/" config="/pcloud/" >}}
+{{< provider name="Petabox" home="https://petabox.io/" config="/s3/#petabox" >}}
+{{< provider name="PikPak" home="https://mypikpak.com/" config="/pikpak/" >}}
 {{< provider name="premiumize.me" home="https://premiumize.me/" config="/premiumizeme/" >}}
 {{< provider name="put.io" home="https://put.io/" config="/putio/" >}}
@@ -587,7 +587,7 @@ put them back in again.` >}}
 * Leroy van Logchem <lr.vanlogchem@gmail.com>
 * Zsolt Ero <zsolt.ero@gmail.com>
 * Lesmiscore <nao20010128@gmail.com>
-* ehsantdy <ehsan.tadayon@arvancloud.com>
+* ehsantdy <ehsan.tadayon@arvancloud.com> <ehsantadayon85@gmail.com>
 * SwazRGB <65694696+swazrgb@users.noreply.github.com>
 * Mateusz Puczyński <mati6095@gmail.com>
 * Michael C Tiernan - MIT-Research Computing Project <mtiernan@mit.edu>
@@ -713,3 +713,28 @@ put them back in again.` >}}
 * wiserain <mail275@gmail.com>
 * Roel Arents <roel.arents@kadaster.nl>
 * Shyim <github@shyim.de>
+* Rintze Zelle <78232505+rzelle-lallemand@users.noreply.github.com>
+* Damo <damoclark@users.noreply.github.com>
+* WeidiDeng <weidi_deng@icloud.com>
+* Brian Starkey <stark3y@gmail.com>
+* jladbrook <jhladbrook@gmail.com>
+* Loren Gordon <lorengordon@users.noreply.github.com>
+* dlitster <davidlitster@gmail.com>
+* Tobias Gion <tobias@gion.io>
+* Jānis Bebrītis <janis.bebritis@wunder.io>
+* Adam K <github.com@ak.tidy.email>
+* Andrei Smirnov <smirnov.captain@gmail.com>
+* Janne Hellsten <jjhellst@gmail.com>
+* cc <12904584+shvc@users.noreply.github.com>
+* Tareq Sharafy <tareq.sha@gmail.com>
+* kapitainsky <dariuszb@me.com>
+* douchen <playgoobug@gmail.com>
+* Sam Lai <70988+slai@users.noreply.github.com>
+* URenko <18209292+URenko@users.noreply.github.com>
+* Stanislav Gromov <kullfar@gmail.com>
+* Paulo Schreiner <paulo.schreiner@delivion.de>
+* Mariusz Suchodolski <mariusz@suchodol.ski>
+* danielkrajnik <dan94kra@gmail.com>
+* Peter Fern <github@0xc0dedbad.com>
+* zzq <i@zhangzqs.cn>
+* mac-15 <usman.ilamdin@phpstudios.com>
@@ -162,6 +162,12 @@ It reads configuration from these variables, in the following order:
    - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
    - `AZURE_USERNAME`: a username (usually an email address)
    - `AZURE_PASSWORD`: the user's password
+4. Workload Identity
+   - `AZURE_TENANT_ID`: Tenant to authenticate in.
+   - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
+   - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
+   - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
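In a workload identity setup these variables are normally injected by the platform (for example by a Kubernetes webhook); set by hand they would look something like this, reusing the document's own `lsf` example (the values are illustrative placeholders, not from the document):

    export AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
    export AZURE_CLIENT_ID=11111111-1111-1111-1111-111111111111
    export AZURE_FEDERATED_TOKEN_FILE=/var/run/secrets/azure/tokens/azure-identity-token
    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER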
 
 ##### Env Auth: 2. Managed Service Identity Credentials
 
@@ -189,7 +195,7 @@ Then you could access rclone resources like this:
 
 Or
 
-    rclone lsf --azureblob-env-auth --azureblob-acccount=ACCOUNT :azureblob:CONTAINER
+    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
 
 Which is analogous to using the `az` tool:
@@ -786,6 +792,24 @@ Properties:
     - "container"
         - Allow full public read access for container and blob data.
 
+#### --azureblob-directory-markers
+
+Upload an empty object with a trailing slash when a new directory is created
+
+Empty folders are unsupported for bucket based remotes, this option
+creates an empty object ending with "/", to persist the folder.
+
+This object also has the metadata "hdi_isfolder = true" to conform to
+the Microsoft standard.
+
+Properties:
+
+- Config:      directory_markers
+- Env Var:     RCLONE_AZUREBLOB_DIRECTORY_MARKERS
+- Type:        bool
+- Default:     false
+
 #### --azureblob-no-check-container
 
 If set, don't attempt to check the container exists or create it.
@@ -5,6 +5,177 @@ description: "Rclone Changelog"
 
 # Changelog
 
+## v1.63.0 - 2023-06-30
+
+[See commits](https://github.com/rclone/rclone/compare/v1.62.0...v1.63.0)
+
+* New backends
+    * [Pikpak](/pikpak/) (wiserain)
+* New S3 providers
+    * [petabox.io](/s3/#petabox) (Andrei Smirnov)
+    * [Google Cloud Storage](/s3/#google-cloud-storage) (Anthony Pessy)
+* New WebDAV providers
+    * [Fastmail](/webdav/#fastmail-files) (Arnavion)
+* Major changes
+    * Files will be copied to a temporary name ending in `.partial` when copying to `local`, `ftp`, `sftp` then renamed at the end of the transfer. (Janne Hellsten, Nick Craig-Wood)
+        * This helps with data integrity as we don't delete the existing file until the new one is complete.
+        * It can be disabled with the [--inplace](/docs/#inplace) flag.
+        * This behaviour will also happen if the backend is wrapped, for example `sftp` wrapped with `crypt`.
+    * The [s3](/s3/#s3-directory-markers), [azureblob](/azureblob/#azureblob-directory-markers) and [gcs](/googlecloudstorage/#gcs-directory-markers) backends now support directory markers so empty directories are supported (Jānis Bebrītis, Nick Craig-Wood)
+    * The [--default-time](/docs/#default-time-time) flag now controls the unknown modification time of files/dirs (Nick Craig-Wood)
+        * If a file or directory does not have a modification time rclone can read then rclone will display this fixed time instead.
+        * For the old behaviour use `--default-time 0s` which will set this time to the time rclone started up.
+* New Features
+    * build
+        * Modernise linters in use and fixup all affected code (albertony)
+        * Push docker beta to GHCR (GitHub container registry) (Richard Tweed)
+    * cat: Add `--separator` option to cat command (Loren Gordon)
+    * config
+        * Do not remove/overwrite other files during config file save (albertony)
+        * Do not overwrite config file symbolic link (albertony)
+        * Stop `config create` making invalid config files (Nick Craig-Wood)
+    * doc updates (Adam K, Aditya Basu, albertony, asdffdsazqqq, Damo, danielkrajnik, Dimitri Papadopoulos, dlitster, Drew Parsons, jumbi77, kapitainsky, mac-15, Mariusz Suchodolski, Nick Craig-Wood, NickIAm, Rintze Zelle, Stanislav Gromov, Tareq Sharafy, URenko, yuudi, Zach Kipp)
+    * fs
+        * Add `size` to JSON logs when moving or copying an object (Nick Craig-Wood)
+        * Allow boolean features to be enabled with `--disable !Feature` (Nick Craig-Wood)
+    * genautocomplete: Rename to `completion` with alias to the old name (Nick Craig-Wood)
+    * librclone: Added example on using `librclone` with Go (alankrit)
+    * lsjson: Make `--stat` more efficient (Nick Craig-Wood)
+    * operations
+        * Implement `--multi-thread-write-buffer-size` for speed improvements on downloads (Paulo Schreiner)
+        * Reopen downloads on error when using `check --download` and `cat` (Nick Craig-Wood)
+    * rc: `config/listremotes` includes remotes defined with environment variables (kapitainsky)
+    * selfupdate: Obey `--no-check-certificate` flag (Nick Craig-Wood)
+    * serve restic: Trigger systemd notify (Shyim)
+    * serve webdav: Implement owncloud checksum and modtime extensions (WeidiDeng)
+    * sync: `--suffix-keep-extension` preserve 2 part extensions like .tar.gz (Nick Craig-Wood)
+* Bug Fixes
+    * accounting
+        * Fix Prometheus metrics to be the same as `core/stats` (Nick Craig-Wood)
+        * Bwlimit signal handler should always start (Sam Lai)
+    * bisync: Fix `maxDelete` parameter being ignored via the rc (Nick Craig-Wood)
+    * cmd/ncdu: Fix screen corruption when logging (eNV25)
+    * filter: Fix deadlock with errors on `--files-from` (douchen)
+    * fs
+        * Fix interaction between `--progress` and `--interactive` (Nick Craig-Wood)
+        * Fix infinite recursive call in pacer ModifyCalculator (fixes issue reported by the staticcheck linter) (albertony)
+    * lib/atexit: Ensure OnError only calls cancel function once (Nick Craig-Wood)
+    * lib/rest: Fix problems re-using HTTP connections (Nick Craig-Wood)
+    * rc
+        * Fix `operations/stat` with trailing `/` (Nick Craig-Wood)
+        * Fix missing `--rc` flags (Nick Craig-Wood)
+        * Fix output of Time values in `options/get` (Nick Craig-Wood)
+    * serve dlna: Fix potential data race (Nick Craig-Wood)
+    * version: Fix reported os/kernel version for windows (albertony)
+* Mount
+    * Add `--mount-case-insensitive` to force the mount to be case insensitive (Nick Craig-Wood)
+    * Removed unnecessary byte slice allocation for reads (Anagh Kumar Baranwal)
+    * Clarify rclone mount error when installed via homebrew (Nick Craig-Wood)
+    * Added _netdev to the example mount so it gets treated as a remote-fs rather than local-fs (Anagh Kumar Baranwal)
+* Mount2
+    * Updated go-fuse version (Anagh Kumar Baranwal)
+    * Fixed statfs (Anagh Kumar Baranwal)
+    * Disable xattrs (Anagh Kumar Baranwal)
+* VFS
+    * Add MkdirAll function to make a directory and all beneath (Nick Craig-Wood)
+    * Fix reload: failed to add virtual dir entry: file does not exist (Nick Craig-Wood)
+    * Fix writing to a read only directory creating spurious directory entries (WeidiDeng)
+    * Fix potential data race (Nick Craig-Wood)
+    * Fix backends being Shutdown too early when startup takes a long time (Nick Craig-Wood)
+* Local
+    * Fix filtering of symlinks with `-l`/`--links` flag (Nick Craig-Wood)
+    * Fix /path/to/file.rclonelink when `-l`/`--links` is in use (Nick Craig-Wood)
+    * Fix crash with `--metadata` on Android (Nick Craig-Wood)
+* Cache
+    * Fix backends shutting down when in use when used via the rc (Nick Craig-Wood)
+* Crypt
+    * Add `--crypt-suffix` option to set a custom suffix for encrypted files (jladbrook)
+    * Add `--crypt-pass-bad-blocks` to allow corrupted file output (Nick Craig-Wood)
+    * Fix reading 0 length files (Nick Craig-Wood)
+    * Try not to return "unexpected EOF" error (Nick Craig-Wood)
+    * Reduce allocations (albertony)
+    * Recommend Dropbox for `base32768` encoding (Nick Craig-Wood)
+* Azure Blob
+    * Empty directory markers (Nick Craig-Wood)
+    * Support azure workload identities (Tareq Sharafy)
+    * Fix azure blob uploads with multiple bits of metadata (Nick Craig-Wood)
+    * Fix azurite compatibility by sending nil tier if set to empty string (Roel Arents)
+* Combine
+    * Implement missing methods (Nick Craig-Wood)
+    * Fix goroutine stack overflow on bad object (Nick Craig-Wood)
+* Drive
+    * Add `--drive-env-auth` to get IAM credentials from runtime (Peter Brunner)
+    * Update drive service account guide (Juang, Yi-Lin)
+    * Fix change notify picking up files outside the root (Nick Craig-Wood)
+    * Fix trailing slash mis-identification of folder as file (Nick Craig-Wood)
+    * Fix incorrect remote after Update on object (Nick Craig-Wood)
+* Dropbox
+    * Implement `--dropbox-pacer-min-sleep` flag (Nick Craig-Wood)
+    * Fix the dropbox batcher stalling (Misty)
+* Fichier
+    * Add `--fichier-cdn` option to use the CDN for download (Nick Craig-Wood)
+* FTP
+    * Lower log message priority when `SetModTime` is not supported to debug (Tobias Gion)
+    * Fix "unsupported LIST line" errors on startup (Nick Craig-Wood)
+    * Fix "501 Not a valid pathname." errors when creating directories (Nick Craig-Wood)
+* Google Cloud Storage
+    * Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
+    * Added `--gcs-user-project` needed for requester pays (Christopher Merry)
+* HTTP
+    * Add client certificate user auth middleware. This can auth `serve restic` from the username in the client cert. (Peter Fern)
+* Jottacloud
+    * Fix vfs writeback stuck in a failed upload loop with file versioning disabled (albertony)
+* Onedrive
+    * Add `--onedrive-av-override` flag to download files flagged as virus (Nick Craig-Wood)
+    * Fix quickxorhash on 32 bit architectures (Nick Craig-Wood)
+    * Report any list errors during `rclone cleanup` (albertony)
+* Putio
+    * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
+    * Fix modification times not being preserved for server side copy and move (Nick Craig-Wood)
+    * Fix server side copy failures (400 errors) (Nick Craig-Wood)
+* S3
+    * Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
+    * Update Scaleway storage classes (Brian Starkey)
+    * Fix `--s3-versions` on individual objects (Nick Craig-Wood)
+    * Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
+    * Fix missing "tier" metadata (Nick Craig-Wood)
+    * Fix V3sign: add missing subresource delete (cc)
+    * Fix Arvancloud Domain and region changes and alphabetise the provider (Ehsan Tadayon)
+    * Fix Qiniu KODO quirks virtualHostStyle is false (zzq)
+* SFTP
+    * Add `--sftp-host-key-algorithms` to allow specifying SSH host key algorithms (Joel)
+    * Fix using `--sftp-key-use-agent` and `--sftp-key-file` together needing private key file (Arnav Singh)
+    * Fix move to allow overwriting existing files (Nick Craig-Wood)
+    * Don't stat directories before listing them (Nick Craig-Wood)
+    * Don't check remote points to a file if it ends with / (Nick Craig-Wood)
+* Sharefile
+    * Disable streamed transfers as they no longer work (Nick Craig-Wood)
+* Smb
+    * Code cleanup to avoid overwriting ctx before first use (fixes issue reported by the staticcheck linter) (albertony)
+* Storj
+    * Fix "uplink: too many requests" errors when uploading to the same file (Nick Craig-Wood)
+    * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
+* Swift
+    * Ignore 404 error when deleting an object (Nick Craig-Wood)
+* Union
+    * Implement missing methods (Nick Craig-Wood)
+    * Allow errors to be unwrapped for inspection (Nick Craig-Wood)
+* Uptobox
+    * Add `--uptobox-private` flag to make all uploaded files private (Nick Craig-Wood)
+    * Fix improper regex (Aaron Gokaslan)
+    * Fix Update returning the wrong object (Nick Craig-Wood)
+    * Fix rmdir declaring that directories weren't empty (Nick Craig-Wood)
+* WebDAV
+    * nextcloud: Add support for chunked uploads (Paul)
+    * Set modtime using propset for owncloud and nextcloud (WeidiDeng)
|
||||
* Make pacer minSleep configurable with `--webdav-pacer-min-sleep` (ed)
|
||||
* Fix server side copy/move not overwriting (WeidiDeng)
|
||||
* Fix modtime on server side copy for owncloud and nextcloud (Nick Craig-Wood)
|
||||
* Yandex
|
||||
* Fix 400 Bad Request on transfer failure (Nick Craig-Wood)
|
||||
* Zoho
|
||||
* Fix downloads with `Range:` header returning the wrong data (Nick Craig-Wood)
|
||||
|
||||
## v1.62.2 - 2023-03-16
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.62.1...v1.62.2)
|
||||
|
||||
@@ -218,7 +218,7 @@ guarantee given hash for all files. If wrapped remote doesn't support it,
 chunker will then add metadata to all files, even small. However, this can
 double the amount of small files in storage and incur additional service charges.
 You can even use chunker to force md5/sha1 support in any other remote
-at expense of sidecar meta objects by setting e.g. `chunk_type=sha1all`
+at expense of sidecar meta objects by setting e.g. `hash_type=sha1all`
 to force hashsums and `chunk_size=1P` to effectively disable chunking.

 Normally, when a file is copied to chunker controlled remote, chunker
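For context, the corrected option name can be seen alongside `chunk_size` in a chunker remote definition. This is only an illustrative sketch; the remote names `mychunker` and `myremote` are invented:

```
# rclone.conf entry forcing SHA1 sums while effectively disabling chunking
[mychunker]
type = chunker
remote = myremote:bucket
hash_type = sha1all
chunk_size = 1P
```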
@@ -42,7 +42,7 @@ See the [global flags page](/flags/) for global options not listed here.
 * [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
 * [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
 * [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
-* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
 * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
 * [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping identical files.
 * [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping identical files.
@@ -52,11 +52,10 @@ See the [global flags page](/flags/) for global options not listed here.
 * [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate filenames and delete/rename them.
 * [rclone delete](/commands/rclone_delete/) - Remove the files in path.
 * [rclone deletefile](/commands/rclone_deletefile/) - Remove a single file from remote.
-* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
 * [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
 * [rclone hashsum](/commands/rclone_hashsum/) - Produces a hashsum file for all the objects in the path.
 * [rclone link](/commands/rclone_link/) - Generate public link to file/folder.
-* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file.
+* [rclone listremotes](/commands/rclone_listremotes/) - List all the remotes in the config file and defined in environment variables.
 * [rclone ls](/commands/rclone_ls/) - List the objects in the path with size and path.
 * [rclone lsd](/commands/rclone_lsd/) - List all directories/containers/buckets in the path.
 * [rclone lsf](/commands/rclone_lsf/) - List directories and objects in remote:path formatted for parsing.
@@ -32,6 +32,18 @@ the end and `--offset` and `--count` to print a section in the middle.
 Note that if offset is negative it will count from the end, so
 `--offset -1 --count 1` is equivalent to `--tail 1`.

+Use the `--separator` flag to print a separator value between files. Be sure to
+shell-escape special characters. For example, to print a newline between
+files, use:
+
+* bash:
+
+      rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
+
+* powershell:
+
+      rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
+

 ```
 rclone cat remote:path [flags]
@@ -40,12 +52,13 @@
 ## Options

 ```
-      --count int Only print N characters (default -1)
-      --discard Discard the output instead of printing
-      --head int Only print the first N characters
-  -h, --help help for cat
-      --offset int Start printing at offset N (or from end if -ve)
-      --tail int Only print the last N characters
+      --count int Only print N characters (default -1)
+      --discard Discard the output instead of printing
+      --head int Only print the first N characters
+  -h, --help help for cat
+      --offset int Start printing at offset N (or from end if -ve)
+      --separator string Separator to use between objects when printing multiple files
+      --tail int Only print the last N characters
 ```

 See the [global flags page](/flags/) for global options not listed here.
@@ -52,8 +52,9 @@ you what happened to it. These are reminiscent of diff files.
 - `* path` means path was present in source and destination but different.
 - `! path` means there was an error reading or hashing the source or dest.

-The default number of parallel checks is N=8. See the [--checkers=N](/docs/#checkers-n) option
-for more information.
+The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
+option for more information.


 ```
 rclone check source:path dest:path [flags]
@@ -44,6 +44,9 @@ you what happened to it. These are reminiscent of diff files.
 - `* path` means path was present in source and destination but different.
 - `! path` means there was an error reading or hashing the source or dest.

+The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
+option for more information.
+

 ```
 rclone checksum <hash> sumfile src:path [flags]
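Both commands honour the global `--checkers` flag referenced above. As a quick sketch (paths and sum file name are placeholders):

```
# override the default of 8 parallel checks
rclone check source:path dest:path --checkers 16

# the same knob applies when verifying against a SUM file
rclone checksum sha1 SHA1SUMS src:path --checkers 16
```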
@@ -1,18 +1,20 @@
 ---
 title: "rclone completion"
-description: "Generate the autocompletion script for the specified shell"
+description: "Output completion script for a given shell."
 slug: rclone_completion
 url: /commands/rclone_completion/
+versionIntroduced: v1.33
 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/ and as part of making a release run "make commanddocs"
 ---
 # rclone completion

-Generate the autocompletion script for the specified shell
+Output completion script for a given shell.

 ## Synopsis

-Generate the autocompletion script for rclone for the specified shell.
-See each sub-command's help for details on how to use the generated script.
+
+Generates a shell completion script for rclone.
+Run with `--help` to list the supported shells.
+

 ## Options
@@ -26,8 +28,7 @@ See the [global flags page](/flags/) for global options not listed here.
 ## SEE ALSO

 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
-* [rclone completion bash](/commands/rclone_completion_bash/) - Generate the autocompletion script for bash
-* [rclone completion fish](/commands/rclone_completion_fish/) - Generate the autocompletion script for fish
-* [rclone completion powershell](/commands/rclone_completion_powershell/) - Generate the autocompletion script for powershell
-* [rclone completion zsh](/commands/rclone_completion_zsh/) - Generate the autocompletion script for zsh
+* [rclone completion bash](/commands/rclone_completion_bash/) - Output bash completion script for rclone.
+* [rclone completion fish](/commands/rclone_completion_fish/) - Output fish completion script for rclone.
+* [rclone completion zsh](/commands/rclone_completion_zsh/) - Output zsh completion script for rclone.
@@ -1,52 +1,48 @@
 ---
 title: "rclone completion bash"
-description: "Generate the autocompletion script for bash"
+description: "Output bash completion script for rclone."
 slug: rclone_completion_bash
 url: /commands/rclone_completion_bash/
 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/bash/ and as part of making a release run "make commanddocs"
 ---
 # rclone completion bash

-Generate the autocompletion script for bash
+Output bash completion script for rclone.

 ## Synopsis

-Generate the autocompletion script for the bash shell.
-
-This script depends on the 'bash-completion' package.
-If it is not installed already, you can install it via your OS's package manager.
+Generates a bash shell autocompletion script for rclone.

-To load completions in your current shell session:
+This writes to /etc/bash_completion.d/rclone by default so will
+probably need to be run with sudo or as root, e.g.

-    source <(rclone completion bash)
+    sudo rclone genautocomplete bash

-To load completions for every new session, execute once:
+Logout and login again to use the autocompletion scripts, or source
+them directly

-### Linux:
+    . /etc/bash_completion

-    rclone completion bash > /etc/bash_completion.d/rclone
+If you supply a command line argument the script will be written
+there.

-### macOS:
-
-    rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
-
-You will need to start a new shell for this setup to take effect.
+If output_file is "-", then the output will be written to stdout.


 ```
-rclone completion bash
+rclone completion bash [output_file] [flags]
 ```

 ## Options

 ```
-  -h, --help              help for bash
-      --no-descriptions   disable completion descriptions
+  -h, --help   help for bash
 ```

 See the [global flags page](/flags/) for global options not listed here.

 ## SEE ALSO

-* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
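To illustrate the new command form documented above (the user-writable output path is an invented example):

```
# write the script to the default /etc/bash_completion.d/rclone
sudo rclone completion bash

# write it to a custom path instead
rclone completion bash ~/.local/share/bash-completion/completions/rclone

# or use "-" to write to stdout and source it in the current shell
source <(rclone completion bash -)
```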
@@ -1,43 +1,48 @@
 ---
 title: "rclone completion fish"
-description: "Generate the autocompletion script for fish"
+description: "Output fish completion script for rclone."
 slug: rclone_completion_fish
 url: /commands/rclone_completion_fish/
 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/fish/ and as part of making a release run "make commanddocs"
 ---
 # rclone completion fish

-Generate the autocompletion script for fish
+Output fish completion script for rclone.

 ## Synopsis

-Generate the autocompletion script for the fish shell.
-
-To load completions in your current shell session:
+Generates a fish autocompletion script for rclone.

-    rclone completion fish | source
+This writes to /etc/fish/completions/rclone.fish by default so will
+probably need to be run with sudo or as root, e.g.

-To load completions for every new session, execute once:
+    sudo rclone genautocomplete fish

-    rclone completion fish > ~/.config/fish/completions/rclone.fish
+Logout and login again to use the autocompletion scripts, or source
+them directly

-You will need to start a new shell for this setup to take effect.
+    . /etc/fish/completions/rclone.fish

+If you supply a command line argument the script will be written
+there.
+
+If output_file is "-", then the output will be written to stdout.


 ```
-rclone completion fish [flags]
+rclone completion fish [output_file] [flags]
 ```

 ## Options

 ```
-  -h, --help              help for fish
-      --no-descriptions   disable completion descriptions
+  -h, --help   help for fish
 ```

 See the [global flags page](/flags/) for global options not listed here.

 ## SEE ALSO

-* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
@@ -9,7 +9,7 @@ url: /commands/rclone_completion_powershell/

 Generate the autocompletion script for powershell

-## Synopsis
+# Synopsis

 Generate the autocompletion script for powershell.

@@ -25,7 +25,7 @@ to your powershell profile.
 rclone completion powershell [flags]
 ```

-## Options
+# Options

 ```
   -h, --help   help for powershell
@@ -34,7 +34,7 @@ rclone completion powershell [flags]

 See the [global flags page](/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

 * [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
@@ -1,54 +1,48 @@
 ---
 title: "rclone completion zsh"
-description: "Generate the autocompletion script for zsh"
+description: "Output zsh completion script for rclone."
 slug: rclone_completion_zsh
 url: /commands/rclone_completion_zsh/
 # autogenerated - DO NOT EDIT, instead edit the source code in cmd/completion/zsh/ and as part of making a release run "make commanddocs"
 ---
 # rclone completion zsh

-Generate the autocompletion script for zsh
+Output zsh completion script for rclone.

 ## Synopsis

-Generate the autocompletion script for the zsh shell.
-
-If shell completion is not already enabled in your environment you will need
-to enable it. You can execute the following once:
+Generates a zsh autocompletion script for rclone.

-    echo "autoload -U compinit; compinit" >> ~/.zshrc
+This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
+probably need to be run with sudo or as root, e.g.

-To load completions in your current shell session:
+    sudo rclone genautocomplete zsh

-    source <(rclone completion zsh); compdef _rclone rclone
+Logout and login again to use the autocompletion scripts, or source
+them directly

-To load completions for every new session, execute once:
+    autoload -U compinit && compinit

-### Linux:
+If you supply a command line argument the script will be written
+there.

-    rclone completion zsh > "${fpath[1]}/_rclone"
-
-### macOS:
-
-    rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
-
-You will need to start a new shell for this setup to take effect.
+If output_file is "-", then the output will be written to stdout.


 ```
-rclone completion zsh [flags]
+rclone completion zsh [output_file] [flags]
 ```

 ## Options

 ```
-  -h, --help              help for zsh
-      --no-descriptions   disable completion descriptions
+  -h, --help   help for zsh
 ```

 See the [global flags page](/flags/) for global options not listed here.

 ## SEE ALSO

-* [rclone completion](/commands/rclone_completion/) - Generate the autocompletion script for the specified shell
+* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
@@ -13,16 +13,16 @@ Cryptcheck checks the integrity of an encrypted remote.
 ## Synopsis


-rclone cryptcheck checks a remote against an [encrypted](/crypt/) remote.
+rclone cryptcheck checks a remote against a [crypted](/crypt/) remote.
 This is the equivalent of running rclone [check](/commands/rclone_check/),
 but able to check the checksums of the encrypted remote.

-For it to work the underlying remote of the encryptedremote must support
+For it to work the underlying remote of the cryptedremote must support
 some kind of checksum.

-It works by reading the nonce from each file on the encryptedremote: and
+It works by reading the nonce from each file on the cryptedremote: and
 using that to encrypt each file on the remote:. It then checks the
-checksum of the underlying file on the ercryptedremote: against the
+checksum of the underlying file on the cryptedremote: against the
 checksum of the file it has just encrypted.

 Use it like this
@@ -57,11 +57,12 @@ you what happened to it. These are reminiscent of diff files.
 - `* path` means path was present in source and destination but different.
 - `! path` means there was an error reading or hashing the source or dest.

-The default number of parallel checks is N=8. See the [--checkers=N](/docs/#checkers-n) option
-for more information.
+The default number of parallel checks is 8. See the [--checkers=N](/docs/#checkers-n)
+option for more information.


 ```
-rclone cryptcheck remote:path encryptedremote:path [flags]
+rclone cryptcheck remote:path cryptedremote:path [flags]
 ```

 ## Options
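A typical invocation matching the corrected usage line (remote names are placeholders):

```
# check the plaintext remote against its crypt-wrapped counterpart
rclone cryptcheck remote:path cryptedremote:path

# raise the number of parallel checks for large trees
rclone cryptcheck remote:path cryptedremote:path --checkers 16
```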
@@ -10,14 +10,14 @@ versionIntroduced: v1.33

 Output completion script for a given shell.

-## Synopsis
+# Synopsis


 Generates a shell completion script for rclone.
 Run with `--help` to list the supported shells.


-## Options
+# Options

 ```
   -h, --help   help for genautocomplete
@@ -25,7 +25,7 @@ Run with `--help` to list the supported shells.

 See the [global flags page](/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 * [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_bash/

 Output bash completion script for rclone.

-## Synopsis
+# Synopsis


 Generates a bash shell autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
 rclone genautocomplete bash [output_file] [flags]
 ```

-## Options
+# Options

 ```
   -h, --help   help for bash
@@ -42,7 +42,7 @@ rclone genautocomplete bash [output_file] [flags]

 See the [global flags page](/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

 * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_fish/

 Output fish completion script for rclone.

-## Synopsis
+# Synopsis


 Generates a fish autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
 rclone genautocomplete fish [output_file] [flags]
 ```

-## Options
+# Options

 ```
   -h, --help   help for fish
@@ -42,7 +42,7 @@ rclone genautocomplete fish [output_file] [flags]

 See the [global flags page](/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

 * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.

@@ -9,7 +9,7 @@ url: /commands/rclone_genautocomplete_zsh/

 Output zsh completion script for rclone.

-## Synopsis
+# Synopsis


 Generates a zsh autocompletion script for rclone.
@@ -34,7 +34,7 @@ If output_file is "-", then the output will be written to stdout.
 rclone genautocomplete zsh [output_file] [flags]
 ```

-## Options
+# Options

 ```
   -h, --help   help for zsh
@@ -42,7 +42,7 @@ rclone genautocomplete zsh [output_file] [flags]

 See the [global flags page](/flags/) for global options not listed here.

-## SEE ALSO
+# SEE ALSO

 * [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
@@ -1,6 +1,6 @@
 ---
 title: "rclone listremotes"
-description: "List all the remotes in the config file."
+description: "List all the remotes in the config file and defined in environment variables."
 slug: rclone_listremotes
 url: /commands/rclone_listremotes/
 versionIntroduced: v1.34
@@ -8,7 +8,7 @@ versionIntroduced: v1.34
 ---
 # rclone listremotes

-List all the remotes in the config file.
+List all the remotes in the config file and defined in environment variables.

 ## Synopsis
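The "defined in environment variables" part refers to remotes created entirely from the environment rather than from `rclone.conf`. A sketch (the remote name `envremote` is invented):

```
# a remote that exists only in the environment, no rclone.conf entry
export RCLONE_CONFIG_ENVREMOTE_TYPE=memory
rclone listremotes
# envremote: is now listed alongside configured remotes
```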
@@ -272,6 +272,17 @@ Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
 FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
 which "mounts" via an NFSv4 local server.

+### macFUSE Notes
+
+If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
+the website, rclone will locate the macFUSE libraries without any further intervention.
+If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager,
+the following addition steps are required.
+
+    sudo mkdir /usr/local/lib
+    cd /usr/local/lib
+    sudo ln -s /opt/local/lib/libfuse.2.dylib
+
 ### FUSE-T Limitations, Caveats, and Notes

 There are some limitations, caveats, and notes about how it works. These are current as
@@ -408,20 +419,19 @@ or create systemd mount units:
 ```
 # /etc/systemd/system/mnt-data.mount
 [Unit]
-After=network-online.target
 Description=Mount for /mnt/data
 [Mount]
 Type=rclone
 What=sftp1:subdir
 Where=/mnt/data
-Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
+Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
 ```

 optionally accompanied by systemd automount unit

 ```
 # /etc/systemd/system/mnt-data.automount
 [Unit]
-After=network-online.target
+Before=remote-fs.target
 Description=AutoMount for /mnt/data
 [Automount]
 Where=/mnt/data
 TimeoutIdleSec=600
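Assuming the example units above are installed as shown (the unit names follow from systemd's escaping of the /mnt/data path), they are managed like any other mount unit:

```
# reload unit files after editing them
sudo systemctl daemon-reload

# mount immediately and on every boot
sudo systemctl enable --now mnt-data.mount

# or mount lazily on first access via the automount unit
sudo systemctl enable --now mnt-data.automount
```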
@@ -534,7 +544,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -557,7 +567,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
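A sketch of how these eviction settings combine on a mount (the remote and mountpoint are placeholders):

```
# cap the cache at 10 GiB, evict files unused for 2 hours,
# and check for stale objects every 30 seconds
rclone mount remote: /path/to/mountpoint \
    --vfs-cache-mode full \
    --vfs-cache-max-size 10G \
    --vfs-cache-max-age 2h \
    --vfs-cache-poll-interval 30s
```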
@@ -802,6 +823,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help help for mount
      --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+     --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
      --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
      --no-checksum Don't compare checksums on up/download
      --no-modtime Don't read/write the modification time (can speed things up)
@@ -813,7 +835,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
      --read-only Only allow read-only access
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -38,10 +38,11 @@ and actually stream it, even if remote backend doesn't support streaming.
 size of the stream is different in length to the `--size` passed in
 then the transfer will likely fail.

-Note that the upload can also not be retried because the data is
-not kept around until the upload succeeds. If you need to transfer
-a lot of data, you're better off caching locally and then
-`rclone move` it to the destination.
+Note that the upload cannot be retried because the data is not stored.
+If the backend supports multipart uploading then individual chunks can
+be retried. If you need to transfer a lot of data, you may be better
+off caching it locally and then `rclone move` it to the
+destination which can use retries.

 ```
 rclone rcat remote:path [flags]
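The two approaches read roughly like this (file and remote names are placeholders):

```
# stream directly - the upload cannot be retried if it fails
tar czf - /home/user | rclone rcat remote:backup/home.tgz

# stage locally first, then move with full retry support
tar czf /tmp/home.tgz /home/user
rclone move /tmp/home.tgz remote:backup/
```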
@@ -25,54 +25,54 @@ See the [rc documentation](/rc/) for more info on the rc flags.

 ## Server options

-Use `--addr` to specify which IP address and port the server should
-listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
+Use `--rc-addr` to specify which IP address and port the server should
+listen on, eg `--rc-addr 1.2.3.4:8000` or `--rc-addr :8080` to listen to all
 IPs. By default it only listens on localhost. You can use port
 :0 to let the OS choose an available port.

-If you set `--addr` to listen on a public or LAN accessible IP address
+If you set `--rc-addr` to listen on a public or LAN accessible IP address
 then using Authentication is advised - see the next section for info.

 You can use a unix socket by setting the url to `unix:///path/to/socket`
 or just by using an absolute path name. Note that unix sockets bypass the
 authentication - this is expected to be done with file system permissions.

-`--addr` may be repeated to listen on multiple IPs/ports/sockets.
+`--rc-addr` may be repeated to listen on multiple IPs/ports/sockets.

-`--server-read-timeout` and `--server-write-timeout` can be used to
+`--rc-server-read-timeout` and `--rc-server-write-timeout` can be used to
 control the timeouts on the server. Note that this is the total time
 for a transfer.

-`--max-header-bytes` controls the maximum number of bytes the server will
+`--rc-max-header-bytes` controls the maximum number of bytes the server will
 accept in the HTTP header.

-`--baseurl` controls the URL prefix that rclone serves from. By default
-rclone will serve from the root. If you used `--baseurl "/rclone"` then
+`--rc-baseurl` controls the URL prefix that rclone serves from. By default
+rclone will serve from the root. If you used `--rc-baseurl "/rclone"` then
 rclone would serve from a URL starting with "/rclone/". This is
 useful if you wish to proxy rclone serve. Rclone automatically
-inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
-`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
+inserts leading and trailing "/" on `--rc-baseurl`, so `--rc-baseurl "rclone"`,
+`--rc-baseurl "/rclone"` and `--rc-baseurl "/rclone/"` are all treated
 identically.

 ### TLS (SSL)

 By default this will serve over http. If you want you can serve over
-https. You will need to supply the `--cert` and `--key` flags.
+https. You will need to supply the `--rc-cert` and `--rc-key` flags.
 If you wish to do client side certificate validation then you will need to
-supply `--client-ca` also.
+supply `--rc-client-ca` also.

-`--cert` should be a either a PEM encoded certificate or a concatenation
-of that with the CA certificate. `--key` should be the PEM encoded
-private key and `--client-ca` should be the PEM encoded client
+`--rc-cert` should be a either a PEM encoded certificate or a concatenation
+of that with the CA certificate. `--krc-ey` should be the PEM encoded
+private key and `--rc-client-ca` should be the PEM encoded client
 certificate authority certificate.

---min-tls-version is minimum TLS version that is acceptable. Valid
+--rc-min-tls-version is minimum TLS version that is acceptable. Valid
 values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
 "tls1.0").

 ### Template

-`--template` allows a user to specify a custom markup template for HTTP
+`--rc-template` allows a user to specify a custom markup template for HTTP
 and WebDAV serve functions. The server exports the following markup
 to be used within the template to server pages:
@@ -100,9 +100,13 @@ to be used within the template to server pages:
 By default this will serve files without needing a login.

 You can either use an htpasswd file which can take lots of users, or
-set a single username and password with the `--user` and `--pass` flags.
+set a single username and password with the `--rc-user` and `--rc-pass` flags.

-Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
+If no static users are configured by either of the above methods, and client
+certificates are required by the `--client-ca` flag passed to the server, the
+client certificate common name will be considered as the username.
+
+Use `--rc-htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
 in standard apache format and supports MD5, SHA1 and BCrypt for basic
 authentication. Bcrypt is recommended.

@@ -114,9 +118,9 @@ To create an htpasswd file:

 The password file can be updated while rclone is running.

-Use `--realm` to set the authentication realm.
+Use `--rc-realm` to set the authentication realm.

-Use `--salt` to change the password hashing salt from the default.
+Use `--rc-salt` to change the password hashing salt from the default.


 ```
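Pulling the renamed `--rc-*` flags together, a minimal sketch of a TLS-protected rc server with one static user (certificate paths and credentials are placeholders; the private key flag is spelled `--rc-key`):

```
rclone rcd \
    --rc-addr :8080 \
    --rc-cert /etc/rclone/tls.crt \
    --rc-key /etc/rclone/tls.key \
    --rc-user admin --rc-pass secret \
    --rc-baseurl /rclone
```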
@@ -112,7 +112,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -135,7 +135,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -379,7 +390,7 @@ rclone serve dlna remote:path [flags]
      --read-only Only allow read-only access
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -128,7 +128,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -151,7 +151,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -398,6 +409,7 @@ rclone serve docker [flags]
      --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help help for docker
      --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+     --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
      --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
      --no-checksum Don't compare checksums on up/download
      --no-modtime Don't read/write the modification time (can speed things up)
@@ -412,7 +424,7 @@ rclone serve docker [flags]
      --socket-gid int GID for unix socket (default: current process GID) (default 1000)
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -109,7 +109,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -132,7 +132,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -460,7 +471,7 @@ rclone serve ftp remote:path [flags]
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string User name for authentication (default "anonymous")
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -103,6 +103,10 @@ By default this will serve files without needing a login.
 You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--user` and `--pass` flags.

+If no static users are configured by either of the above methods, and client
+certificates are required by the `--client-ca` flag passed to the server, the
+client certificate common name will be considered as the username.
+
 Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
 in standard apache format and supports MD5, SHA1 and BCrypt for basic
 authentication. Bcrypt is recommended.
@@ -195,7 +199,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -218,7 +222,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -554,7 +569,7 @@ rclone serve http remote:path [flags]
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string User name for authentication
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -148,6 +148,10 @@ By default this will serve files without needing a login.
 You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--user` and `--pass` flags.

+If no static users are configured by either of the above methods, and client
+certificates are required by the `--client-ca` flag passed to the server, the
+client certificate common name will be considered as the username.
+
 Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
 in standard apache format and supports MD5, SHA1 and BCrypt for basic
 authentication. Bcrypt is recommended.
@@ -141,7 +141,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -164,7 +164,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -492,7 +503,7 @@ rclone serve sftp remote:path [flags]
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string User name for authentication
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -132,6 +132,10 @@ By default this will serve files without needing a login.
 You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--user` and `--pass` flags.

+If no static users are configured by either of the above methods, and client
+certificates are required by the `--client-ca` flag passed to the server, the
+client certificate common name will be considered as the username.
+
 Use `--htpasswd /path/to/htpasswd` to provide an htpasswd file. This is
 in standard apache format and supports MD5, SHA1 and BCrypt for basic
 authentication. Bcrypt is recommended.
@@ -224,7 +228,7 @@ find that you need one or the other or both.

      --cache-dir string Directory rclone will use for caching.
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
-     --vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
      --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
@@ -247,7 +251,18 @@ flags.
 If using `--vfs-cache-max-size` note that the cache may exceed this size
 for two reasons. Firstly because it is only checked every
 `--vfs-cache-poll-interval`. Secondly because open files cannot be
-evicted from the cache.
+evicted from the cache. When `--vfs-cache-max-size`
+is exceeded, rclone will attempt to evict the least accessed files
+from the cache first. rclone will start with files that haven't
+been accessed for the longest. This cache flushing strategy is
+efficient and more relevant files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w .

 You **should not** run two copies of rclone using the same VFS cache
 with the same or overlapping remotes if using `--vfs-cache-mode > off`.
@@ -585,7 +600,7 @@ rclone serve webdav remote:path [flags]
      --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
      --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
      --user string User name for authentication
-     --vfs-cache-max-age Duration Max age of objects in the cache (default 1h0m0s)
+     --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
      --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
      --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
      --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
@@ -26,7 +26,7 @@ recursion.

 Some backends do not always provide file sizes, see for example
 [Google Photos](/googlephotos/#size) and
-[Google Drive](/drive/#limitations-of-google-docs).
+[Google Docs](/drive/#limitations-of-google-docs).
 Rclone will then show a notice in the log indicating how many such
 files were encountered, and count them in as empty files in the output
 of the size command.
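For example (the path is a placeholder):

```
# human readable totals; size-less files are counted as empty
rclone size remote:path

# machine readable output for scripting
rclone size remote:path --json
```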